diff --git a/samtranslator/schema/schema.json b/samtranslator/schema/schema.json index 4e0bcc39d..d8565c159 100644 --- a/samtranslator/schema/schema.json +++ b/samtranslator/schema/schema.json @@ -18100,7 +18100,7 @@ "type": "string" }, "InstanceType": { - "markdownDescription": "The instance type to use when launching fleet instances. The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- 
stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", + "markdownDescription": "The instance type to use when launching fleet instances. The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", "title": "InstanceType", "type": "string" }, @@ -18135,7 +18135,7 @@ "title": "SessionScriptS3Location" }, 
"StreamView": { - "markdownDescription": "The AppStream 2.0 view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", + "markdownDescription": "The WorkSpaces Applications view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", "title": "StreamView", "type": "string" }, @@ -18306,7 +18306,7 @@ "type": "array" }, "AppstreamAgentVersion": { - "markdownDescription": "The version of the AppStream 2.0 agent to use for this image builder. To use the latest version of the AppStream 2.0 agent, specify [LATEST].", + "markdownDescription": "The version of the WorkSpaces Applications agent to use for this image builder. To use the latest version of the WorkSpaces Applications agent, specify [LATEST].", "title": "AppstreamAgentVersion", "type": "string" }, @@ -18346,7 +18346,7 @@ "type": "string" }, "InstanceType": { - "markdownDescription": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", + "markdownDescription": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", "title": "InstanceType", "type": "string" }, @@ -18493,7 +18493,7 @@ "items": { "$ref": "#/definitions/AWS::AppStream::Stack.AccessEndpoint" }, - "markdownDescription": "The list of virtual private cloud (VPC) interface endpoint objects. Users of the stack can connect to AppStream 2.0 only through the specified endpoints.", + "markdownDescription": "The list of virtual private cloud (VPC) interface endpoint objects. 
Users of the stack can connect to WorkSpaces Applications only through the specified endpoints.", "title": "AccessEndpoints", "type": "array" }, @@ -18529,7 +18529,7 @@ "items": { "type": "string" }, - "markdownDescription": "The domains where AppStream 2.0 streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded AppStream 2.0 streaming sessions.", + "markdownDescription": "The domains where WorkSpaces Applications streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded WorkSpaces Applications streaming sessions.", "title": "EmbedHostDomains", "type": "array" }, @@ -26814,7 +26814,7 @@ "type": "string" }, "RestoreTestingSelectionName": { - "markdownDescription": "The unique name of the restore testing selection that belongs to the related restore testing plan.", + "markdownDescription": "The unique name of the restore testing selection that belongs to the related restore testing plan.\n\nThe name consists of only alphanumeric characters and underscores. Maximum length is 50.", "title": "RestoreTestingSelectionName", "type": "string" }, @@ -33169,7 +33169,7 @@ "items": { "type": "string" }, - "markdownDescription": "The columns within the underlying AWS Glue table that can be utilized within collaborations.", + "markdownDescription": "The columns within the underlying AWS Glue table that can be used within collaborations.", "title": "AllowedColumns", "type": "array" }, @@ -48568,7 +48568,7 @@ "title": "RecordingMode" }, "RoleARN": { - "markdownDescription": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. 
For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as AWS Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* .", + "markdownDescription": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. 
You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* .", "title": "RoleARN", "type": "string" } @@ -83126,17 +83126,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -83322,17 +83322,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -83646,17 +83646,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -84237,7 +84237,7 @@ "type": "string" }, "PidMode": { - "markdownDescription": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers. 
> This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.", + "markdownDescription": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers.\n> This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.", "title": "PidMode", "type": "string" }, @@ -84264,7 +84264,7 @@ }, "RuntimePlatform": { "$ref": "#/definitions/AWS::ECS::TaskDefinition.RuntimePlatform", - "markdownDescription": "The operating system that your tasks definitions run on. A platform family is specified only for tasks using the Fargate launch type.", + "markdownDescription": "The operating system that your task definitions run on.", "title": "RuntimePlatform" }, "Tags": { @@ -84339,7 +84339,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the docker container create commandand the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. 
For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "markdownDescription": "The number of `cpu` units reserved for the container. This parameter maps to `CpuShares` in the docker container create command and the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. 
However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "title": "Cpu", "type": "number" }, @@ -85118,7 +85118,7 @@ "additionalProperties": false, "properties": { "CpuArchitecture": { - "markdownDescription": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . 
This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate.", + "markdownDescription": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . This option is available for tasks that run on Linux Amazon EC2 instances, Amazon ECS Managed Instances, or Linux containers on Fargate.", "title": "CpuArchitecture", "type": "string" }, @@ -134958,7 +134958,7 @@ }, "WorkDocsConfiguration": { "$ref": "#/definitions/AWS::Kendra::DataSource.WorkDocsConfiguration", - "markdownDescription": "Provides the configuration information to connect to Amazon WorkDocs as your data source.", + "markdownDescription": "Provides the configuration information to connect to WorkDocs as your data source.", "title": "WorkDocsConfiguration" } }, @@ -136064,7 +136064,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of regular expression patterns to exclude certain files in your Amazon WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "markdownDescription": "A list of regular expression patterns to exclude certain files in your WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. 
If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", "title": "ExclusionPatterns", "type": "array" }, @@ -136072,7 +136072,7 @@ "items": { "$ref": "#/definitions/AWS::Kendra::DataSource.DataSourceToIndexFieldMapping" }, - "markdownDescription": "A list of `DataSourceToIndexFieldMapping` objects that map Amazon WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to Amazon WorkDocs fields. For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . The Amazon WorkDocs data source field names must exist in your Amazon WorkDocs custom metadata.", + "markdownDescription": "A list of `DataSourceToIndexFieldMapping` objects that map WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to WorkDocs fields. For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . The WorkDocs data source field names must exist in your WorkDocs custom metadata.", "title": "FieldMappings", "type": "array" }, @@ -136080,17 +136080,17 @@ "items": { "type": "string" }, - "markdownDescription": "A list of regular expression patterns to include certain files in your Amazon WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "markdownDescription": "A list of regular expression patterns to include certain files in your WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. 
If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", "title": "InclusionPatterns", "type": "array" }, "OrganizationId": { - "markdownDescription": "The identifier of the directory corresponding to your Amazon WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your Amazon WorkDocs site directory has an ID, which is the organization ID. You can also set up a new Amazon WorkDocs directory in the AWS Directory Service console and enable a Amazon WorkDocs site for the directory in the Amazon WorkDocs console.", + "markdownDescription": "The identifier of the directory corresponding to your WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your WorkDocs site directory has an ID, which is the organization ID. You can also set up a new WorkDocs directory in the AWS Directory Service console and enable a WorkDocs site for the directory in the WorkDocs console.", "title": "OrganizationId", "type": "string" }, "UseChangeLog": { - "markdownDescription": "`TRUE` to use the Amazon WorkDocs change log to determine which documents require updating in the index. Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in Amazon WorkDocs.", + "markdownDescription": "`TRUE` to use the WorkDocs change log to determine which documents require updating in the index. 
Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in WorkDocs.", "title": "UseChangeLog", "type": "boolean" } @@ -153874,7 +153874,7 @@ "additionalProperties": false, "properties": { "FindingPublishingFrequency": { - "markdownDescription": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to AWS Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", + "markdownDescription": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", "title": "FindingPublishingFrequency", "type": "string" }, @@ -192855,7 +192855,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -205399,7 +205399,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. 
This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -207926,13 +207926,11 @@ }, "LogicalTableMap": { "additionalProperties": false, - "markdownDescription": "Configures the combination and transformation of the data from the physical tables.", "patternProperties": { "^[a-zA-Z0-9]+$": { "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTable" } }, - "title": "LogicalTableMap", "type": "object" }, "Name": { @@ -207960,14 +207958,10 @@ "type": "object" }, "RowLevelPermissionDataSet": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionDataSet", - "markdownDescription": "The row-level security configuration for the data that you want to create.", - "title": "RowLevelPermissionDataSet" + "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionDataSet" }, "RowLevelPermissionTagConfiguration": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionTagConfiguration", - "markdownDescription": "The element you can use to define tags for row-level security.", - "title": "RowLevelPermissionTagConfiguration" + "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionTagConfiguration" }, "Tags": { "items": { @@ -208531,22 +208525,16 @@ "additionalProperties": false, "properties": { "Alias": { - "markdownDescription": "A display name for the logical table.", - "title": "Alias", "type": "string" }, "DataTransforms": { "items": { "$ref": "#/definitions/AWS::QuickSight::DataSet.TransformOperation" }, - "markdownDescription": "Transform operations that act on this logical table. 
For this structure to be valid, only one of the attributes can be non-null.", - "title": "DataTransforms", "type": "array" }, "Source": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTableSource", - "markdownDescription": "Source of this logical table.", - "title": "Source" + "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTableSource" } }, "required": [ @@ -219779,7 +219767,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -252117,7 +252105,7 @@ "type": "string" }, "PlatformIdentifier": { - "markdownDescription": "The platform identifier of the notebook instance runtime environment.", + "markdownDescription": "The platform identifier of the notebook instance runtime environment. The default value is `notebook-al2-v2` .", "title": "PlatformIdentifier", "type": "string" }, @@ -254863,7 +254851,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.NumberFilter" }, - "markdownDescription": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Confidence", "type": "array" }, @@ -254871,7 +254859,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "CreatedAt", "type": "array" }, @@ -254879,7 +254867,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.NumberFilter" }, - "markdownDescription": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. 
A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Criticality", "type": "array" }, @@ -254895,7 +254883,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "FirstObservedAt", "type": "array" }, @@ -254919,7 +254907,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "LastObservedAt", "type": "array" }, @@ -254935,7 +254923,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "NoteUpdatedAt", "type": "array" }, @@ -255063,7 +255051,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.StringFilter" }, - "markdownDescription": "One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Type", "type": "array" }, @@ -255071,7 +255059,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "UpdatedAt", "type": "array" }, @@ -255111,12 +255099,12 @@ "title": "DateRange" }, "End": { - "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "End", "type": "string" }, "Start": { - "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "Start", "type": "string" } @@ -255147,7 +255135,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have 
the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. 
`NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . 
For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -255255,7 +255243,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . 
For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . 
For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. 
For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -255594,7 +255582,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "CreatedAt", "type": "array" }, @@ -255674,7 +255662,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "FirstObservedAt", "type": "array" }, @@ -255698,7 +255686,7 @@ "items": { "$ref": 
"#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "LastObservedAt", "type": "array" }, @@ -255850,7 +255838,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ProcessLaunchedAt", "type": "array" }, @@ -255890,7 +255878,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A 
timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ProcessTerminatedAt", "type": "array" }, @@ -256114,7 +256102,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ResourceContainerLaunchedAt", "type": "array" }, @@ -256210,7 +256198,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ThreatIntelIndicatorLastObservedAt", "type": "array" }, @@ -256266,7 +256254,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates 
when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "UpdatedAt", "type": "array" }, @@ -256344,12 +256332,12 @@ "title": "DateRange" }, "End": { - "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "End", "type": "string" }, "Start": { - "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "Start", "type": "string" } @@ -256394,7 +256382,7 @@ "additionalProperties": false, 
"properties": { "Comparison": { - "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. 
For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. 
For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -256441,7 +256429,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . 
For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. 
`CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . 
For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. 
`CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -271961,12 +271949,12 @@ "type": "string" }, "DesktopArn": { - "markdownDescription": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or AppStream 2.0.", + "markdownDescription": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or WorkSpaces Applications.", "title": "DesktopArn", "type": "string" }, "DesktopEndpoint": { - "markdownDescription": "The URL for the identity provider login (only for environments that use AppStream 2.0).", + "markdownDescription": "The URL for the identity provider login (only for environments that use WorkSpaces Applications).", "title": "DesktopEndpoint", "type": "string" }, diff --git a/schema_source/cloudformation-docs.json b/schema_source/cloudformation-docs.json index 21b8388d6..5c628d324 100644 --- a/schema_source/cloudformation-docs.json +++ b/schema_source/cloudformation-docs.json @@ -232,39 +232,39 @@ "Value": "A list of key-value pairs to associate with the investigation group. You can associate as many as 50 tags with an investigation group. To be able to associate tags when you create the investigation group, you must have the `cloudwatch:TagResource` permission.\n\nTags can help you organize and categorize your resources. You can also use them to scope user permissions by granting a user permission to access or change only resources with certain tag values." 
}, "AWS::APS::AnomalyDetector": { - "Alias": "", - "Configuration": "", - "EvaluationIntervalInSeconds": "", - "Labels": "", - "MissingDataAction": "", - "Tags": "", + "Alias": "The user-friendly name of the anomaly detector.", + "Configuration": "The algorithm configuration of the anomaly detector.", + "EvaluationIntervalInSeconds": "The frequency, in seconds, at which the anomaly detector evaluates metrics.", + "Labels": "The Amazon Managed Service for Prometheus metric labels associated with the anomaly detector.", + "MissingDataAction": "The action taken when data is missing during evaluation.", + "Tags": "The tags applied to the anomaly detector.", "Workspace": "An Amazon Managed Service for Prometheus workspace is a logical and isolated Prometheus server dedicated to ingesting, storing, and querying your Prometheus-compatible metrics." }, "AWS::APS::AnomalyDetector AnomalyDetectorConfiguration": { - "RandomCutForest": "" + "RandomCutForest": "The Random Cut Forest algorithm configuration for anomaly detection." }, "AWS::APS::AnomalyDetector IgnoreNearExpected": { - "Amount": "", - "Ratio": "" + "Amount": "The absolute amount by which values can differ from expected values before being considered anomalous.", + "Ratio": "The ratio by which values can differ from expected values before being considered anomalous." }, "AWS::APS::AnomalyDetector Label": { - "Key": "", - "Value": "" + "Key": "The key of the label.", + "Value": "The value for this label." }, "AWS::APS::AnomalyDetector MissingDataAction": { - "MarkAsAnomaly": "", - "Skip": "" + "MarkAsAnomaly": "Marks missing data points as anomalies.", + "Skip": "Skips evaluation when data is missing." 
}, "AWS::APS::AnomalyDetector RandomCutForestConfiguration": { - "IgnoreNearExpectedFromAbove": "", - "IgnoreNearExpectedFromBelow": "", - "Query": "", - "SampleSize": "", - "ShingleSize": "" + "IgnoreNearExpectedFromAbove": "Configuration for ignoring values that are near expected values from above during anomaly detection.", + "IgnoreNearExpectedFromBelow": "Configuration for ignoring values that are near expected values from below during anomaly detection.", + "Query": "The Prometheus query used to retrieve the time-series data for anomaly detection.\n\n> Random Cut Forest queries must be wrapped by a supported PromQL aggregation operator. For more information, see [Aggregation operators](https://docs.aws.amazon.com/https://prometheus.io/docs/prometheus/latest/querying/operators/#aggregation-operators) on the *Prometheus docs* website.\n> \n> *Supported PromQL aggregation operators* : `avg` , `count` , `group` , `max` , `min` , `quantile` , `stddev` , `stdvar` , and `sum` .", + "SampleSize": "The number of data points sampled from the input stream for the Random Cut Forest algorithm. The default number is 256 consecutive data points.", + "ShingleSize": "The number of consecutive data points used to create a shingle for the Random Cut Forest algorithm. The default number is 8 consecutive data points." }, "AWS::APS::AnomalyDetector Tag": { - "Key": "", - "Value": "" + "Key": "The key of the tag. Must not begin with `aws:` .", + "Value": "The value of the tag." }, "AWS::APS::ResourcePolicy": { "PolicyDocument": "The JSON to use as the Resource-based Policy.", @@ -477,10 +477,6 @@ "AWS::ARCRegionSwitch::Plan GlobalAuroraUngraceful": { "Ungraceful": "The settings for ungraceful execution." }, - "AWS::ARCRegionSwitch::Plan HealthCheckState": { - "HealthCheckId": "", - "Region": "" - }, "AWS::ARCRegionSwitch::Plan KubernetesResourceType": { "ApiVersion": "The API version type for the Kubernetes resource.", "Kind": "The kind for the Kubernetes resource." 
@@ -3312,14 +3308,14 @@ "IdleDisconnectTimeoutInSeconds": "The amount of time that users can be idle (inactive) before they are disconnected from their streaming session and the `DisconnectTimeoutInSeconds` time interval begins. Users are notified before they are disconnected due to inactivity. If they try to reconnect to the streaming session before the time interval specified in `DisconnectTimeoutInSeconds` elapses, they are connected to their previous session. Users are considered idle when they stop providing keyboard or mouse input during their streaming session. File uploads and downloads, audio in, audio out, and pixels changing do not qualify as user activity. If users continue to be idle after the time interval in `IdleDisconnectTimeoutInSeconds` elapses, they are disconnected.\n\nTo prevent users from being disconnected due to inactivity, specify a value of 0. Otherwise, specify a value between 60 and 36000.\n\nIf you enable this feature, we recommend that you specify a value that corresponds exactly to a whole number of minutes (for example, 60, 120, and 180). If you don't do this, the value is rounded to the nearest minute. For example, if you specify a value of 70, users are disconnected after 1 minute of inactivity. If you specify a value that is at the midpoint between two different minutes, the value is rounded up. For example, if you specify a value of 90, users are disconnected after 2 minutes of inactivity.", "ImageArn": "The ARN of the public, private, or shared image to use.", "ImageName": "The name of the image used to create the fleet.", - "InstanceType": "The instance type to use when launching fleet instances. 
The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", + "InstanceType": "The instance type to use when launching fleet instances. 
The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", "MaxConcurrentSessions": "The maximum number of concurrent sessions that can be run on an Elastic fleet. This setting is required for Elastic fleets, but is not used for other fleet types.", "MaxSessionsPerInstance": "Max number of user sessions on an instance. 
This is applicable only for multi-session fleets.", "MaxUserDurationInSeconds": "The maximum amount of time that a streaming session can remain active, in seconds. If users are still connected to a streaming instance five minutes before this limit is reached, they are prompted to save any open documents before being disconnected. After this time elapses, the instance is terminated and replaced by a new instance.\n\nSpecify a value between 600 and 432000.", "Name": "A unique name for the fleet.", "Platform": "The platform of the fleet. Platform is a required setting for Elastic fleets, and is not used for other fleet types.", "SessionScriptS3Location": "The S3 location of the session scripts configuration zip file. This only applies to Elastic fleets.", - "StreamView": "The AppStream 2.0 view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", + "StreamView": "The WorkSpaces Applications view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", "Tags": "An array of key-value pairs.", "UsbDeviceFilterStrings": "The USB device filter strings that specify which USB devices a user can redirect to the fleet streaming session, when using the Windows native client. This is allowed but not required for Elastic fleets.", "VpcConfig": "The VPC configuration for the fleet. This is required for Elastic fleets, but not required for other fleet types." @@ -3346,7 +3342,7 @@ }, "AWS::AppStream::ImageBuilder": { "AccessEndpoints": "The list of virtual private cloud (VPC) interface endpoint objects. 
Administrators can connect to the image builder only through the specified endpoints.", - "AppstreamAgentVersion": "The version of the AppStream 2.0 agent to use for this image builder. To use the latest version of the AppStream 2.0 agent, specify [LATEST].", + "AppstreamAgentVersion": "The version of the WorkSpaces Applications agent to use for this image builder. To use the latest version of the WorkSpaces Applications agent, specify [LATEST].", "Description": "The description to display.", "DisplayName": "The image builder name to display.", "DomainJoinInfo": "The name of the directory and organizational unit (OU) to use to join the image builder to a Microsoft Active Directory domain.", @@ -3354,7 +3350,7 @@ "IamRoleArn": "The ARN of the IAM role that is applied to the image builder. To assume a role, the image builder calls the AWS Security Token Service `AssumeRole` API operation and passes the ARN of the role to use. The operation creates a new session with temporary credentials. AppStream 2.0 retrieves the temporary credentials and creates the *appstream_machine_role* credential profile on the instance.\n\nFor more information, see [Using an IAM Role to Grant Permissions to Applications and Scripts Running on AppStream 2.0 Streaming Instances](https://docs.aws.amazon.com/appstream2/latest/developerguide/using-iam-roles-to-grant-permissions-to-applications-scripts-streaming-instances.html) in the *Amazon AppStream 2.0 Administration Guide* .", "ImageArn": "The ARN of the public, private, or shared image to use.", "ImageName": "The name of the image used to create the image builder.", - "InstanceType": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", + "InstanceType": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", "Name": "A unique name for the image builder.", "Tags": "An array of key-value pairs.", "VpcConfig": "The VPC configuration for the image builder. You can specify only one subnet." @@ -3376,13 +3372,13 @@ "SubnetIds": "The identifier of the subnet to which a network interface is attached from the image builder instance. An image builder instance can use one subnet." }, "AWS::AppStream::Stack": { - "AccessEndpoints": "The list of virtual private cloud (VPC) interface endpoint objects. 
Users of the stack can connect to AppStream 2.0 only through the specified endpoints.", + "AccessEndpoints": "The list of virtual private cloud (VPC) interface endpoint objects. Users of the stack can connect to WorkSpaces Applications only through the specified endpoints.", "ApplicationSettings": "The persistent application settings for users of the stack. When these settings are enabled, changes that users make to applications and Windows settings are automatically saved after each session and applied to the next session.", "AttributesToDelete": "The stack attributes to delete.", "DeleteStorageConnectors": "*This parameter has been deprecated.*\n\nDeletes the storage connectors currently enabled for the stack.", "Description": "The description to display.", "DisplayName": "The stack name to display.", - "EmbedHostDomains": "The domains where AppStream 2.0 streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded AppStream 2.0 streaming sessions.", + "EmbedHostDomains": "The domains where WorkSpaces Applications streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded WorkSpaces Applications streaming sessions.", "FeedbackURL": "The URL that users are redirected to after they click the Send Feedback link. If no URL is specified, no Send Feedback link is displayed.", "Name": "The name of the stack.", "RedirectURL": "The URL that users are redirected to after their streaming session ends.", @@ -5202,7 +5198,7 @@ "ProtectedResourceType": "The type of AWS resource included in a resource testing selection; for example, an Amazon EBS volume or an Amazon RDS database.", "RestoreMetadataOverrides": "You can override certain restore metadata keys by including the parameter `RestoreMetadataOverrides` in the body of `RestoreTestingSelection` . 
Key values are not case sensitive.\n\nSee the complete list of [restore testing inferred metadata](https://docs.aws.amazon.com/aws-backup/latest/devguide/restore-testing-inferred-metadata.html) .", "RestoreTestingPlanName": "Unique string that is the name of the restore testing plan.\n\nThe name cannot be changed after creation. The name must consist of only alphanumeric characters and underscores. Maximum length is 50.", - "RestoreTestingSelectionName": "The unique name of the restore testing selection that belongs to the related restore testing plan.", + "RestoreTestingSelectionName": "The unique name of the restore testing selection that belongs to the related restore testing plan.\n\nThe name consists of only alphanumeric characters and underscores. Maximum length is 50.", "ValidationWindowHours": "This is amount of hours (1 to 168) available to run a validation script on the data. The data will be deleted upon the completion of the validation script or the end of the specified retention period, whichever comes first." }, "AWS::Backup::RestoreTestingSelection KeyValue": { @@ -5302,6 +5298,7 @@ "Parameters": "Default parameters or parameter substitution placeholders that are set in the job definition. Parameters are specified as a key-value pair mapping. Parameters in a `SubmitJob` request override any corresponding parameter defaults from the job definition. For more information about specifying parameters, see [Job definition parameters](https://docs.aws.amazon.com/batch/latest/userguide/job_definition_parameters.html) in the *AWS Batch User Guide* .", "PlatformCapabilities": "The platform capabilities required by the job definition. If no value is specified, it defaults to `EC2` . Jobs run on Fargate resources specify `FARGATE` .", "PropagateTags": "Specifies whether to propagate the tags from the job or job definition to the corresponding Amazon ECS task. If no value is specified, the tags aren't propagated. 
Tags can only be propagated to the tasks when the tasks are created. For tags with the same name, job tags are given priority over job definitions tags. If the total number of combined tags from the job and job definition is over 50, the job is moved to the `FAILED` state.", + "ResourceRetentionPolicy": "Specifies the resource retention policy settings for the job definition.", "RetryStrategy": "The retry strategy to use for failed jobs that are submitted with this job definition.", "SchedulingPriority": "The scheduling priority of the job definition. This only affects jobs in job queues with a fair-share policy. Jobs with a higher scheduling priority are scheduled before jobs with a lower scheduling priority.", "Tags": "The tags that are applied to the job definition.", @@ -5552,6 +5549,9 @@ "Type": "The type of resource to assign to a container. The supported resources include `GPU` , `MEMORY` , and `VCPU` .", "Value": "The quantity of the specified resource to reserve for the container. The values vary based on the `type` specified.\n\n- **type=\"GPU\"** - The number of physical GPUs to reserve for the container. Make sure that the number of GPUs reserved for all containers in a job doesn't exceed the number of available GPUs on the compute resource that the job is launched on.\n\n> GPUs aren't available for jobs that are running on Fargate resources.\n- **type=\"MEMORY\"** - The memory hard limit (in MiB) present to the container. This parameter is supported for jobs that are running on Amazon EC2 resources. If your container attempts to exceed the memory specified, the container is terminated. 
This parameter maps to `Memory` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . You must specify at least 4 MiB of memory for a job. This is required but can be specified in several places for multi-node parallel (MNP) jobs. It must be specified for each node at least once. This parameter maps to `Memory` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--memory` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) .\n\n> If you're trying to maximize your resource utilization by providing your jobs as much memory as possible for a particular instance type, see [Memory management](https://docs.aws.amazon.com/batch/latest/userguide/memory-management.html) in the *AWS Batch User Guide* . 
\n\nFor jobs that are running on Fargate resources, then `value` is the hard limit (in MiB), and must match one of the supported values and the `VCPU` values must be one of the values supported for that memory value.\n\n- **value = 512** - `VCPU` = 0.25\n- **value = 1024** - `VCPU` = 0.25 or 0.5\n- **value = 2048** - `VCPU` = 0.25, 0.5, or 1\n- **value = 3072** - `VCPU` = 0.5, or 1\n- **value = 4096** - `VCPU` = 0.5, 1, or 2\n- **value = 5120, 6144, or 7168** - `VCPU` = 1 or 2\n- **value = 8192** - `VCPU` = 1, 2, or 4\n- **value = 9216, 10240, 11264, 12288, 13312, 14336, or 15360** - `VCPU` = 2 or 4\n- **value = 16384** - `VCPU` = 2, 4, or 8\n- **value = 17408, 18432, 19456, 21504, 22528, 23552, 25600, 26624, 27648, 29696, or 30720** - `VCPU` = 4\n- **value = 20480, 24576, or 28672** - `VCPU` = 4 or 8\n- **value = 36864, 45056, 53248, or 61440** - `VCPU` = 8\n- **value = 32768, 40960, 49152, or 57344** - `VCPU` = 8 or 16\n- **value = 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880** - `VCPU` = 16\n- **type=\"VCPU\"** - The number of vCPUs reserved for the container. This parameter maps to `CpuShares` in the [Create a container](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/#create-a-container) section of the [Docker Remote API](https://docs.aws.amazon.com/https://docs.docker.com/engine/api/v1.23/) and the `--cpu-shares` option to [docker run](https://docs.aws.amazon.com/https://docs.docker.com/engine/reference/run/) . Each vCPU is equivalent to 1,024 CPU shares. For Amazon EC2 resources, you must specify at least one vCPU. This is required but can be specified in several places; it must be specified for each node at least once.\n\nThe default for the Fargate On-Demand vCPU resource count quota is 6 vCPUs. 
For more information about Fargate quotas, see [AWS Fargate quotas](https://docs.aws.amazon.com/general/latest/gr/ecs-service.html#service-quotas-fargate) in the *AWS General Reference* .\n\nFor jobs that are running on Fargate resources, then `value` must match one of the supported values and the `MEMORY` values must be one of the values supported for that `VCPU` value. The supported values are 0.25, 0.5, 1, 2, 4, 8, and 16\n\n- **value = 0.25** - `MEMORY` = 512, 1024, or 2048\n- **value = 0.5** - `MEMORY` = 1024, 2048, 3072, or 4096\n- **value = 1** - `MEMORY` = 2048, 3072, 4096, 5120, 6144, 7168, or 8192\n- **value = 2** - `MEMORY` = 4096, 5120, 6144, 7168, 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, or 16384\n- **value = 4** - `MEMORY` = 8192, 9216, 10240, 11264, 12288, 13312, 14336, 15360, 16384, 17408, 18432, 19456, 20480, 21504, 22528, 23552, 24576, 25600, 26624, 27648, 28672, 29696, or 30720\n- **value = 8** - `MEMORY` = 16384, 20480, 24576, 28672, 32768, 36864, 40960, 45056, 49152, 53248, 57344, or 61440\n- **value = 16** - `MEMORY` = 32768, 40960, 49152, 57344, 65536, 73728, 81920, 90112, 98304, 106496, 114688, or 122880" }, + "AWS::Batch::JobDefinition ResourceRetentionPolicy": { + "SkipDeregisterOnUpdate": "Specifies whether the previous revision of the job definition is retained in an active status after UPDATE events for the resource. The default value is `false` . When the property is set to `false` , the previous revision of the job definition is de-registered after a new revision is created. When the property is set to `true` , the previous revision of the job definition is not de-registered." + }, "AWS::Batch::JobDefinition RetryStrategy": { "Attempts": "The number of times to move a job to the `RUNNABLE` status. You can specify between 1 and 10 attempts. 
If the value of `attempts` is greater than one, the job is retried on failure the same number of attempts as the value.", "EvaluateOnExit": "Array of up to 5 objects that specify the conditions where jobs are retried or failed. If this parameter is specified, then the `attempts` parameter must also be specified. If none of the listed conditions match, then the job is retried." @@ -5796,6 +5796,8 @@ }, "AWS::Bedrock::AutomatedReasoningPolicy": { "Description": "The description of the policy.", + "ForceDelete": "", + "KmsKeyId": "", "Name": "The name of the policy.", "PolicyDefinition": "The complete policy definition generated by the build workflow, containing all rules, variables, and custom types extracted from the source documents.", "Tags": "The tags associated with the Automated Reasoning policy." @@ -7394,6 +7396,15 @@ "Name": "The name of the AgentCore Runtime endpoint.", "Tags": "The tags for the AgentCore Runtime endpoint." }, + "AWS::BedrockAgentCore::WorkloadIdentity": { + "AllowedResourceOauth2ReturnUrls": "The list of allowed OAuth2 return URLs for resources associated with this workload identity.", + "Name": "The name of the workload identity. The name must be unique within your account.", + "Tags": "The tags for the workload identity." + }, + "AWS::BedrockAgentCore::WorkloadIdentity Tag": { + "Key": "The key name of the tag.", + "Value": "The value for the tag." + }, "AWS::Billing::BillingView": { "DataFilterExpression": "See [Expression](https://docs.aws.amazon.com/aws-cost-management/latest/APIReference/API_billing_Expression.html) . 
Billing view only supports `LINKED_ACCOUNT` and `Tags` .", "Description": "The description of the billing view.", @@ -7440,9 +7451,11 @@ "AccountId": "The AWS account in which this custom line item will be applied to.", "BillingGroupArn": "The Amazon Resource Name (ARN) that references the billing group where the custom line item applies to.", "BillingPeriodRange": "A time range for which the custom line item is effective.", + "ComputationRule": "", "CustomLineItemChargeDetails": "The charge details of a custom line item. It should contain only one of `Flat` or `Percentage` .", "Description": "The custom line item's description. This is shown on the Bills page in association with the charge value.", "Name": "The custom line item's name.", + "PresentationDetails": "", "Tags": "A map that contains tag keys and tag values that are attached to a custom line item." }, "AWS::BillingConductor::CustomLineItem BillingPeriodRange": { @@ -7467,6 +7480,9 @@ "MatchOption": "The match criteria of the line item filter. This parameter specifies whether not to include the resource value from the billing group total cost.", "Values": "The values of the line item filter. This specifies the values to filter on. Currently, you can only exclude Savings Plans discounts." }, + "AWS::BillingConductor::CustomLineItem PresentationDetails": { + "Service": "" + }, "AWS::BillingConductor::CustomLineItem Tag": { "Key": "The key in a key-value pair.", "Value": "The value in a key-value pair of a tag." @@ -7926,6 +7942,7 @@ "Value": "The value of the tag." }, "AWS::CleanRooms::Collaboration": { + "AllowedResultRegions": "The AWS Regions where collaboration query results can be stored. Returns the list of Region identifiers that were specified when the collaboration was created. 
This list is used to enforce regional storage policies and compliance requirements.", "AnalyticsEngine": "The analytics engine for the collaboration.\n\n> After July 16, 2025, the `CLEAN_ROOMS_SQL` parameter will no longer be available.", "AutoApprovedChangeTypes": "The types of change requests that are automatically approved for this collaboration.", "CreatorDisplayName": "A display name of the collaboration creator.", @@ -7982,7 +7999,7 @@ "Value": "The value of the tag." }, "AWS::CleanRooms::ConfiguredTable": { - "AllowedColumns": "The columns within the underlying AWS Glue table that can be utilized within collaborations.", + "AllowedColumns": "The columns within the underlying AWS Glue table that can be used within collaborations.", "AnalysisMethod": "The analysis method for the configured table.\n\n`DIRECT_QUERY` allows SQL queries to be run directly on this table.\n\n`DIRECT_JOB` allows PySpark jobs to be run directly on this table.\n\n`MULTIPLE` allows both SQL queries and PySpark jobs to be run directly on this table.", "AnalysisRules": "The analysis rule that was created for the configured table.", "Description": "A description for the configured table.", @@ -8030,6 +8047,7 @@ "AWS::CleanRooms::ConfiguredTable AthenaTableReference": { "DatabaseName": "The database name.", "OutputLocation": "The output location for the Athena table.", + "Region": "The AWS Region where the Athena table is located. This parameter is required to uniquely identify and access tables across different Regions.", "TableName": "The table reference.", "WorkGroup": "The workgroup of the Athena table reference." }, @@ -8049,6 +8067,7 @@ }, "AWS::CleanRooms::ConfiguredTable GlueTableReference": { "DatabaseName": "The name of the database the AWS Glue table belongs to.", + "Region": "The AWS Region where the AWS Glue table is located. This parameter is required to uniquely identify and access tables across different Regions.", "TableName": "The name of the AWS Glue table." 
}, "AWS::CleanRooms::ConfiguredTable SnowflakeTableReference": { @@ -8223,8 +8242,15 @@ "PrivacyBudgetType": "Specifies the type of the privacy budget template.", "Tags": "An optional label that you can assign to a resource when you create it. Each tag consists of a key and an optional value, both of which you define. When you use tagging, you can also use tag-based access control in IAM policies to control access to this resource." }, + "AWS::CleanRooms::PrivacyBudgetTemplate BudgetParameter": { + "AutoRefresh": "Whether this individual budget parameter automatically refreshes when the budget period resets.", + "Budget": "The budget allocation amount for this specific parameter.", + "Type": "The type of budget parameter being configured." + }, "AWS::CleanRooms::PrivacyBudgetTemplate Parameters": { + "BudgetParameters": "", "Epsilon": "The epsilon value that you want to use.", + "ResourceArn": "", "UsersNoisePerQuery": "Noise added per query is measured in terms of the number of users whose contributions you want to obscure. This value governs the rate at which the privacy budget is depleted." }, "AWS::CleanRooms::PrivacyBudgetTemplate Tag": { @@ -8530,7 +8556,7 @@ }, "AWS::CloudFormation::WaitConditionHandle": {}, "AWS::CloudFront::AnycastIpList": { - "IpAddressType": "", + "IpAddressType": "The IP address type for the Anycast static IP list.", "IpCount": "The number of IP addresses in the Anycast static IP list.", "Name": "The name of the Anycast static IP list.", "Tags": "A complex type that contains zero or more `Tag` elements." 
@@ -8539,7 +8565,7 @@ "AnycastIps": "The static IP addresses that are allocated to the Anycast static IP list.", "Arn": "The Amazon Resource Name (ARN) of the Anycast static IP list.", "Id": "The ID of the Anycast static IP list.", - "IpAddressType": "", + "IpAddressType": "The IP address type for the Anycast static IP list.", "IpCount": "The number of IP addresses in the Anycast static IP list.", "LastModifiedTime": "The last time the Anycast static IP list was modified.", "Name": "The name of the Anycast static IP list.", @@ -8849,7 +8875,7 @@ "AWS::CloudFront::Distribution VpcOriginConfig": { "OriginKeepaliveTimeout": "Specifies how long, in seconds, CloudFront persists its connection to the origin. The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 5 seconds.\n\nFor more information, see [Keep-alive timeout (custom origins only)](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginKeepaliveTimeout) in the *Amazon CloudFront Developer Guide* .", "OriginReadTimeout": "Specifies how long, in seconds, CloudFront waits for a response from the origin. This is also known as the *origin response timeout* . The minimum timeout is 1 second, the maximum is 120 seconds, and the default (if you don't specify otherwise) is 30 seconds.\n\nFor more information, see [Response timeout](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesOrigin.html#DownloadDistValuesOriginResponseTimeout) in the *Amazon CloudFront Developer Guide* .", - "OwnerAccountId": "", + "OwnerAccountId": "The account ID of the AWS account that owns the VPC origin.", "VpcOriginId": "The VPC origin ID." }, "AWS::CloudFront::DistributionTenant": { @@ -10745,7 +10771,7 @@ "Name": "The name of the configuration recorder. 
AWS Config automatically assigns the name of \"default\" when creating the configuration recorder.\n\nYou cannot change the name of the configuration recorder after it has been created. To change the configuration recorder name, you must delete it and create a new configuration recorder with a new name.", "RecordingGroup": "Specifies which resource types AWS Config records for configuration changes.\n\n> *High Number of AWS Config Evaluations*\n> \n> You may notice increased activity in your account during your initial month recording with AWS Config when compared to subsequent months. During the initial bootstrapping process, AWS Config runs evaluations on all the resources in your account that you have selected for AWS Config to record.\n> \n> If you are running ephemeral workloads, you may see increased activity from AWS Config as it records configuration changes associated with creating and deleting these temporary resources. An *ephemeral workload* is a temporary use of computing resources that are loaded and run when needed. Examples include Amazon Elastic Compute Cloud ( Amazon EC2 ) Spot Instances, Amazon EMR jobs, and AWS Auto Scaling . If you want to avoid the increased activity from running ephemeral workloads, you can run these types of workloads in a separate account with AWS Config turned off to avoid increased configuration recording and rule evaluations.", "RecordingMode": "Specifies the default recording frequency for the configuration recorder. 
AWS Config supports *Continuous recording* and *Daily recording* .\n\n- Continuous recording allows you to record configuration changes continuously whenever a change occurs.\n- Daily recording allows you to receive a configuration item (CI) representing the most recent state of your resources over the last 24-hour period, only if it\u2019s different from the previous CI recorded.\n\n> *Some resource types require continuous recording*\n> \n> AWS Firewall Manager depends on continuous recording to monitor your resources. If you are using Firewall Manager, it is recommended that you set the recording frequency to Continuous. \n\nYou can also override the recording frequency for specific resource types.", - "RoleARN": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as AWS Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* ." 
+ "RoleARN": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* ." }, "AWS::Config::ConfigurationRecorder ExclusionByResourceTypes": { "ResourceTypes": "A comma-separated list of resource types to exclude from recording by the configuration recorder." @@ -10947,7 +10973,7 @@ "Value": "The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -" }, "AWS::Connect::EvaluationForm": { - "AutoEvaluationConfiguration": "", + "AutoEvaluationConfiguration": "The automatic evaluation configuration of an evaluation form.", "Description": "The description of the evaluation form.\n\n*Length Constraints* : Minimum length of 0. 
Maximum length of 1024.", "InstanceArn": "The identifier of the Amazon Connect instance.", "Items": "Items that are part of the evaluation form. The total number of sections and questions must not exceed 100 each. Questions must be contained in a section.\n\n*Minimum size* : 1\n\n*Maximum size* : 100", @@ -10960,7 +10986,7 @@ "Enabled": "" }, "AWS::Connect::EvaluationForm AutomaticFailConfiguration": { - "TargetSection": "" + "TargetSection": "The referenceId of the target section for auto failure." }, "AWS::Connect::EvaluationForm EvaluationFormBaseItem": { "Section": "A subsection or inner section of an item." @@ -10970,37 +10996,37 @@ "Section": "The information of the section." }, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementCondition": { - "Operands": "", - "Operator": "" + "Operands": "The operands of the enablement condition.", + "Operator": "The operator applied to the operands when more than one operand is provided." }, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementConditionOperand": { - "Expression": "" + "Expression": "The expression of the enablement condition." }, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementConfiguration": { - "Action": "", - "Condition": "", - "DefaultAction": "" + "Action": "The enablement action to take if the condition is satisfied.", + "Condition": "The condition for the item enablement configuration.", + "DefaultAction": "The enablement action to take if the condition is not satisfied." }, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementExpression": { - "Comparator": "", - "Source": "", - "Values": "" + "Comparator": "The comparator to apply to the list of values.", + "Source": "The source item of the enablement expression.", + "Values": "A list of values from the source item." }, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementSource": { - "RefId": "", - "Type": "" + "RefId": "The referenceId of the source item.", + "Type": "The type of the source item."
}, "AWS::Connect::EvaluationForm EvaluationFormItemEnablementSourceValue": { - "RefId": "", - "Type": "" + "RefId": "The referenceId of the source value.", + "Type": "The type of the source item value." }, "AWS::Connect::EvaluationForm EvaluationFormNumericQuestionAutomation": { - "AnswerSource": "", + "AnswerSource": "The source of the automation answer for the numeric question.", "PropertyValue": "The property value of the automation." }, "AWS::Connect::EvaluationForm EvaluationFormNumericQuestionOption": { "AutomaticFail": "The flag to mark the option as automatic fail. If an automatic fail answer is provided, the overall evaluation gets a score of 0.", - "AutomaticFailConfiguration": "", + "AutomaticFailConfiguration": "The configuration for automatic fail.", "MaxValue": "The maximum answer value of the range option.", "MinValue": "The minimum answer value of the range option.", "Score": "The score assigned to answer values within the range option.\n\n*Minimum* : 0\n\n*Maximum* : 10" @@ -11012,7 +11038,7 @@ "Options": "The scoring options of the numeric question." }, "AWS::Connect::EvaluationForm EvaluationFormQuestion": { - "Enablement": "", + "Enablement": "The conditional enablement configuration of the question.", "Instructions": "The instructions of the section.\n\n*Length Constraints* : Minimum length of 0. Maximum length of 1024.", "NotApplicableEnabled": "The flag to enable not applicable answers to the question.", "QuestionType": "The type of the question.\n\n*Allowed values* : `NUMERIC` | `SINGLESELECT` | `TEXT`", @@ -11022,12 +11048,12 @@ "Weight": "The scoring weight of the section.\n\n*Minimum* : 0\n\n*Maximum* : 100" }, "AWS::Connect::EvaluationForm EvaluationFormQuestionAutomationAnswerSource": { - "SourceType": "" + "SourceType": "The automation answer source type."
}, "AWS::Connect::EvaluationForm EvaluationFormQuestionTypeProperties": { "Numeric": "The properties of the numeric question.", "SingleSelect": "The properties of the numeric question.", - "Text": "" + "Text": "The properties of the text question." }, "AWS::Connect::EvaluationForm EvaluationFormSection": { "Instructions": "The instructions of the section.", @@ -11037,7 +11063,7 @@ "Weight": "The scoring weight of the section.\n\n*Minimum* : 0\n\n*Maximum* : 100" }, "AWS::Connect::EvaluationForm EvaluationFormSingleSelectQuestionAutomation": { - "AnswerSource": "", + "AnswerSource": "The source of the automation answer.", "DefaultOptionRefId": "The identifier of the default answer option, when none of the automation options match the criteria.\n\n*Length Constraints* : Minimum length of 1. Maximum length of 40.", "Options": "The automation options of the single select question.\n\n*Minimum* : 1\n\n*Maximum* : 20" }, @@ -11046,7 +11072,7 @@ }, "AWS::Connect::EvaluationForm EvaluationFormSingleSelectQuestionOption": { "AutomaticFail": "The flag to mark the option as automatic fail. If an automatic fail answer is provided, the overall evaluation gets a score of 0.", - "AutomaticFailConfiguration": "", + "AutomaticFailConfiguration": "The automatic fail configuration for the single select question.", "RefId": "The identifier of the answer option. An identifier must be unique within the question.\n\n*Length Constraints* : Minimum length of 1. Maximum length of 40.", "Score": "The score assigned to the answer option.\n\n*Minimum* : 0\n\n*Maximum* : 10", "Text": "The title of the answer option.\n\n*Length Constraints* : Minimum length of 1. Maximum length of 128." @@ -11057,10 +11083,10 @@ "Options": "The answer options of the single select question.\n\n*Minimum* : 2\n\n*Maximum* : 256" }, "AWS::Connect::EvaluationForm EvaluationFormTextQuestionAutomation": { - "AnswerSource": "" + "AnswerSource": "The source of the automation answer."
}, "AWS::Connect::EvaluationForm EvaluationFormTextQuestionProperties": { - "Automation": "" + "Automation": "The automation properties of the text question." }, "AWS::Connect::EvaluationForm NumericQuestionPropertyValueAutomation": { "Label": "The property label of the automation." @@ -11654,6 +11680,11 @@ "AWS::ConnectCampaignsV2::Campaign PredictiveConfig": { "BandwidthAllocation": "Bandwidth allocation for the predictive outbound mode." }, + "AWS::ConnectCampaignsV2::Campaign PreviewConfig": { + "AgentActions": "Agent actions for the preview outbound mode.", + "BandwidthAllocation": "Bandwidth allocation for the preview outbound mode.", + "TimeoutConfig": "Countdown timer configuration for preview outbound mode." + }, "AWS::ConnectCampaignsV2::Campaign ProgressiveConfig": { "BandwidthAllocation": "Bandwidth allocation for the progressive outbound mode." }, @@ -11704,6 +11735,7 @@ "AWS::ConnectCampaignsV2::Campaign TelephonyOutboundMode": { "AgentlessConfig": "The agentless outbound mode configuration for telephony.", "PredictiveConfig": "Contains predictive outbound mode configuration.", + "PreviewConfig": "Contains preview telephony outbound mode configuration.", "ProgressiveConfig": "Contains progressive telephony outbound mode configuration." }, "AWS::ConnectCampaignsV2::Campaign TimeRange": { @@ -11714,6 +11746,9 @@ "OpenHours": "The open hours configuration.", "RestrictedPeriods": "The restricted periods configuration." }, + "AWS::ConnectCampaignsV2::Campaign TimeoutConfig": { + "DurationInSeconds": "Duration in seconds for the countdown timer."
+ }, "AWS::ControlTower::EnabledBaseline": { "BaselineIdentifier": "The specific `Baseline` enabled as part of the `EnabledBaseline` resource.", "BaselineVersion": "The enabled version of the `Baseline` .", @@ -13693,9 +13728,17 @@ "AwsLocation": "The location where the connection is created.", "Description": "Connection description.", "DomainIdentifier": "The ID of the domain where the connection is created.", + "EnableTrustedIdentityPropagation": "", "EnvironmentIdentifier": "The ID of the environment where the connection is created.", "Name": "The name of the connection.", - "Props": "Connection props." + "ProjectIdentifier": "", + "Props": "Connection props.", + "Scope": "The scope of the connection." + }, + "AWS::DataZone::Connection AmazonQPropertiesInput": { + "AuthMode": "", + "IsEnabled": "", + "ProfileArn": "" }, "AWS::DataZone::Connection AthenaPropertiesInput": { "WorkgroupName": "The Amazon Athena workgroup name of a connection." @@ -13723,11 +13766,13 @@ "UserName": "The user name for the connecion." }, "AWS::DataZone::Connection ConnectionPropertiesInput": { + "AmazonQProperties": "The Amazon Q properties of a connection.", "AthenaProperties": "The Amazon Athena properties of a connection.", "GlueProperties": "The AWS Glue properties of a connection.", "HyperPodProperties": "The hyper pod properties of a connection.", "IamProperties": "The IAM properties of a connection.", "RedshiftProperties": "The Amazon Redshift properties of a connection.", + "S3Properties": "The Amazon S3 properties of a connection.", "SparkEmrProperties": "The Spark EMR properties of a connection.", "SparkGlueProperties": "The Spark AWS Glue properties of a connection." }, @@ -13801,6 +13846,10 @@ "ClusterName": "The cluster name in the Amazon Redshift storage properties.", "WorkgroupName": "The workgroup name in the Amazon Redshift storage properties."
}, + "AWS::DataZone::Connection S3PropertiesInput": { + "S3AccessGrantLocationId": "", + "S3Uri": "" + }, "AWS::DataZone::Connection SparkEmrPropertiesInput": { "ComputeArn": "The compute ARN of Spark EMR.", "InstanceProfileArn": "The instance profile ARN of Spark EMR.", @@ -14917,6 +14966,17 @@ "ReadUnitsPerSecond": "Represents the number of read operations your base table can instantaneously support.", "WriteUnitsPerSecond": "Represents the number of write operations your base table can instantaneously support." }, + "AWS::EC2::CapacityManagerDataExport": { + "OutputFormat": "The file format of the exported data.", + "S3BucketName": "The name of the S3 bucket where export files are delivered.", + "S3BucketPrefix": "The S3 key prefix used for organizing export files within the bucket.", + "Schedule": "The frequency at which data exports are generated.", + "Tags": "The tags associated with the data export configuration." + }, + "AWS::EC2::CapacityManagerDataExport Tag": { + "Key": "The key of the tag.\n\nConstraints: Tag keys are case-sensitive and accept a maximum of 127 Unicode characters. May not begin with `aws:` .", + "Value": "The value of the tag.\n\nConstraints: Tag values are case-sensitive and accept a maximum of 256 Unicode characters." + }, "AWS::EC2::CapacityReservation": { "AvailabilityZone": "The Availability Zone in which to create the Capacity Reservation.", "AvailabilityZoneId": "The ID of the Availability Zone in which the capacity is reserved.", @@ -17254,6 +17314,7 @@ "AWS::EC2::Volume": { "AutoEnableIO": "Indicates whether the volume is auto-enabled for I/O operations. By default, Amazon EBS disables I/O to the volume from attached EC2 instances when it determines that a volume's data is potentially inconsistent. 
If the consistency of the volume is not a concern, and you prefer that the volume be made available immediately if it's impaired, you can configure the volume to automatically enable I/O.", "AvailabilityZone": "The ID of the Availability Zone in which to create the volume. For example, `us-east-1a` .\n\nEither `AvailabilityZone` or `AvailabilityZoneId` must be specified, but not both.", + "AvailabilityZoneId": "The ID of the Availability Zone for the volume.", "Encrypted": "Indicates whether the volume should be encrypted. The effect of setting the encryption state to `true` depends on the volume origin (new or from a snapshot), starting encryption state, ownership, and whether encryption by default is enabled. For more information, see [Encryption by default](https://docs.aws.amazon.com/ebs/latest/userguide/work-with-ebs-encr.html#encryption-by-default) in the *Amazon EBS User Guide* .\n\nEncrypted Amazon EBS volumes must be attached to instances that support Amazon EBS encryption. For more information, see [Supported instance types](https://docs.aws.amazon.com/ebs/latest/userguide/ebs-encryption-requirements.html#ebs-encryption_supported_instances) .", "Iops": "The number of I/O operations per second (IOPS) to provision for the volume. Required for `io1` and `io2` volumes. Optional for `gp3` volumes. Omit for all other volume types.\n\nValid ranges:\n\n- gp3: `3,000` ( *default* ) `- 80,000` IOPS\n- io1: `100 - 64,000` IOPS\n- io2: `100 - 256,000` IOPS\n\n> [Instances built on the Nitro System](https://docs.aws.amazon.com/ec2/latest/instancetypes/ec2-nitro-instances.html) can support up to 256,000 IOPS. Other instances can support up to 32,000 IOPS.", "KmsKeyId": "The identifier of the AWS KMS key to use for Amazon EBS encryption. 
If `KmsKeyId` is specified, the encrypted state must be `true` .\n\nIf you omit this property and your account is enabled for encryption by default, or *Encrypted* is set to `true` , then the volume is encrypted using the default key specified for your account. If your account does not have a default key, then the volume is encrypted using the AWS managed key .\n\nAlternatively, if you want to specify a different key, you can specify one of the following:\n\n- Key ID. For example, 1234abcd-12ab-34cd-56ef-1234567890ab.\n- Key alias. Specify the alias for the key, prefixed with `alias/` . For example, for a key with the alias `my_cmk` , use `alias/my_cmk` . Or to specify the AWS managed key , use `alias/aws/ebs` .\n- Key ARN. For example, arn:aws:kms:us-east-1:012345678910:key/1234abcd-12ab-34cd-56ef-1234567890ab.\n- Alias ARN. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.", @@ -17261,6 +17322,7 @@ "OutpostArn": "The Amazon Resource Name (ARN) of the Outpost.", "Size": "The size of the volume, in GiBs. You must specify either a snapshot ID or a volume size. If you specify a snapshot, the default is the snapshot size, and you can specify a volume size that is equal to or larger than the snapshot size.\n\nValid sizes:\n\n- gp2: `1 - 16,384` GiB\n- gp3: `1 - 65,536` GiB\n- io1: `4 - 16,384` GiB\n- io2: `4 - 65,536` GiB\n- st1 and sc1: `125 - 16,384` GiB\n- standard: `1 - 1024` GiB", "SnapshotId": "The snapshot from which to create the volume. You must specify either a snapshot ID or a volume size.", + "SourceVolumeId": "The ID of the source volume from which the volume copy was created. Only for volume copies.", "Tags": "The tags to apply to the volume during creation.", "Throughput": "The throughput to provision for a volume, with a maximum of 1,000 MiB/s.\n\nThis parameter is valid only for `gp3` volumes. The default value is 125.\n\nValid Range: Minimum value of 125. 
Maximum value of 1000.", "VolumeInitializationRate": "Specifies the Amazon EBS Provisioned Rate for Volume Initialization (volume initialization rate), in MiB/s, at which to download the snapshot blocks from Amazon S3 to the volume. This is also known as *volume initialization* . Specifying a volume initialization rate ensures that the volume is initialized at a predictable and consistent rate after creation.\n\nThis parameter is supported only for volumes created from snapshots. Omit this parameter if:\n\n- You want to create the volume using fast snapshot restore. You must specify a snapshot that is enabled for fast snapshot restore. In this case, the volume is fully initialized at creation.\n\n> If you specify a snapshot that is enabled for fast snapshot restore and a volume initialization rate, the volume will be initialized at the specified rate instead of fast snapshot restore.\n- You want to create a volume that is initialized at the default rate.\n\nFor more information, see [Initialize Amazon EBS volumes](https://docs.aws.amazon.com/ebs/latest/userguide/initalize-volume.html) in the *Amazon EC2 User Guide* .\n\nValid range: 100 - 300 MiB/s", @@ -17502,9 +17564,9 @@ "Tags": "The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value. You define both.\n\nThe following basic restrictions apply to tags:\n\n- Maximum number of tags per resource - 50\n- For each resource, each tag key must be unique, and each tag key can have only one value.\n- Maximum key length - 128 Unicode characters in UTF-8\n- Maximum value length - 256 Unicode characters in UTF-8\n- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . 
_ : / @.\n- Tag keys and values are case-sensitive.\n- Do not use `aws:` , `AWS:` , or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit." }, "AWS::ECS::Cluster CapacityProviderStrategyItem": { - "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", - "CapacityProvider": "The short name of the capacity provider.", - "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." + "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "CapacityProvider": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", + "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. 
The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." }, "AWS::ECS::Cluster ClusterConfiguration": { "ExecuteCommandConfiguration": "The details of the execute command configuration.", @@ -17543,9 +17605,9 @@ "DefaultCapacityProviderStrategy": "The default capacity provider strategy to associate with the cluster." }, "AWS::ECS::ClusterCapacityProviderAssociations CapacityProviderStrategy": { - "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. 
Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", - "CapacityProvider": "The short name of the capacity provider.", - "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." + "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "CapacityProvider": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", + "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. 
The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." }, "AWS::ECS::PrimaryTaskSet": { "Cluster": "The short name or full Amazon Resource Name (ARN) of the cluster that hosts the service that the task set exists in.", @@ -17591,10 +17653,14 @@ "SecurityGroups": "The IDs of the security groups associated with the task or service. If you don't specify a security group, the default security group for the VPC is used. 
There's a limit of 5 security groups that can be specified.\n\n> All specified security groups must be from the same VPC.", "Subnets": "The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.\n\n> All specified subnets must be from the same VPC." }, + "AWS::ECS::Service CanaryConfiguration": { + "CanaryBakeTimeInMinutes": "The amount of time in minutes to wait during the canary phase before shifting the remaining production traffic to the new service revision. Valid values are 0 to 1440 minutes (24 hours). The default value is 10.", + "CanaryPercent": "The percentage of production traffic to shift to the new service revision during the canary phase. Valid values are multiples of 0.1 from 0.1 to 100.0. The default value is 5.0." + }, "AWS::ECS::Service CapacityProviderStrategyItem": { - "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", - "CapacityProvider": "The short name of the capacity provider.", - "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. 
When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." + "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "CapacityProvider": "The short name of the capacity provider. 
This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", + "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." }, "AWS::ECS::Service DeploymentAlarms": { "AlarmNames": "One or more CloudWatch alarm names. 
Use a \",\" to separate the alarms.", @@ -17608,10 +17674,10 @@ "AWS::ECS::Service DeploymentConfiguration": { "Alarms": "Information about the CloudWatch alarms.", "BakeTimeInMinutes": "The duration when both blue and green service revisions are running simultaneously after the production traffic has shifted.\n\nThe following rules apply when you don't specify a value:\n\n- For rolling deployments, the value is set to 3 hours (180 minutes).\n- When you use an external deployment controller ( `EXTERNAL` ), or the CodeDeploy blue/green deployment controller ( `CODE_DEPLOY` ), the value is set to 3 hours (180 minutes).\n- For all other cases, the value is set to 36 hours (2160 minutes).", - "CanaryConfiguration": "", + "CanaryConfiguration": "Configuration for canary deployment strategy. Only valid when the deployment strategy is `CANARY` . This configuration enables shifting a fixed percentage of traffic for testing, followed by shifting the remaining traffic after a bake period.", "DeploymentCircuitBreaker": "> The deployment circuit breaker can only be used for services using the rolling update ( `ECS` ) deployment type. \n\nThe *deployment circuit breaker* determines whether a service deployment will fail if the service can't reach a steady state. If you use the deployment circuit breaker, a service deployment will transition to a failed state and stop launching new tasks. If you use the rollback option, when a service deployment fails, the service is rolled back to the last deployment that completed successfully. For more information, see [Rolling update](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/deployment-type-ecs.html) in the *Amazon Elastic Container Service Developer Guide*", "LifecycleHooks": "An array of deployment lifecycle hook objects to run custom logic at specific stages of the deployment lifecycle.", - "LinearConfiguration": "", + "LinearConfiguration": "Configuration for linear deployment strategy. 
Only valid when the deployment strategy is `LINEAR` . This configuration enables progressive traffic shifting in equal percentage increments with configurable bake times between each step.", "MaximumPercent": "If a service is using the rolling update ( `ECS` ) deployment type, the `maximumPercent` parameter represents an upper limit on the number of your service's tasks that are allowed in the `RUNNING` or `PENDING` state during a deployment, as a percentage of the `desiredCount` (rounded down to the nearest integer). This parameter enables you to define the deployment batch size. For example, if your service is using the `REPLICA` service scheduler and has a `desiredCount` of four tasks and a `maximumPercent` value of 200%, the scheduler may start four new tasks before stopping the four older tasks (provided that the cluster resources required to do this are available). The default `maximumPercent` value for a service using the `REPLICA` service scheduler is 200%.\n\nThe Amazon ECS scheduler uses this parameter to replace unhealthy tasks by starting replacement tasks first and then stopping the unhealthy tasks, as long as cluster resources for starting replacement tasks are available. For more information about how the scheduler replaces unhealthy tasks, see [Amazon ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) .\n\nIf a service is using either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types, and tasks in the service use the EC2 launch type, the *maximum percent* value is set to the default value. The *maximum percent* value is used to define the upper limit on the number of the tasks in the service that remain in the `RUNNING` state while the container instances are in the `DRAINING` state.\n\n> You can't specify a custom `maximumPercent` value for a service that uses either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types and has tasks that use the EC2 launch type. 
\n\nIf the service uses either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types, and the tasks in the service use the Fargate launch type, the maximum percent value is not used. The value is still returned when describing your service.", "MinimumHealthyPercent": "If a service is using the rolling update ( `ECS` ) deployment type, the `minimumHealthyPercent` represents a lower limit on the number of your service's tasks that must remain in the `RUNNING` state during a deployment, as a percentage of the `desiredCount` (rounded up to the nearest integer). This parameter enables you to deploy without using additional cluster capacity. For example, if your service has a `desiredCount` of four tasks and a `minimumHealthyPercent` of 50%, the service scheduler may stop two existing tasks to free up cluster capacity before starting two new tasks.\n\nIf any tasks are unhealthy and if `maximumPercent` doesn't allow the Amazon ECS scheduler to start replacement tasks, the scheduler stops the unhealthy tasks one-by-one \u2014 using the `minimumHealthyPercent` as a constraint \u2014 to clear up capacity to launch replacement tasks. For more information about how the scheduler replaces unhealthy tasks, see [Amazon ECS services](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) .\n\nFor services that *do not* use a load balancer, the following should be noted:\n\n- A service is considered healthy if all essential containers within the tasks in the service pass their health checks.\n- If a task has no essential containers with a health check defined, the service scheduler will wait for 40 seconds after a task reaches a `RUNNING` state before the task is counted towards the minimum healthy percent total.\n- If a task has one or more essential containers with a health check defined, the service scheduler will wait for the task to reach a healthy status before counting it towards the minimum healthy percent total. 
A task is considered healthy when all essential containers within the task have passed their health checks. The amount of time the service scheduler can wait for is determined by the container health check settings.\n\nFor services that *do* use a load balancer, the following should be noted:\n\n- If a task has no essential containers with a health check defined, the service scheduler will wait for the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.\n- If a task has an essential container with a health check defined, the service scheduler will wait for both the task to reach a healthy status and the load balancer target group health check to return a healthy status before counting the task towards the minimum healthy percent total.\n\nThe default value for a replica service for `minimumHealthyPercent` is 100%. The default `minimumHealthyPercent` value for a service using the `DAEMON` service schedule is 0% for the AWS CLI , the AWS SDKs, and the APIs and 50% for the AWS Management Console.\n\nThe minimum number of healthy tasks during a deployment is the `desiredCount` multiplied by the `minimumHealthyPercent` /100, rounded up to the nearest integer value.\n\nIf a service is using either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types and is running tasks that use the EC2 launch type, the *minimum healthy percent* value is set to the default value. The *minimum healthy percent* value is used to define the lower limit on the number of the tasks in the service that remain in the `RUNNING` state while the container instances are in the `DRAINING` state.\n\n> You can't specify a custom `minimumHealthyPercent` value for a service that uses either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types and has tasks that use the EC2 launch type. 
\n\nIf a service is using either the blue/green ( `CODE_DEPLOY` ) or `EXTERNAL` deployment types and is running tasks that use the Fargate launch type, the minimum healthy percent value is not used, although it is returned when describing your service.", "Strategy": "The deployment strategy for the service. Choose from these valid values:\n\n- `ROLLING` - When you create a service which uses the rolling update ( `ROLLING` ) deployment strategy, the Amazon ECS service scheduler replaces the currently running tasks with new tasks. The number of tasks that Amazon ECS adds or removes from the service during a rolling update is controlled by the service deployment configuration.\n- `BLUE_GREEN` - A blue/green deployment strategy ( `BLUE_GREEN` ) is a release methodology that reduces downtime and risk by running two identical production environments called blue and green. With Amazon ECS blue/green deployments, you can validate new service revisions before directing production traffic to them. This approach provides a safer way to deploy changes with the ability to quickly roll back if needed." @@ -17634,6 +17700,10 @@ "EnableForceNewDeployment": "Determines whether to force a new deployment of the service. By default, deployments aren't forced. You can use this option to start a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination ( `my_image:latest` ) or to roll Fargate tasks onto a newer platform version.", "ForceNewDeploymentNonce": "When you change the `ForceNewDeploymentNonce` value in your template, it signals Amazon ECS to start a new deployment even though no other service parameters have changed. The value must be a unique, time-varying value like a timestamp, random string, or sequence number. Use this property when you want to ensure your tasks pick up the latest version of a Docker image that uses the same tag but has been updated in the registry."
}, + "AWS::ECS::Service LinearConfiguration": { + "StepBakeTimeInMinutes": "The amount of time in minutes to wait between each traffic shifting step during a linear deployment. Valid values are 0 to 1440 minutes (24 hours). The default value is 6. This bake time is not applied after reaching 100 percent traffic.", + "StepPercent": "The percentage of production traffic to shift in each step during a linear deployment. Valid values are multiples of 0.1 from 3.0 to 100.0. The default value is 10.0." + }, "AWS::ECS::Service LoadBalancer": { "AdvancedConfiguration": "The advanced settings for the load balancer used in blue/green deployments. Specify the alternate target group, listener rules, and IAM role required for traffic shifting during blue/green deployments.", "ContainerName": "The name of the container (as it appears in a container definition) to associate with the load balancer.\n\nYou need to specify the container name when configuring the target group for an Amazon ECS load balancer.", @@ -17661,12 +17731,17 @@ "Name": "The name of the secret.", "ValueFrom": "The secret to expose to the container. The supported values are either the full ARN of the AWS Secrets Manager secret or the full ARN of the parameter in the SSM Parameter Store.\n\nFor information about the require AWS Identity and Access Management permissions, see [Required IAM permissions for Amazon ECS secrets](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-secrets.html#secrets-iam) (for Secrets Manager) or [Required IAM permissions for Amazon ECS secrets](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/specifying-sensitive-data-parameters.html) (for Systems Manager Parameter store) in the *Amazon Elastic Container Service Developer Guide* .\n\n> If the SSM Parameter Store parameter exists in the same Region as the task you're launching, then you can use either the full ARN or name of the parameter. 
If the parameter exists in a different Region, then the full ARN must be specified." }, + "AWS::ECS::Service ServiceConnectAccessLogConfiguration": { + "Format": "The format for Service Connect access log output. Choose TEXT for human-readable logs or JSON for structured data that integrates well with log analysis tools.", + "IncludeQueryParameters": "Specifies whether to include query parameters in Service Connect access logs.\n\nWhen enabled, query parameters from HTTP requests are included in the access logs. Consider security and privacy implications when enabling this feature, as query parameters may contain sensitive information such as request IDs and tokens. By default, this parameter is `DISABLED` ." + }, "AWS::ECS::Service ServiceConnectClientAlias": { "DnsName": "The `dnsName` is the name that you use in the applications of client tasks to connect to this service. The name must be a valid DNS name but doesn't need to be fully-qualified. The name can include up to 127 characters. The name can include lowercase letters, numbers, underscores (_), hyphens (-), and periods (.). The name can't start with a hyphen.\n\nIf this parameter isn't specified, the default value of `discoveryName.namespace` is used. If the `discoveryName` isn't specified, the port mapping name from the task definition is used in `portName.namespace` .\n\nTo avoid changing your applications in client Amazon ECS services, set this to the same name that the client application uses by default. For example, a few common names are `database` , `db` , or the lowercase name of a database, such as `mysql` or `redis` . For more information, see [Service Connect](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html) in the *Amazon Elastic Container Service Developer Guide* .", "Port": "The listening port number for the Service Connect proxy. 
This port is available inside of all of the tasks within the same namespace.\n\nTo avoid changing your applications in client Amazon ECS services, set this to the same port that the client application uses by default. For more information, see [Service Connect](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-connect.html) in the *Amazon Elastic Container Service Developer Guide* .", "TestTrafficRules": "The configuration for test traffic routing rules used during blue/green deployments with Amazon ECS Service Connect. This allows you to route a portion of traffic to the new service revision of your service for testing before shifting all production traffic." }, "AWS::ECS::Service ServiceConnectConfiguration": { + "AccessLogConfiguration": "The configuration for Service Connect access logging. Access logs capture detailed information about requests made to your service, including request patterns, response codes, and timing data. They can be useful for debugging connectivity issues, monitoring service performance, and auditing service-to-service communication for security and compliance purposes.\n\n> To enable access logs, you must also specify a `logConfiguration` in the `serviceConnectConfiguration` .", "Enabled": "Specifies whether to use Service Connect with this service.", "LogConfiguration": "The log configuration for the container. This parameter maps to `LogConfig` in the docker container create command and the `--log-driver` option to docker run.\n\nBy default, containers use the same logging driver that the Docker daemon uses. However, the container might use a different logging driver than the Docker daemon by specifying a log driver configuration in the container definition.\n\nUnderstand the following when specifying a log configuration for your containers.\n\n- Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon. 
Additional log drivers may be available in future releases of the Amazon ECS container agent.\n\nFor tasks on AWS Fargate , the supported log drivers are `awslogs` , `splunk` , and `awsfirelens` .\n\nFor tasks hosted on Amazon EC2 instances, the supported log drivers are `awslogs` , `fluentd` , `gelf` , `json-file` , `journald` , `syslog` , `splunk` , and `awsfirelens` .\n- This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.\n- For tasks that are hosted on Amazon EC2 instances, the Amazon ECS container agent must register the available logging drivers with the `ECS_AVAILABLE_LOGGING_DRIVERS` environment variable before containers placed on that instance can use these log configuration options. For more information, see [Amazon ECS container agent configuration](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-config.html) in the *Amazon Elastic Container Service Developer Guide* .\n- For tasks that are on AWS Fargate , because you don't have access to the underlying infrastructure your tasks are hosted on, any additional software needed must be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.", "Namespace": "The namespace name or full Amazon Resource Name (ARN) of the AWS Cloud Map namespace for use with Service Connect. The namespace must be in the same AWS Region as the Amazon ECS service and cluster. The type of namespace doesn't affect Service Connect. For more information about AWS Cloud Map , see [Working with Services](https://docs.aws.amazon.com/cloud-map/latest/dg/working-with-services.html) in the *AWS Cloud Map Developer Guide* .", @@ -17744,11 +17819,11 @@ "IpcMode": "The IPC resource namespace to use for the containers in the task. The valid values are `host` , `task` , or `none` . 
If `host` is specified, then all containers within the tasks that specified the `host` IPC mode on the same container instance share the same IPC resources with the host Amazon EC2 instance. If `task` is specified, all containers within the specified task share the same IPC resources. If `none` is specified, then IPC resources within the containers of a task are private and not shared with other containers in a task or on the container instance. If no value is specified, then the IPC resource namespace sharing depends on the Docker daemon setting on the container instance.\n\nIf the `host` IPC mode is used, be aware that there is a heightened risk of undesired IPC namespace expose.\n\nIf you are setting namespaced kernel parameters using `systemControls` for the containers in the task, the following will apply to your IPC resource namespace. For more information, see [System Controls](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n- For tasks that use the `host` IPC mode, IPC namespace related `systemControls` are not supported.\n- For tasks that use the `task` IPC mode, IPC namespace related `systemControls` will apply to all containers within a task.\n\n> This parameter is not supported for Windows containers or tasks run on AWS Fargate .", "Memory": "The amount (in MiB) of memory used by the task.\n\nIf your tasks runs on Amazon EC2 instances, you must specify either a task-level memory value or a container-level memory value. This field is optional and any value can be used. If a task-level memory value is specified, the container-level memory value is optional. For more information regarding container-level memory and memory reservation, see [ContainerDefinition](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_ContainerDefinition.html) .\n\nIf your tasks runs on AWS Fargate , this field is required. 
You must use one of the following values. The value you choose determines your range of valid values for the `cpu` parameter.\n\n- 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB) - Available `cpu` values: 256 (.25 vCPU)\n- 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB) - Available `cpu` values: 512 (.5 vCPU)\n- 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB) - Available `cpu` values: 1024 (1 vCPU)\n- Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB) - Available `cpu` values: 2048 (2 vCPU)\n- Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB) - Available `cpu` values: 4096 (4 vCPU)\n- Between 16 GB and 60 GB in 4 GB increments - Available `cpu` values: 8192 (8 vCPU)\n\nThis option requires Linux platform `1.4.0` or later.\n- Between 32GB and 120 GB in 8 GB increments - Available `cpu` values: 16384 (16 vCPU)\n\nThis option requires Linux platform `1.4.0` or later.", "NetworkMode": "The Docker networking mode to use for the containers in the task. The valid values are `none` , `bridge` , `awsvpc` , and `host` . If no network mode is specified, the default is `bridge` .\n\nFor Amazon ECS tasks on Fargate, the `awsvpc` network mode is required. For Amazon ECS tasks on Amazon EC2 Linux instances, any network mode can be used. For Amazon ECS tasks on Amazon EC2 Windows instances, `` or `awsvpc` can be used. If the network mode is set to `none` , you cannot specify port mappings in your container definitions, and the tasks containers do not have external connectivity. 
The `host` and `awsvpc` network modes offer the highest networking performance for containers because they use the EC2 network stack instead of the virtualized network stack provided by the `bridge` mode.\n\nWith the `host` and `awsvpc` network modes, exposed container ports are mapped directly to the corresponding host port (for the `host` network mode) or the attached elastic network interface port (for the `awsvpc` network mode), so you cannot take advantage of dynamic host port mappings.\n\n> When using the `host` network mode, you should not run containers using the root user (UID 0). It is considered best practice to use a non-root user. \n\nIf the network mode is `awsvpc` , the task is allocated an elastic network interface, and you must specify a [NetworkConfiguration](https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_NetworkConfiguration.html) value when you create a service or run a task with the task definition. For more information, see [Task Networking](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-networking.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIf the network mode is `host` , you cannot run multiple instantiations of the same task on a single container instance when port mappings are used.", - "PidMode": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . 
For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers. > This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.", + "PidMode": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers. > This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux).
This isn't supported for Windows containers on Fargate.", "PlacementConstraints": "An array of placement constraint objects to use for tasks.\n\n> This parameter isn't supported for tasks run on AWS Fargate .", "ProxyConfiguration": "The configuration details for the App Mesh proxy.\n\nYour Amazon ECS container instances require at least version 1.26.0 of the container agent and at least version 1.26.0-1 of the `ecs-init` package to use a proxy configuration. If your container instances are launched from the Amazon ECS optimized AMI version `20190301` or later, they contain the required versions of the container agent and `ecs-init` . For more information, see [Amazon ECS-optimized Linux AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) in the *Amazon Elastic Container Service Developer Guide* .", "RequiresCompatibilities": "The task launch types the task definition was validated against. The valid values are `MANAGED_INSTANCES` , `EC2` , `FARGATE` , and `EXTERNAL` . For more information, see [Amazon ECS launch types](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_types.html) in the *Amazon Elastic Container Service Developer Guide* .", - "RuntimePlatform": "The operating system that your tasks definitions run on. A platform family is specified only for tasks using the Fargate launch type.", + "RuntimePlatform": "The operating system that your task definitions run on.", "Tags": "The metadata that you apply to the task definition to help you categorize and organize them. Each tag consists of a key and an optional value.
You define both of them.\n\nThe following basic restrictions apply to tags:\n\n- Maximum number of tags per resource - 50\n- For each resource, each tag key must be unique, and each tag key can have only one value.\n- Maximum key length - 128 Unicode characters in UTF-8\n- Maximum value length - 256 Unicode characters in UTF-8\n- If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.\n- Tag keys and values are case-sensitive.\n- Do not use `aws:` , `AWS:` , or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.", "TaskRoleArn": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see [Amazon ECS Task Role](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nIAM roles for tasks on Windows require that the `-EnableTaskIAMRole` option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code to use the feature. For more information, see [Windows IAM roles for tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows_task_IAM_roles.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> String validation is done on the ECS side. If an invalid string value is given for `TaskRoleArn` , it may cause the Cloudformation job to hang.", "Volumes": "The list of data volume definitions for the task. 
For more information, see [Using data volumes in tasks](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/using_data_volumes.html) in the *Amazon Elastic Container Service Developer Guide* .\n\n> The `host` and `sourcePath` parameters aren't supported for tasks run on AWS Fargate ." @@ -17759,7 +17834,7 @@ }, "AWS::ECS::TaskDefinition ContainerDefinition": { "Command": "The command that's passed to the container. This parameter maps to `Cmd` in the docker container create command and the `COMMAND` parameter to docker run. If there are multiple arguments, each argument is a separated string in the array.", - "Cpu": "The number of `cpu` units reserved for the container. This parameter maps to `CpuShares` in the docker container create commandand the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. 
If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "Cpu": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the docker container create command and the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. 
For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "CredentialSpecs": "A list of ARNs in SSM or Amazon S3 to a credential spec ( `CredSpec` ) file that configures the container for Active Directory authentication. We recommend that you use this parameter instead of the `dockerSecurityOptions` . The maximum number of ARNs is 1.\n\nThere are two formats for each ARN.\n\n- **credentialspecdomainless:MyARN** - You use `credentialspecdomainless:MyARN` to provide a `CredSpec` with an additional section for a secret in AWS Secrets Manager . 
You provide the login credentials to the domain in the secret.\n\nEach task that runs on any container instance can join different domains.\n\nYou can use this format without joining the container instance to a domain.\n- **credentialspec:MyARN** - You use `credentialspec:MyARN` to provide a `CredSpec` for a single domain.\n\nYou must join the container instance to the domain before you start any tasks that use this task definition.\n\nIn both formats, replace `MyARN` with the ARN in SSM or Amazon S3.\n\nIf you provide a `credentialspecdomainless:MyARN` , the `credspec` must provide an ARN in AWS Secrets Manager for a secret containing the username, password, and the domain to connect to. For better security, the instance isn't joined to the domain for domainless authentication. Other applications on the instance can't use the domainless credentials. You can use this parameter to run tasks on the same instance, even if the tasks need to join different domains. For more information, see [Using gMSAs for Windows Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/windows-gmsa.html) and [Using gMSAs for Linux Containers](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/linux-gmsa.html) .", "DependsOn": "The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.\n\nFor tasks using the EC2 launch type, the container instances require at least version 1.26.0 of the container agent to turn on container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see [Updating the Amazon ECS Container Agent](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-agent-update.html) in the *Amazon Elastic Container Service Developer Guide* .
If you're using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the `ecs-init` package. If your container instances are launched from version `20190301` or later, then they contain the required versions of the container agent and `ecs-init` . For more information, see [Amazon ECS-optimized Linux AMI](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs-optimized_AMI.html) in the *Amazon Elastic Container Service Developer Guide* .\n\nFor tasks using the Fargate launch type, the task or service requires the following platforms:\n\n- Linux platform version `1.3.0` or later.\n- Windows platform version `1.0.0` or later.\n\nIf the task definition is used in a blue/green deployment that uses [AWS::CodeDeploy::DeploymentGroup BlueGreenDeploymentConfiguration](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-codedeploy-deploymentgroup-bluegreendeploymentconfiguration.html) , the `dependsOn` parameter is not supported.", "DisableNetworking": "When this parameter is true, networking is off within the container. This parameter maps to `NetworkDisabled` in the docker container create command.\n\n> This parameter is not supported for Windows containers.", @@ -17911,7 +17986,7 @@ "RestartAttemptPeriod": "A period of time (in seconds) that the container must run for before a restart can be attempted. A container can be restarted only once every `restartAttemptPeriod` seconds. If a container isn't able to run for this time period and exits early, it will not be restarted. You can set a minimum `restartAttemptPeriod` of 60 seconds and a maximum `restartAttemptPeriod` of 1800 seconds. By default, a container must run for 300 seconds before it can be restarted." }, "AWS::ECS::TaskDefinition RuntimePlatform": { - "CpuArchitecture": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . 
This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate.", + "CpuArchitecture": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . This option is available for tasks that run on Linux Amazon EC2 instance, Amazon ECS Managed Instances, or Linux containers on Fargate.", "OperatingSystemFamily": "The operating system." }, "AWS::ECS::TaskDefinition Secret": { @@ -17972,9 +18047,9 @@ "Subnets": "The IDs of the subnets associated with the task or service. There's a limit of 16 subnets that can be specified.\n\n> All specified subnets must be from the same VPC." }, "AWS::ECS::TaskSet CapacityProviderStrategyItem": { - "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", - "CapacityProvider": "The short name of the capacity provider.", - "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." + "Base": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "CapacityProvider": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", + "Weight": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. 
The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B." }, "AWS::ECS::TaskSet LoadBalancer": { "ContainerName": "The name of the container (as it appears in a container definition) to associate with the load balancer.\n\nYou need to specify the container name when configuring the target group for an Amazon ECS load balancer.", @@ -21769,6 +21844,25 @@ "Tags": "The tags to use with this DevEndpoint.", "WorkerType": "The type of predefined worker that is allocated to the development endpoint. 
Accepts a value of Standard, G.1X, or G.2X.\n\n- For the `Standard` worker type, each worker provides 4 vCPU, 16 GB of memory and a 50GB disk, and 2 executors per worker.\n- For the `G.1X` worker type, each worker maps to 1 DPU (4 vCPU, 16 GB of memory, 64 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.\n- For the `G.2X` worker type, each worker maps to 2 DPU (8 vCPU, 32 GB of memory, 128 GB disk), and provides 1 executor per worker. We recommend this worker type for memory-intensive jobs.\n\nKnown issue: when a development endpoint is created with the `G.2X` `WorkerType` configuration, the Spark drivers for the development endpoint will run on 4 vCPU, 16 GB of memory, and a 64 GB disk." }, + "AWS::Glue::IntegrationResourceProperty": { + "ResourceArn": "", + "SourceProcessingProperties": "", + "Tags": "", + "TargetProcessingProperties": "" + }, + "AWS::Glue::IntegrationResourceProperty SourceProcessingProperties": { + "RoleArn": "" + }, + "AWS::Glue::IntegrationResourceProperty Tag": { + "Key": "The tag key. The key is required when you create a tag on an object. The key is case-sensitive, and must not contain the prefix aws.", + "Value": "The tag value. The value is optional when you create a tag on an object. The value is case-sensitive, and must not contain the prefix aws." + }, + "AWS::Glue::IntegrationResourceProperty TargetProcessingProperties": { + "ConnectionName": "", + "EventBusArn": "", + "KmsArn": "", + "RoleArn": "" + }, "AWS::Glue::Job": { "AllocatedCapacity": "This parameter is no longer supported. 
Use `MaxCapacity` instead.\n\nThe number of capacity units that are allocated to this job.", "Command": "The code that executes a job.", @@ -23502,6 +23596,7 @@ }, "AWS::ImageBuilder::Image": { "ContainerRecipeArn": "The Amazon Resource Name (ARN) of the container recipe that defines how images are configured and tested.", + "DeletionSettings": "", "DistributionConfigurationArn": "The Amazon Resource Name (ARN) of the distribution configuration that defines and configures the outputs of your pipeline.", "EnhancedImageMetadataEnabled": "Collects additional information about the image being created, including the operating system (OS) version and package list. This information is used to enhance the overall experience of using EC2 Image Builder. Enabled by default.", "ExecutionRole": "The name or Amazon Resource Name (ARN) for the IAM role you create that grants Image Builder access to perform workflow actions.", @@ -23514,6 +23609,9 @@ "Tags": "The tags of the image.", "Workflows": "Contains an array of workflow configuration objects." }, + "AWS::ImageBuilder::Image DeletionSettings": { + "ExecutionRole": "" + }, "AWS::ImageBuilder::Image EcrConfiguration": { "ContainerTags": "Tags for Image Builder to apply to the output container image that Amazon Inspector scans. Tags can help you identify and manage your scanned images.", "RepositoryName": "The name of the container repository that Amazon Inspector scans to identify findings for your container images. The name includes the path for the repository location. If you don\u2019t provide this information, Image Builder creates a repository in your account named `image-builder-image-scanning-repository` for vulnerability scans of your output container images." 
@@ -26692,7 +26790,7 @@ "SharePointConfiguration": "Provides the configuration information to connect to Microsoft SharePoint as your data source.", "TemplateConfiguration": "Provides a template for the configuration information to connect to your data source.", "WebCrawlerConfiguration": "Provides the configuration information required for Amazon Kendra Web Crawler.", - "WorkDocsConfiguration": "Provides the configuration information to connect to Amazon WorkDocs as your data source." + "WorkDocsConfiguration": "Provides the configuration information to connect to WorkDocs as your data source." }, "AWS::Kendra::DataSource DataSourceToIndexFieldMapping": { "DataSourceFieldName": "The name of the field in the data source. You must first create the index field using the `UpdateIndex` API.", @@ -26903,11 +27001,11 @@ }, "AWS::Kendra::DataSource WorkDocsConfiguration": { "CrawlComments": "`TRUE` to include comments on documents in your index. Including comments in your index means each comment is a document that can be searched on.\n\nThe default is set to `FALSE` .", - "ExclusionPatterns": "A list of regular expression patterns to exclude certain files in your Amazon WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", - "FieldMappings": "A list of `DataSourceToIndexFieldMapping` objects that map Amazon WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to Amazon WorkDocs fields. For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . 
The Amazon WorkDocs data source field names must exist in your Amazon WorkDocs custom metadata.", - "InclusionPatterns": "A list of regular expression patterns to include certain files in your Amazon WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", - "OrganizationId": "The identifier of the directory corresponding to your Amazon WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your Amazon WorkDocs site directory has an ID, which is the organization ID. You can also set up a new Amazon WorkDocs directory in the AWS Directory Service console and enable a Amazon WorkDocs site for the directory in the Amazon WorkDocs console.", - "UseChangeLog": "`TRUE` to use the Amazon WorkDocs change log to determine which documents require updating in the index. Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in Amazon WorkDocs." + "ExclusionPatterns": "A list of regular expression patterns to exclude certain files in your WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "FieldMappings": "A list of `DataSourceToIndexFieldMapping` objects that map WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to WorkDocs fields. 
For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . The WorkDocs data source field names must exist in your WorkDocs custom metadata.", + "InclusionPatterns": "A list of regular expression patterns to include certain files in your WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "OrganizationId": "The identifier of the directory corresponding to your WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your WorkDocs site directory has an ID, which is the organization ID. You can also set up a new WorkDocs directory in the AWS Directory Service console and enable a WorkDocs site for the directory in the WorkDocs console.", + "UseChangeLog": "`TRUE` to use the WorkDocs change log to determine which documents require updating in the index. Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in WorkDocs." }, "AWS::Kendra::Faq": { "Description": "A description for the FAQ.", @@ -27009,6 +27107,7 @@ }, "AWS::Kinesis::Stream": { "DesiredShardLevelMetrics": "A list of shard-level metrics in properties to enable enhanced monitoring mode.", + "MaxRecordSizeInKiB": "The maximum size, in kibibytes (KiB), of a single record that you can write to and read from a stream.", "Name": "The name of the Kinesis stream. If you don't specify a name, AWS CloudFormation generates a unique physical ID and uses that ID for the stream name.
For more information, see [Name Type](https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-name.html) .\n\nIf you specify a name, you cannot perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you must replace the resource, specify a new name.", "RetentionPeriodHours": "The number of hours for the data records that are stored in shards to remain accessible. The default value is 24. For more information about the stream retention period, see [Changing the Data Retention Period](https://docs.aws.amazon.com/streams/latest/dev/kinesis-extended-retention.html) in the Amazon Kinesis Developer Guide.", "ShardCount": "The number of shards that the stream uses. For greater provisioned throughput, increase the number of shards.", @@ -28272,7 +28371,7 @@ "EventSourceToken": "For Alexa Smart Home functions, a token that the invoker must supply.", "FunctionName": "The name or ARN of the Lambda function, version, or alias.\n\n**Name formats** - *Function name* \u2013 `my-function` (name-only), `my-function:v1` (with alias).\n- *Function ARN* \u2013 `arn:aws:lambda:us-west-2:123456789012:function:my-function` .\n- *Partial ARN* \u2013 `123456789012:function:my-function` .\n\nYou can append a version number or alias to any of the formats. The length constraint applies only to the full ARN. If you specify only the function name, it is limited to 64 characters in length.", "FunctionUrlAuthType": "The type of authentication that your function URL uses. Set to `AWS_IAM` if you want to restrict access to authenticated users only. Set to `NONE` if you want to bypass IAM authentication to create a public endpoint. For more information, see [Control access to Lambda function URLs](https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html) .", - "InvokedViaFunctionUrl": "Restricts the `lambda:InvokeFunction` action to function URL calls. 
When set to `true` , this prevents the principal from invoking the function by any means other than the function URL. For more information, see [Control access to Lambda function URLs](https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html) .", + "InvokedViaFunctionUrl": "Restricts the `lambda:InvokeFunction` action to function URL calls. When specified, this option prevents the principal from invoking the function by any means other than the function URL. For more information, see [Control access to Lambda function URLs](https://docs.aws.amazon.com/lambda/latest/dg/urls-auth.html) .", "Principal": "The AWS service , AWS account , IAM user, or IAM role that invokes the function. If you specify a service, use `SourceArn` or `SourceAccount` to limit who can invoke the function through that service.", "PrincipalOrgID": "The identifier for your organization in AWS Organizations . Use this to grant permissions to all the AWS accounts under this organization.", "SourceAccount": "For AWS service , the ID of the AWS account that owns the resource. Use this together with `SourceArn` to ensure that the specified account owns the resource. It is possible for an Amazon S3 bucket to be deleted by its owner and recreated by another account.", @@ -29342,11 +29441,20 @@ "Restrictions": "The API key restrictions for the API key resource.", "Tags": "Applies one or more tags to the map resource. A tag is a key-value pair that helps manage, identify, search, and filter your resources by labelling them." }, + "AWS::Location::APIKey AndroidApp": { + "CertificateFingerprint": "", + "Package": "" + }, "AWS::Location::APIKey ApiKeyRestrictions": { "AllowActions": "A list of allowed actions that an API key resource grants permissions to perform. You must have at least one action for each type of resource. 
For example, if you have a place resource, you must include at least one place action.\n\nThe following are valid values for the actions.\n\n- *Map actions*\n\n- `geo:GetMap*` - Allows all actions needed for map rendering.\n- *Enhanced Maps actions*\n\n- `geo-maps:GetTile` - Allows getting map tiles for rendering.\n- `geo-maps:GetStaticMap` - Allows getting static map images.\n- *Place actions*\n\n- `geo:SearchPlaceIndexForText` - Allows finding geo coordinates of a known place.\n- `geo:SearchPlaceIndexForPosition` - Allows getting nearest address to geo coordinates.\n- `geo:SearchPlaceIndexForSuggestions` - Allows suggestions based on an incomplete or misspelled query.\n- `geo:GetPlace` - Allows getting details of a place.\n- *Enhanced Places actions*\n\n- `geo-places:Autocomplete` - Allows auto-completion of search text.\n- `geo-places:Geocode` - Allows finding geo coordinates of a known place.\n- `geo-places:GetPlace` - Allows getting details of a place.\n- `geo-places:ReverseGeocode` - Allows getting nearest address to geo coordinates.\n- `geo-places:SearchNearby` - Allows category based places search around geo coordinates.\n- `geo-places:SearchText` - Allows place or address search based on free-form text.\n- `geo-places:Suggest` - Allows suggestions based on an incomplete or misspelled query.\n- *Route actions*\n\n- `geo:CalculateRoute` - Allows point to point routing.\n- `geo:CalculateRouteMatrix` - Allows matrix routing.\n- *Enhanced Routes actions*\n\n- `geo-routes:CalculateIsolines` - Allows isoline calculation.\n- `geo-routes:CalculateRoutes` - Allows point to point routing.\n- `geo-routes:CalculateRouteMatrix` - Allows matrix routing.\n- `geo-routes:OptimizeWaypoints` - Allows computing the best sequence of waypoints.\n- `geo-routes:SnapToRoads` - Allows snapping GPS points to a likely route.\n\n> You must use these strings exactly. For example, to provide access to map rendering, the only valid action is `geo:GetMap*` as an input to the list.
`[\"geo:GetMap*\"]` is valid but `[\"geo:GetTile\"]` is not. Similarly, you cannot use `[\"geo:SearchPlaceIndexFor*\"]` - you must list each of the Place actions separately.", + "AllowAndroidApps": "", + "AllowAppleApps": "", "AllowReferers": "An optional list of allowed HTTP referers from which requests must originate. Requests using this API key from other domains will not be allowed.\n\nRequirements:\n\n- Contain only alphanumeric characters (A\u2013Z, a\u2013z, 0\u20139) or any symbols in this list `$\\-._+!*`(),;/?:@=&`\n- May contain a percent (%) if followed by 2 hexadecimal digits (A-F, a-f, 0-9); this is used for URL encoding purposes.\n- May contain wildcard characters question mark (?) and asterisk (*).\n\nQuestion mark (?) will replace any single character (including hexadecimal digits).\n\nAsterisk (*) will replace any multiple characters (including multiple hexadecimal digits).\n- No spaces allowed. For example, `https://example.com` .", "AllowResources": "A list of allowed resource ARNs that an API key bearer can perform actions on.\n\n- The ARN must be the correct ARN for a map, place, or route ARN. You may include wildcards in the resource-id to match multiple resources of the same type.\n- The resources must be in the same `partition` , `region` , and `account-id` as the key that is being created.\n- Other than wildcards, you must include the full ARN, including the `arn` , `partition` , `service` , `region` , `account-id` and `resource-id` delimited by colons (:).\n- No spaces allowed, even with wildcards. For example, `arn:aws:geo:region: *account-id* :map/ExampleMap*` .\n\nFor more information about ARN format, see [Amazon Resource Names (ARNs)](https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html) ." }, + "AWS::Location::APIKey AppleApp": { + "BundleId": "" + }, "AWS::Location::APIKey Tag": { "Key": "The key value/string of an API key. This value is used when making API calls to authorize the call.
For example, see [GetMapGlyphs](https://docs.aws.amazon.com/location/latest/APIReference/API_GetMapGlyphs.html) .", "Value": "The value of the tag that is associated with the specified API key." @@ -30286,7 +30394,7 @@ "Value": "The tag value to associate with the specified tag key ( `Key` ). A tag value can contain up to 256 UTF-8 characters. A tag value cannot be null, but it can be an empty string." }, "AWS::Macie::Session": { - "FindingPublishingFrequency": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to AWS Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", + "FindingPublishingFrequency": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", "Status": "The status of Amazon Macie for the account. Valid values are: `ENABLED` , start or resume Macie activities for the account; and, `PAUSED` , suspend Macie activities for the account." }, "AWS::ManagedBlockchain::Accessor": { @@ -33119,6 +33227,7 @@ "Value": "The value to use in the custom metric dimension." }, "AWS::NetworkFirewall::FirewallPolicy FirewallPolicy": { + "EnableTLSSessionHolding": "When true, prevents TCP and TLS packets from reaching destination servers until TLS Inspection has evaluated Server Name Indication (SNI) rules. Requires an associated TLS Inspection configuration.", "PolicyVariables": "Contains variables that you can use to override default Suricata settings in your firewall policy.", "StatefulDefaultActions": "The default actions to take on a packet that doesn't match any stateful rules. 
The stateful default action is optional, and is only valid when using the strict rule order.\n\nValid values of the stateful default action:\n\n- aws:drop_strict\n- aws:drop_established\n- aws:alert_strict\n- aws:alert_established\n\nFor more information, see [Strict evaluation order](https://docs.aws.amazon.com/network-firewall/latest/developerguide/suricata-rule-evaluation-order.html#suricata-strict-rule-evaluation-order.html) in the *AWS Network Firewall Developer Guide* .", "StatefulEngineOptions": "Additional options governing how Network Firewall handles stateful rules. The stateful rule groups that you use in your policy must have stateful rule options settings that are compatible with these settings.", @@ -36156,7 +36265,7 @@ "IdentityType": "The authentication type being used by a Amazon Q Business application.", "PersonalizationConfiguration": "Configuration information about chat response personalization. For more information, see [Personalizing chat responses](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/personalizing-chat-responses.html) .", "QAppsConfiguration": "Configuration information about Amazon Q Apps.", - "QuickSightConfiguration": "The Amazon QuickSight configuration for an Amazon Q Business application that uses QuickSight as the identity provider.", + "QuickSightConfiguration": "The Amazon Quick Suite configuration for an Amazon Q Business application that uses Quick Suite as the identity provider.", "RoleArn": "The Amazon Resource Name (ARN) of an IAM role with permissions to access your Amazon CloudWatch logs and metrics. If this property is not specified, Amazon Q Business will create a [service linked role (SLR)](https://docs.aws.amazon.com/amazonq/latest/qbusiness-ug/using-service-linked-roles.html#slr-permissions) and use it as the application's role.", "Tags": "A list of key-value pairs that identify or categorize your Amazon Q Business application. You can also use tags to help control access to the application. 
Tag keys and values can consist of Unicode letters, digits, white space, and any of the following symbols: _ . : / = + - @." }, @@ -36177,7 +36286,7 @@ "QAppsControlMode": "Status information about whether end users can create and use Amazon Q Apps in the web experience." }, "AWS::QBusiness::Application QuickSightConfiguration": { - "ClientNamespace": "The Amazon QuickSight namespace that is used as the identity provider. For more information about QuickSight namespaces, see [Namespace operations](https://docs.aws.amazon.com/quicksight/latest/developerguide/namespace-operations.html) ." + "ClientNamespace": "The Amazon Quick Suite namespace that is used as the identity provider. For more information about Quick Suite namespaces, see [Namespace operations](https://docs.aws.amazon.com/quicksight/latest/developerguide/namespace-operations.html) ." }, "AWS::QBusiness::Application Tag": { "Key": "The key for the tag. Keys are not case sensitive and must be unique for the Amazon Q Business application or data source.", @@ -38841,7 +38950,7 @@ "FilterControls": "The list of filter controls that are on a sheet.\n\nFor more information, see [Adding filter controls to analysis sheets](https://docs.aws.amazon.com/quicksight/latest/user/filter-controls.html) in the *Amazon Quick Suite User Guide* .", "Images": "A list of images on a sheet.", "Layouts": "Layouts define how the components of a sheet are arranged.\n\nFor more information, see [Types of layout](https://docs.aws.amazon.com/quicksight/latest/user/types-of-layout.html) in the *Amazon Quick Suite User Guide* .", - "Name": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "Name": "The name of the sheet. 
This name is displayed on the sheet's tab in the Quick Suite console.", "ParameterControls": "The list of parameter controls that are on a sheet.\n\nFor more information, see [Using a Control with a Parameter in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/parameters-controls.html) in the *Amazon Quick Suite User Guide* .", "SheetControlLayouts": "The control layouts of the sheet.", "SheetId": "The unique identifier of a sheet.", @@ -41912,7 +42021,7 @@ "FilterControls": "The list of filter controls that are on a sheet.\n\nFor more information, see [Adding filter controls to analysis sheets](https://docs.aws.amazon.com/quicksight/latest/user/filter-controls.html) in the *Amazon Quick Suite User Guide* .", "Images": "A list of images on a sheet.", "Layouts": "Layouts define how the components of a sheet are arranged.\n\nFor more information, see [Types of layout](https://docs.aws.amazon.com/quicksight/latest/user/types-of-layout.html) in the *Amazon Quick Suite User Guide* .", - "Name": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "Name": "The name of the sheet. This name is displayed on the sheet's tab in the Quick Suite console.", "ParameterControls": "The list of parameter controls that are on a sheet.\n\nFor more information, see [Using a Control with a Parameter in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/parameters-controls.html) in the *Amazon Quick Suite User Guide* .", "SheetControlLayouts": "The control layouts of the sheet.", "SheetId": "The unique identifier of a sheet.", @@ -42516,6 +42625,7 @@ "AwsAccountId": "The AWS account ID.", "ColumnGroups": "Groupings of columns that work together in certain Amazon Quick Sight features. 
Currently, only geospatial hierarchy is supported.", "ColumnLevelPermissionRules": "A set of one or more definitions of a `ColumnLevelPermissionRule` .", + "DataPrepConfiguration": "The data preparation configuration associated with this dataset.", "DataSetId": "An ID for the dataset that you want to create. This ID is unique per AWS Region for each AWS account.", "DataSetRefreshProperties": "The refresh properties of a dataset.", "DataSetUsageConfiguration": "The usage configuration to apply to child datasets that reference this dataset as a source.", @@ -42524,16 +42634,35 @@ "FolderArns": "", "ImportMode": "Indicates whether you want to import the data into SPICE.", "IngestionWaitPolicy": "The wait policy to use when creating or updating a Dataset. The default is to wait for SPICE ingestion to finish with timeout of 36 hours.", - "LogicalTableMap": "Configures the combination and transformation of the data from the physical tables.", "Name": "The display name for the dataset.", "PerformanceConfiguration": "The performance optimization configuration of a dataset.", "Permissions": "A list of resource permissions on the dataset.", "PhysicalTableMap": "Declares the physical tables that are available in the underlying data sources.", - "RowLevelPermissionDataSet": "The row-level security configuration for the data that you want to create.", - "RowLevelPermissionTagConfiguration": "The element you can use to define tags for row-level security.", + "SemanticModelConfiguration": "The semantic model configuration associated with this dataset.", "Tags": "Contains a map of the key-value pairs for the resource tag or tags assigned to the dataset.", "UseAs": "The usage of the dataset." 
}, + "AWS::QuickSight::DataSet AggregateOperation": { + "Aggregations": "The list of aggregation functions to apply to the grouped data, such as `SUM` , `COUNT` , or `AVERAGE` .", + "Alias": "Alias for this operation.", + "GroupByColumnNames": "The list of column names to group by when performing the aggregation. Rows with the same values in these columns will be grouped together.", + "Source": "The source transform operation that provides input data for the aggregation." + }, + "AWS::QuickSight::DataSet Aggregation": { + "AggregationFunction": "The aggregation function to apply, such as `SUM` , `COUNT` , `AVERAGE` , `MIN` , `MAX`", + "NewColumnId": "A unique identifier for the new column that will contain the aggregated values.", + "NewColumnName": "The name for the new column that will contain the aggregated values." + }, + "AWS::QuickSight::DataSet AppendOperation": { + "Alias": "Alias for this operation.", + "AppendedColumns": "The list of columns to include in the appended result, mapping columns from both sources.", + "FirstSource": "The first data source to be included in the append operation.", + "SecondSource": "The second data source to be appended to the first source." + }, + "AWS::QuickSight::DataSet AppendedColumn": { + "ColumnName": "The name of the column to include in the appended result.", + "NewColumnId": "A unique identifier for the column in the appended result." + }, "AWS::QuickSight::DataSet CalculatedColumn": { "ColumnId": "A unique ID to identify a calculated column. During a dataset update, if the column ID of a calculated column matches that of an existing calculated column, Quick Sight preserves the existing calculated column.", "ColumnName": "Column name.", @@ -42545,6 +42674,11 @@ "NewColumnType": "New column data type.", "SubType": "The sub data type of the new column. Sub types are only available for decimal columns that are part of a SPICE dataset." 
}, + "AWS::QuickSight::DataSet CastColumnTypesOperation": { + "Alias": "Alias for this operation.", + "CastColumnTypeOperations": "The list of column type casting operations to perform.", + "Source": "The source transform operation that provides input data for the type casting." + }, "AWS::QuickSight::DataSet ColumnDescription": { "Text": "The text of a description for a column." }, @@ -42559,8 +42693,14 @@ "ColumnDescription": "A description for a column.", "ColumnGeographicRole": "A geospatial role for a column." }, + "AWS::QuickSight::DataSet ColumnToUnpivot": { + "ColumnName": "The name of the column to unpivot from the source data.", + "NewValue": "The value to assign to this column in the unpivoted result, typically the column name or a descriptive label." + }, "AWS::QuickSight::DataSet CreateColumnsOperation": { - "Columns": "Calculated columns to create." + "Alias": "Alias for this operation.", + "Columns": "Calculated columns to create.", + "Source": "The source transform operation that provides input data for creating new calculated columns." }, "AWS::QuickSight::DataSet CustomSql": { "Columns": "The column schema from the SQL query result set.", @@ -42568,10 +42708,92 @@ "Name": "A display name for the SQL query result.", "SqlQuery": "The SQL query." }, + "AWS::QuickSight::DataSet DataPrepAggregationFunction": { + "ListAggregation": "A list aggregation function that concatenates values from multiple rows into a single delimited string.", + "PercentileAggregation": "", + "SimpleAggregation": "A simple aggregation function such as `SUM` , `COUNT` , `AVERAGE` , `MIN` , `MAX` , `MEDIAN` , `VARIANCE` , or `STANDARD_DEVIATION` ." + }, + "AWS::QuickSight::DataSet DataPrepConfiguration": { + "DestinationTableMap": "A map of destination tables that receive the final prepared data.", + "SourceTableMap": "A map of source tables that provide information about underlying sources.", + "TransformStepMap": "A map of transformation steps that process the data." 
+ }, + "AWS::QuickSight::DataSet DataPrepListAggregationFunction": { + "Distinct": "Whether to include only distinct values in the concatenated result, removing duplicates.", + "InputColumnName": "The name of the column containing values to be concatenated.", + "Separator": "The string used to separate values in the concatenated result." + }, + "AWS::QuickSight::DataSet DataPrepPercentileAggregationFunction": { + "InputColumnName": "", + "PercentileValue": "" + }, + "AWS::QuickSight::DataSet DataPrepSimpleAggregationFunction": { + "FunctionType": "The type of aggregation function to perform, such as `COUNT` , `SUM` , `AVERAGE` , `MIN` , `MAX` , `MEDIAN` , `VARIANCE` , or `STANDARD_DEVIATION` .", + "InputColumnName": "The name of the column on which to perform the aggregation function." + }, + "AWS::QuickSight::DataSet DataSetColumnIdMapping": { + "SourceColumnId": "", + "TargetColumnId": "" + }, + "AWS::QuickSight::DataSet DataSetDateComparisonFilterCondition": { + "Operator": "The comparison operator to use, such as `BEFORE` , `BEFORE_OR_EQUALS_TO` , `AFTER` , or `AFTER_OR_EQUALS_TO` .", + "Value": "The date value to compare against." + }, + "AWS::QuickSight::DataSet DataSetDateFilterCondition": { + "ColumnName": "The name of the date column to filter.", + "ComparisonFilterCondition": "A comparison-based filter condition for the date column.", + "RangeFilterCondition": "A range-based filter condition for the date column, filtering values between minimum and maximum dates." + }, + "AWS::QuickSight::DataSet DataSetDateFilterValue": { + "StaticValue": "A static date value used for filtering." + }, + "AWS::QuickSight::DataSet DataSetDateRangeFilterCondition": { + "IncludeMaximum": "Whether to include the maximum value in the filter range.", + "IncludeMinimum": "Whether to include the minimum value in the filter range.", + "RangeMaximum": "The maximum date value for the range filter.", + "RangeMinimum": "The minimum date value for the range filter." 
+ }, + "AWS::QuickSight::DataSet DataSetNumericComparisonFilterCondition": { + "Operator": "The comparison operator to use, such as `EQUALS` , `GREATER_THAN` , `LESS_THAN` , or their variants.", + "Value": "The numeric value to compare against." + }, + "AWS::QuickSight::DataSet DataSetNumericFilterCondition": { + "ColumnName": "The name of the numeric column to filter.", + "ComparisonFilterCondition": "A comparison-based filter condition for the numeric column.", + "RangeFilterCondition": "A range-based filter condition for the numeric column, filtering values between minimum and maximum numbers." + }, + "AWS::QuickSight::DataSet DataSetNumericFilterValue": { + "StaticValue": "A static numeric value used for filtering." + }, + "AWS::QuickSight::DataSet DataSetNumericRangeFilterCondition": { + "IncludeMaximum": "Whether to include the maximum value in the filter range.", + "IncludeMinimum": "Whether to include the minimum value in the filter range.", + "RangeMaximum": "The maximum numeric value for the range filter.", + "RangeMinimum": "The minimum numeric value for the range filter." + }, "AWS::QuickSight::DataSet DataSetRefreshProperties": { "FailureConfiguration": "The failure configuration for a dataset.", "RefreshConfiguration": "The refresh configuration for a dataset." }, + "AWS::QuickSight::DataSet DataSetStringComparisonFilterCondition": { + "Operator": "The comparison operator to use, such as `EQUALS` , `CONTAINS` , `STARTS_WITH` , `ENDS_WITH` , or their negations.", + "Value": "The string value to compare against." + }, + "AWS::QuickSight::DataSet DataSetStringFilterCondition": { + "ColumnName": "The name of the string column to filter.", + "ComparisonFilterCondition": "A comparison-based filter condition for the string column.", + "ListFilterCondition": "A list-based filter condition that includes or excludes values from a specified list." 
+ }, + "AWS::QuickSight::DataSet DataSetStringFilterValue": { + "StaticValue": "A static string value used for filtering." + }, + "AWS::QuickSight::DataSet DataSetStringListFilterCondition": { + "Operator": "The list operator to use, either `INCLUDE` to match values in the list or `EXCLUDE` to filter out values in the list.", + "Values": "The list of string values to include or exclude in the filter." + }, + "AWS::QuickSight::DataSet DataSetStringListFilterValue": { + "StaticValues": "A list of static string values used for filtering." + }, "AWS::QuickSight::DataSet DataSetUsageConfiguration": { "DisableUseAsDirectQuerySource": "An option that controls whether a child dataset of a direct query can use this dataset as a source.", "DisableUseAsImportedSource": "An option that controls whether a child dataset that's stored in Quick Sight can use this dataset as a source." @@ -42601,18 +42823,41 @@ "AWS::QuickSight::DataSet DecimalDatasetParameterDefaultValues": { "StaticValues": "A list of static default values for a given decimal parameter." }, + "AWS::QuickSight::DataSet DestinationTable": { + "Alias": "Alias for the destination table.", + "Source": "The source configuration that specifies which transform operation provides data to this destination table." + }, + "AWS::QuickSight::DataSet DestinationTableSource": { + "TransformOperationId": "The identifier of the transform operation that provides data to the destination table." + }, "AWS::QuickSight::DataSet FieldFolder": { "Columns": "A folder has a list of columns. A column can only be in one folder.", "Description": "The description for a field folder." }, "AWS::QuickSight::DataSet FilterOperation": { - "ConditionExpression": "An expression that must evaluate to a Boolean value. Rows for which the expression evaluates to true are kept in the dataset." + "ConditionExpression": "An expression that must evaluate to a Boolean value. 
Rows for which the expression evaluates to true are kept in the dataset.", + "DateFilterCondition": "A date-based filter condition within a filter operation.", + "NumericFilterCondition": "A numeric-based filter condition within a filter operation.", + "StringFilterCondition": "A string-based filter condition within a filter operation." + }, + "AWS::QuickSight::DataSet FiltersOperation": { + "Alias": "Alias for this operation.", + "FilterOperations": "The list of filter operations to apply.", + "Source": "The source transform operation that provides input data for filtering." }, "AWS::QuickSight::DataSet GeoSpatialColumnGroup": { "Columns": "Columns in this hierarchy.", "CountryCode": "Country code.", "Name": "A display name for the hierarchy." }, + "AWS::QuickSight::DataSet ImportTableOperation": { + "Alias": "Alias for this operation.", + "Source": "The source configuration that specifies which source table to import and any column mappings." + }, + "AWS::QuickSight::DataSet ImportTableOperationSource": { + "ColumnIdMappings": "The mappings between source column identifiers and target column identifiers during the import.", + "SourceTableId": "The identifier of the source table to import data from." + }, "AWS::QuickSight::DataSet IncrementalRefresh": { "LookbackWindow": "The lookback window setup for an incremental refresh configuration." }, @@ -42621,6 +42866,7 @@ "WaitForSpiceIngestion": "Wait for SPICE ingestion to finish to mark dataset creation or update as successful. Default (true). Applicable only when `DataSetImportMode` mode is set to SPICE." }, "AWS::QuickSight::DataSet InputColumn": { + "Id": "A unique identifier for the input column.", "Name": "The name of this column in the underlying data source.", "SubType": "The sub data type of the column. Sub types are only available for decimal columns that are part of a SPICE dataset.", "Type": "The data type of the column." 
@@ -42645,10 +42891,17 @@ "AWS::QuickSight::DataSet JoinKeyProperties": { "UniqueKey": "A value that indicates that a row in a table is uniquely identified by the columns in a join key. This is used by Quick Suite to optimize query performance." }, - "AWS::QuickSight::DataSet LogicalTable": { - "Alias": "A display name for the logical table.", - "DataTransforms": "Transform operations that act on this logical table. For this structure to be valid, only one of the attributes can be non-null.", - "Source": "Source of this logical table." + "AWS::QuickSight::DataSet JoinOperandProperties": { + "OutputColumnNameOverrides": "A list of column name overrides to apply to the join operand's output columns." + }, + "AWS::QuickSight::DataSet JoinOperation": { + "Alias": "Alias for this operation.", + "LeftOperand": "The left operand for the join operation.", + "LeftOperandProperties": "Properties that control how the left operand's columns are handled in the join result.", + "OnClause": "The join condition that specifies how to match rows between the left and right operands.", + "RightOperand": "The right operand for the join operation.", + "RightOperandProperties": "Properties that control how the right operand's columns are handled in the join result.", + "Type": "The type of join to perform, such as `INNER` , `LEFT` , `RIGHT` , or `OUTER` ." }, "AWS::QuickSight::DataSet LogicalTableSource": { "DataSetArn": "The Amazon Resource Number (ARN) of the parent dataset.", @@ -42668,25 +42921,53 @@ }, "AWS::QuickSight::DataSet OutputColumn": { "Description": "A description for a column.", + "Id": "A unique identifier for the output column.", "Name": "The display name of the column..", "SubType": "The sub data type of the column.", "Type": "The data type of the column." 
}, + "AWS::QuickSight::DataSet OutputColumnNameOverride": { + "OutputColumnName": "The new name to assign to the column in the output.", + "SourceColumnName": "The original name of the column from the source transform operation." + }, "AWS::QuickSight::DataSet OverrideDatasetParameterOperation": { "NewDefaultValues": "The new default values for the parameter.", "NewParameterName": "The new name for the parameter.", "ParameterName": "The name of the parameter to be overridden with different values." }, + "AWS::QuickSight::DataSet ParentDataSet": { + "DataSetArn": "The Amazon Resource Name (ARN) of the parent dataset.", + "InputColumns": "The list of input columns available from the parent dataset." + }, "AWS::QuickSight::DataSet PerformanceConfiguration": { "UniqueKeys": "" }, "AWS::QuickSight::DataSet PhysicalTable": { "CustomSql": "A physical table type built from the results of the custom SQL query.", "RelationalTable": "A physical table type for relational data sources.", - "S3Source": "A physical table type for as S3 data source." + "S3Source": "A physical table type for as S3 data source.", + "SaaSTable": "A physical table type for Software-as-a-Service (SaaS) sources." + }, + "AWS::QuickSight::DataSet PivotConfiguration": { + "LabelColumnName": "The name of the column that contains the labels to be pivoted into separate columns.", + "PivotedLabels": "The list of specific label values to pivot into separate columns." + }, + "AWS::QuickSight::DataSet PivotOperation": { + "Alias": "Alias for this operation.", + "GroupByColumnNames": "The list of column names to group by when performing the pivot operation.", + "PivotConfiguration": "Configuration that specifies which labels to pivot and how to structure the resulting columns.", + "Source": "The source transform operation that provides input data for pivoting.", + "ValueColumnConfiguration": "Configuration for how to aggregate values when multiple rows map to the same pivoted column." 
+ }, + "AWS::QuickSight::DataSet PivotedLabel": { + "LabelName": "The label value from the source data to be pivoted.", + "NewColumnId": "A unique identifier for the new column created from this pivoted label.", + "NewColumnName": "The name for the new column created from this pivoted label." }, "AWS::QuickSight::DataSet ProjectOperation": { - "ProjectedColumns": "Projected columns." + "Alias": "Alias for this operation.", + "ProjectedColumns": "Projected columns.", + "Source": "The source transform operation that provides input data for column projection." }, "AWS::QuickSight::DataSet RefreshConfiguration": { "IncrementalRefresh": "The incremental refresh for the dataset." @@ -42708,10 +42989,19 @@ "ColumnName": "The name of the column to be renamed.", "NewColumnName": "The new name for the column." }, + "AWS::QuickSight::DataSet RenameColumnsOperation": { + "Alias": "Alias for this operation.", + "RenameColumnOperations": "The list of column rename operations to perform, specifying old and new column names.", + "Source": "The source transform operation that provides input data for column renaming." + }, "AWS::QuickSight::DataSet ResourcePermission": { "Actions": "The IAM action to grant or revoke permisions on", "Principal": "The Amazon Resource Name (ARN) of the principal. This can be one of the following:\n\n- The ARN of an Amazon Quick Suite user or group associated with a data source or dataset. (This is common.)\n- The ARN of an Amazon Quick Suite user, group, or namespace associated with an analysis, dashboard, template, or theme. (This is common.)\n- The ARN of an AWS account root: This is an IAM ARN rather than a Quick Suite ARN. Use this option only to share resources (templates) across AWS accounts . 
(This is less common.)" }, + "AWS::QuickSight::DataSet RowLevelPermissionConfiguration": { + "RowLevelPermissionDataSet": "", + "TagConfiguration": "" + }, "AWS::QuickSight::DataSet RowLevelPermissionDataSet": { "Arn": "The Amazon Resource Name (ARN) of the dataset that contains permissions for RLS.", "FormatVersion": "The user or group rules associated with the dataset that contains permissions for RLS.\n\nBy default, `FormatVersion` is `VERSION_1` . When `FormatVersion` is `VERSION_1` , `UserName` and `GroupName` are required. When `FormatVersion` is `VERSION_2` , `UserARN` and `GroupARN` are required, and `Namespace` must not exist.", @@ -42735,6 +43025,23 @@ "InputColumns": "A physical table type for an S3 data source.\n\n> For files that aren't JSON, only `STRING` data types are supported in input columns.", "UploadSettings": "Information about the format for the S3 source file or files." }, + "AWS::QuickSight::DataSet SaaSTable": { + "DataSourceArn": "The Amazon Resource Name (ARN) of the SaaS data source.", + "InputColumns": "The list of input columns available from the SaaS table.", + "TablePath": "The hierarchical path to the table within the SaaS data source." + }, + "AWS::QuickSight::DataSet SemanticModelConfiguration": { + "TableMap": "A map of semantic tables that define the analytical structure." + }, + "AWS::QuickSight::DataSet SemanticTable": { + "Alias": "Alias for the semantic table.", + "DestinationTableId": "The identifier of the destination table from data preparation that provides data to this semantic table.", + "RowLevelPermissionConfiguration": "Configuration for row level security that control data access for this semantic table." + }, + "AWS::QuickSight::DataSet SourceTable": { + "DataSet": "A parent dataset that serves as the data source instead of a physical table.", + "PhysicalTableId": "The identifier of the physical table that serves as the data source." 
+ }, "AWS::QuickSight::DataSet StringDatasetParameter": { "DefaultValues": "A list of default values for a given string dataset parameter type. This structure only accepts static values.", "Id": "An identifier for the string parameter that is created in the dataset.", @@ -42744,6 +43051,10 @@ "AWS::QuickSight::DataSet StringDatasetParameterDefaultValues": { "StaticValues": "A list of static default values for a given string parameter." }, + "AWS::QuickSight::DataSet TablePathElement": { + "Id": "The unique identifier of the path element.", + "Name": "The name of the path element." + }, "AWS::QuickSight::DataSet Tag": { "Key": "", "Value": "" @@ -42762,9 +43073,35 @@ "TagColumnOperation": "An operation that tags a column with additional information.", "UntagColumnOperation": "" }, + "AWS::QuickSight::DataSet TransformOperationSource": { + "ColumnIdMappings": "The mappings between source column identifiers and target column identifiers for this transformation.", + "TransformOperationId": "The identifier of the transform operation that provides input data." 
+ }, + "AWS::QuickSight::DataSet TransformStep": { + "AggregateStep": "A transform step that groups data and applies aggregation functions to calculate summary values.", + "AppendStep": "A transform step that combines rows from multiple sources by stacking them vertically.", + "CastColumnTypesStep": "A transform step that changes the data types of one or more columns.", + "CreateColumnsStep": "", + "FiltersStep": "A transform step that applies filter conditions.", + "ImportTableStep": "A transform step that brings data from a source table.", + "JoinStep": "A transform step that combines data from two sources based on specified join conditions.", + "PivotStep": "A transform step that converts row values into columns to reshape the data structure.", + "ProjectStep": "", + "RenameColumnsStep": "A transform step that changes the names of one or more columns.", + "UnpivotStep": "A transform step that converts columns into rows to normalize the data structure." + }, "AWS::QuickSight::DataSet UniqueKey": { "ColumnNames": "" }, + "AWS::QuickSight::DataSet UnpivotOperation": { + "Alias": "Alias for this operation.", + "ColumnsToUnpivot": "The list of columns to unpivot from the source data.", + "Source": "The source transform operation that provides input data for unpivoting.", + "UnpivotedLabelColumnId": "A unique identifier for the new column that will contain the unpivoted column names.", + "UnpivotedLabelColumnName": "The name for the new column that will contain the unpivoted column names.", + "UnpivotedValueColumnId": "A unique identifier for the new column that will contain the unpivoted values.", + "UnpivotedValueColumnName": "The name for the new column that will contain the unpivoted values." + }, "AWS::QuickSight::DataSet UntagColumnOperation": { "ColumnName": "The column that this operation acts on.", "TagNames": "The column tags to remove from this column." 
@@ -42776,6 +43113,9 @@ "StartFromRow": "A row number to start reading data from.", "TextQualifier": "Text qualifier." }, + "AWS::QuickSight::DataSet ValueColumnConfiguration": { + "AggregationFunction": "The aggregation function to apply when multiple values map to the same pivoted cell." + }, "AWS::QuickSight::DataSource": { "AlternateDataSourceParameters": "A set of alternate data source parameters that you want to share for the credentials stored with this data source. The credentials are applied in tandem with the data source parameters when you copy a data source by using a create or update request. The API operation compares the `DataSourceParameters` structure that's in the request with the structures in the `AlternateDataSourceParameters` allow list. If the structures are an exact match, the request is allowed to use the credentials from this existing data source. If the `AlternateDataSourceParameters` list is null, the `Credentials` originally used with this `DataSourceParameters` are automatically allowed.", "AwsAccountId": "The AWS account ID.", @@ -45194,7 +45534,7 @@ "FilterControls": "The list of filter controls that are on a sheet.\n\nFor more information, see [Adding filter controls to analysis sheets](https://docs.aws.amazon.com/quicksight/latest/user/filter-controls.html) in the *Amazon Quick Suite User Guide* .", "Images": "A list of images on a sheet.", "Layouts": "Layouts define how the components of a sheet are arranged.\n\nFor more information, see [Types of layout](https://docs.aws.amazon.com/quicksight/latest/user/types-of-layout.html) in the *Amazon Quick Suite User Guide* .", - "Name": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "Name": "The name of the sheet. 
This name is displayed on the sheet's tab in the Quick Suite console.", "ParameterControls": "The list of parameter controls that are on a sheet.\n\nFor more information, see [Using a Control with a Parameter in Amazon Quick Sight](https://docs.aws.amazon.com/quicksight/latest/user/parameters-controls.html) in the *Amazon Quick Suite User Guide* .", "SheetControlLayouts": "The control layouts of the sheet.", "SheetId": "The unique identifier of a sheet.", @@ -46542,6 +46882,124 @@ "Key": "A key is the required name of the tag. The string value can be from 1 to 128 Unicode characters in length and can't be prefixed with `aws:` or `rds:` . The string can only contain only the set of Unicode letters, digits, white-space, '_', '.', ':', '/', '=', '+', '-', '@' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-@]*)$\").", "Value": "A value is the optional value of the tag. The string value can be from 1 to 256 Unicode characters in length and can't be prefixed with `aws:` or `rds:` . The string can only contain only the set of Unicode letters, digits, white-space, '_', '.', ':', '/', '=', '+', '-', '@' (Java regex: \"^([\\\\p{L}\\\\p{Z}\\\\p{N}_.:/=+\\\\-@]*)$\")." }, + "AWS::RTBFabric::Link": { + "GatewayId": "The unique identifier of the gateway.", + "HttpResponderAllowed": "Boolean to specify if an HTTP responder is allowed.", + "LinkAttributes": "Attributes of the link.", + "LinkLogSettings": "Settings for the application logs.", + "ModuleConfigurationList": "", + "PeerGatewayId": "The unique identifier of the peer gateway.", + "Tags": "A map of the key-value pairs of the tag or tags to assign to the resource." + }, + "AWS::RTBFabric::Link Action": { + "HeaderTag": "Describes the header tag for a bid action.", + "NoBid": "Describes the parameters of a no bid module." + }, + "AWS::RTBFabric::Link ApplicationLogs": { + "LinkApplicationLogSampling": "Describes a link application log sample." 
+ }, + "AWS::RTBFabric::Link Filter": { + "Criteria": "Describes the criteria for a filter." + }, + "AWS::RTBFabric::Link FilterCriterion": { + "Path": "The path to filter.", + "Values": "The value to filter." + }, + "AWS::RTBFabric::Link HeaderTagAction": { + "Name": "The name of the bid action.", + "Value": "The value of the bid action." + }, + "AWS::RTBFabric::Link LinkApplicationLogSampling": { + "ErrorLog": "An error log entry.", + "FilterLog": "A filter log entry." + }, + "AWS::RTBFabric::Link LinkAttributes": { + "CustomerProvidedId": "The customer-provided unique identifier of the link.", + "ResponderErrorMasking": "Describes the masking for HTTP error codes." + }, + "AWS::RTBFabric::Link LinkLogSettings": { + "ApplicationLogs": "Describes the configuration of a link application log." + }, + "AWS::RTBFabric::Link ModuleConfiguration": { + "DependsOn": "The dependencies of the module.", + "ModuleParameters": "Describes the parameters of a module.", + "Name": "The name of the module.", + "Version": "The version of the module." + }, + "AWS::RTBFabric::Link ModuleParameters": { + "NoBid": "Describes the parameters of a no bid module.", + "OpenRtbAttribute": "Describes the parameters of an open RTB attribute module." + }, + "AWS::RTBFabric::Link NoBidAction": { + "NoBidReasonCode": "The reason code for the no bid action." + }, + "AWS::RTBFabric::Link NoBidModuleParameters": { + "PassThroughPercentage": "The pass through percentage.", + "Reason": "The reason description.", + "ReasonCode": "The reason code." + }, + "AWS::RTBFabric::Link OpenRtbAttributeModuleParameters": { + "Action": "Describes a bid action.", + "FilterConfiguration": "Describes the configuration of a filter.", + "FilterType": "The filter type.", + "HoldbackPercentage": "The hold back percentage." 
+ }, + "AWS::RTBFabric::Link ResponderErrorMaskingForHttpCode": { + "Action": "The action for the error..", + "HttpCode": "The HTTP error code.", + "LoggingTypes": "The error log type.", + "ResponseLoggingPercentage": "The percentage of response logging." + }, + "AWS::RTBFabric::Link Tag": { + "Key": "The key name of the tag.", + "Value": "The value for the tag." + }, + "AWS::RTBFabric::RequesterGateway": { + "Description": "An optional description for the requester gateway.", + "SecurityGroupIds": "The unique identifiers of the security groups.", + "SubnetIds": "The unique identifiers of the subnets.", + "Tags": "A map of the key-value pairs of the tag or tags to assign to the resource.", + "VpcId": "The unique identifier of the Virtual Private Cloud (VPC)." + }, + "AWS::RTBFabric::RequesterGateway Tag": { + "Key": "The key name of the tag.", + "Value": "The value for the tag." + }, + "AWS::RTBFabric::ResponderGateway": { + "Description": "An optional description for the responder gateway.", + "DomainName": "The domain name for the responder gateway.", + "ManagedEndpointConfiguration": "The configuration for the managed endpoint.", + "Port": "The networking port to use.", + "Protocol": "The networking protocol to use.", + "SecurityGroupIds": "The unique identifiers of the security groups.", + "SubnetIds": "The unique identifiers of the subnets.", + "Tags": "A map of the key-value pairs of the tag or tags to assign to the resource.", + "TrustStoreConfiguration": "The configuration of the trust store.", + "VpcId": "The unique identifier of the Virtual Private Cloud (VPC)." + }, + "AWS::RTBFabric::ResponderGateway AutoScalingGroupsConfiguration": { + "AutoScalingGroupNameList": "The names of the auto scaling group.", + "RoleArn": "The role ARN of the auto scaling group." 
+ }, + "AWS::RTBFabric::ResponderGateway EksEndpointsConfiguration": { + "ClusterApiServerCaCertificateChain": "The CA certificate chain of the cluster API server.", + "ClusterApiServerEndpointUri": "The URI of the cluster API server endpoint.", + "ClusterName": "The name of the cluster.", + "EndpointsResourceName": "The name of the endpoint resource.", + "EndpointsResourceNamespace": "The namespace of the endpoint resource.", + "RoleArn": "The role ARN for the cluster." + }, + "AWS::RTBFabric::ResponderGateway ManagedEndpointConfiguration": { + "AutoScalingGroupsConfiguration": "Describes the configuration of an auto scaling group.", + "EksEndpointsConfiguration": "Describes the configuration of an Amazon Elastic Kubernetes Service endpoint." + }, + "AWS::RTBFabric::ResponderGateway Tag": { + "Key": "The key name of the tag.", + "Value": "The value for the tag." + }, + "AWS::RTBFabric::ResponderGateway TrustStoreConfiguration": { + "CertificateAuthorityCertificates": "The certificate authority certificate." + }, "AWS::RUM::AppMonitor": { "AppMonitorConfiguration": "A structure that contains much of the configuration data for the app monitor. If you are using Amazon Cognito for authorization, you must include this structure in your request, and it must include the ID of the Amazon Cognito identity pool to use for authorization. If you don't include `AppMonitorConfiguration` , you must set up your own authorization method. For more information, see [Authorize your application to send data to AWS](https://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/CloudWatch-RUM-get-started-authorization.html) .\n\nIf you omit this argument, the sample rate used for CloudWatch RUM is set to 10% of the user sessions.", "CustomEvents": "Specifies whether this app monitor allows the web client to define and send custom events. 
If you omit this parameter, custom events are `DISABLED` .", @@ -48513,6 +48971,31 @@ "ResourcePolicy": "The `JSON` that defines the policy.", "TableARN": "The Amazon Resource Name (ARN) of the table." }, + "AWS::S3Vectors::Index": { + "DataType": "The data type of the vectors to be inserted into the vector index. Currently, only `float32` is supported, which represents 32-bit floating-point numbers.", + "Dimension": "The dimensions of the vectors to be inserted into the vector index. This value must be between 1 and 4096, inclusive. All vectors stored in the index must have the same number of dimensions.\n\nThe dimension value affects the storage requirements and search performance. Higher dimensions require more storage space and may impact search latency.", + "DistanceMetric": "The distance metric to be used for similarity search. Valid values are:\n\n- `cosine` - Measures the cosine of the angle between two vectors.\n- `euclidean` - Measures the straight-line distance between two points in multi-dimensional space. Lower values indicate greater similarity.", + "IndexName": "The name of the vector index to create. The index name must be between 3 and 63 characters long and can contain only lowercase letters, numbers, hyphens (-), and dots (.). The index name must be unique within the vector bucket.\n\nIf you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the index name.\n\n> If you specify a name, you can't perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you need to replace the resource, specify a new name.", + "MetadataConfiguration": "The metadata configuration for the vector index.", + "VectorBucketArn": "The Amazon Resource Name (ARN) of the vector bucket that contains the vector index.", + "VectorBucketName": "The name of the vector bucket that contains the vector index." 
+ }, + "AWS::S3Vectors::Index MetadataConfiguration": { + "NonFilterableMetadataKeys": "Non-filterable metadata keys allow you to enrich vectors with additional context during storage and retrieval. Unlike default metadata keys, these keys can't be used as query filters. Non-filterable metadata keys can be retrieved but can't be searched, queried, or filtered. You can access non-filterable metadata keys of your vectors after finding the vectors.\n\nYou can specify 1 to 10 non-filterable metadata keys. Each key must be 1 to 63 characters long." + }, + "AWS::S3Vectors::VectorBucket": { + "EncryptionConfiguration": "The encryption configuration for the vector bucket.", + "VectorBucketName": "A name for the vector bucket. The bucket name must contain only lowercase letters, numbers, and hyphens (-). The bucket name must be unique in the same AWS account for each AWS Region. If you don't specify a name, AWS CloudFormation generates a unique ID and uses that ID for the bucket name.\n\nThe bucket name must be between 3 and 63 characters long and must not contain uppercase characters or underscores.\n\n> If you specify a name, you can't perform updates that require replacement of this resource. You can perform updates that require no or some interruption. If you need to replace the resource, specify a new name." + }, + "AWS::S3Vectors::VectorBucket EncryptionConfiguration": { + "KmsKeyArn": "AWS Key Management Service (KMS) customer managed key ARN to use for the encryption configuration. This parameter is required if and only if `SseType` is set to `aws:kms` .\n\nYou must specify the full ARN of the KMS key. Key IDs or key aliases aren't supported.\n\n> Amazon S3 Vectors only supports symmetric encryption KMS keys. 
For more information, see [Asymmetric keys in AWS KMS](https://docs.aws.amazon.com/kms/latest/developerguide/symmetric-asymmetric.html) in the *AWS Key Management Service Developer Guide* .", + "SseType": "The server-side encryption type to use for the encryption configuration of the vector bucket. Valid values are `AES256` for Amazon S3 managed keys and `aws:kms` for AWS KMS keys." + }, + "AWS::S3Vectors::VectorBucketPolicy": { + "Policy": "A policy document containing permissions to add to the specified vector bucket. In IAM , you must provide policy documents in JSON format. However, in CloudFormation you can provide the policy in JSON or YAML format because CloudFormation converts YAML to JSON before submitting it to IAM .", + "VectorBucketArn": "The Amazon Resource Name (ARN) of the S3 vector bucket to which the policy applies.", + "VectorBucketName": "The name of the S3 vector bucket to which the policy applies." + }, "AWS::SDB::Domain": { "Description": "Information about the SimpleDB domain." }, @@ -48932,6 +49415,21 @@ "Key": "The key of the key-value tag.", "Value": "The value of the key-value tag." }, + "AWS::SES::MultiRegionEndpoint": { + "Details": "Contains details of a multi-region endpoint (global-endpoint) being created.", + "EndpointName": "The name of the multi-region endpoint (global-endpoint).", + "Tags": "An array of objects that define the tags (keys and values) to associate with the multi-region endpoint (global-endpoint)." + }, + "AWS::SES::MultiRegionEndpoint Details": { + "RouteDetails": "" + }, + "AWS::SES::MultiRegionEndpoint RouteDetailsItems": { + "Region": "" + }, + "AWS::SES::MultiRegionEndpoint Tag": { + "Key": "", + "Value": "" + }, "AWS::SES::ReceiptFilter": { "Filter": "A data structure that describes the IP address filter to create, which consists of a name, an IP address range, and whether to allow or block mail from it."
}, @@ -51470,7 +51968,7 @@ "KmsKeyId": "The Amazon Resource Name (ARN) of a AWS Key Management Service key that SageMaker AI uses to encrypt data on the storage volume attached to your notebook instance. The KMS key you provide must be enabled. For information, see [Enabling and Disabling Keys](https://docs.aws.amazon.com/kms/latest/developerguide/enabling-keys.html) in the *AWS Key Management Service Developer Guide* .", "LifecycleConfigName": "The name of a lifecycle configuration to associate with the notebook instance. For information about lifecycle configurations, see [Customize a Notebook Instance](https://docs.aws.amazon.com/sagemaker/latest/dg/notebook-lifecycle-config.html) in the *Amazon SageMaker Developer Guide* .", "NotebookInstanceName": "The name of the new notebook instance.", - "PlatformIdentifier": "The platform identifier of the notebook instance runtime environment.", + "PlatformIdentifier": "The platform identifier of the notebook instance runtime environment. The default value is `notebook-al2-v2` .", "RoleArn": "When you send any requests to AWS resources from the notebook instance, SageMaker AI assumes this role to perform tasks on your behalf. You must grant this role necessary permissions so SageMaker AI can perform these tasks. The policy must allow the SageMaker AI service principal (sagemaker.amazonaws.com) permissions to assume this role. For more information, see [SageMaker AI Roles](https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-roles.html) .\n\n> To be able to pass this role to SageMaker AI, the caller of this API must have the `iam:PassRole` permission.", "RootAccess": "Whether root access is enabled or disabled for users of the notebook instance. The default value is `Enabled` .\n\n> Lifecycle configurations need root access to be able to set up a notebook instance. 
Because of this, lifecycle configurations associated with a notebook instance always run with root access even if you disable root access for users.", "SecurityGroupIds": "The VPC security group IDs, in the form sg-xxxxxxxx. The security groups must be for the same VPC as specified in the subnet.", @@ -51630,7 +52128,7 @@ "AWS::SageMaker::ProcessingJob S3Input": { "LocalPath": "The local path in your container where you want Amazon SageMaker to write input data to. `LocalPath` is an absolute path to the input data and must begin with `/opt/ml/processing/` . `LocalPath` is a required parameter when `AppManaged` is `False` (default).", "S3CompressionType": "Whether to GZIP-decompress the data in Amazon S3 as it is streamed into the processing container. `Gzip` can only be used when `Pipe` mode is specified as the `S3InputMode` . In `Pipe` mode, Amazon SageMaker streams input data from the source directly to your container without using the EBS volume.", - "S3DataDistributionType": "Whether to distribute the data from Amazon S3 to all processing instances with `FullyReplicated` , or whether the data from Amazon S3 is shared by Amazon S3 key, downloading one shard of data to each processing instance.", + "S3DataDistributionType": "Whether to distribute the data from Amazon S3 to all processing instances with `FullyReplicated` , or whether the data from Amazon S3 is sharded by Amazon S3 key, downloading one shard of data to each processing instance.", "S3DataType": "Whether you use an `S3Prefix` or a `ManifestFile` for the data type. If you choose `S3Prefix` , `S3Uri` identifies a key name prefix. Amazon SageMaker uses all objects with the specified key name prefix for the processing job. If you choose `ManifestFile` , `S3Uri` identifies an object that is a manifest file containing a list of object keys that you want Amazon SageMaker to use for the processing job.", "S3InputMode": "Whether to use `File` or `Pipe` input mode. 
In File mode, Amazon SageMaker copies the data from the input source onto the local ML storage volume before starting your processing container. This is the most commonly used input mode. In `Pipe` mode, Amazon SageMaker streams input data from the source directly to your processing container into named pipes without using the ML storage volume.", "S3Uri": "The URI of the Amazon S3 prefix Amazon SageMaker downloads data required to run a processing job." @@ -52136,16 +52634,16 @@ "ComplianceAssociatedStandardsId": "The unique identifier of a standard in which a control is enabled. This field consists of the resource portion of the Amazon Resource Name (ARN) returned for a standard in the [DescribeStandards](https://docs.aws.amazon.com/securityhub/1.0/APIReference/API_DescribeStandards.html) API response.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "ComplianceSecurityControlId": "The security control ID for which a finding was generated. Security control IDs are the same across standards.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "ComplianceStatus": "The result of a security check. This field is only used for findings generated from controls.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "Confidence": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", - "CreatedAt": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "Criticality": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "Confidence": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "CreatedAt": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "Criticality": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "Description": "A finding's description.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "FirstObservedAt": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "FirstObservedAt": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", "GeneratorId": "The identifier for the solution-specific component that generated a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 100 items.", "Id": "The product-specific identifier for a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "LastObservedAt": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "LastObservedAt": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "NoteText": "The text of a user-defined note that's added to a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "NoteUpdatedAt": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "NoteUpdatedAt": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", "NoteUpdatedBy": "The principal that created a note.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "ProductArn": "The Amazon Resource Name (ARN) for a third-party product that generated a finding in Security Hub.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "ProductName": "Provides the name of the product that generated the finding. For control-based findings, the product name is Security Hub.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", @@ -52161,23 +52659,23 @@ "SeverityLabel": "The severity value of the finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "SourceUrl": "Provides a URL that links to a page about the current finding in the finding product.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "Title": "A finding's title.\n\nArray Members: Minimum number of 1 item. Maximum number of 100 items.", - "Type": "One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", - "UpdatedAt": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "Type": "One or more finding types in the format of namespace/category/classifier that classify a finding. 
For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "UpdatedAt": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "UserDefinedFields": "A list of user-defined name and value string pairs added to a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "VerificationState": "Provides the veracity of a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "WorkflowStatus": "Provides information about the status of the investigation into a finding.\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items." }, "AWS::SecurityHub::AutomationRule DateFilter": { "DateRange": "A date range for the date filter.", - "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", - "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." 
+ "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." }, "AWS::SecurityHub::AutomationRule DateRange": { "Unit": "A date range unit for the date filter.", "Value": "A date range value for the date filter." }, "AWS::SecurityHub::AutomationRule MapFilter": { - "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . 
For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Key": "The key of the map filter. For example, for `ResourceTags` , `Key` identifies the name of the tag. For `UserDefinedFields` , `Key` is the name of the field.", "Value": "The value for the key in the map filter. Filter values are case sensitive. For example, one of the values for a tag called `Department` might be `Security` . If you provide `security` as the filter value, then there's no match." }, @@ -52200,7 +52698,7 @@ "Product": "The native severity as defined by the AWS service or integrated partner product that generated the finding." }, "AWS::SecurityHub::AutomationRule StringFilter": { - "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . 
A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Value": "The string filter value. Filter values are case sensitive. For example, the product name for control-based findings is `Security Hub` . If you provide `security hub` as the filter value, there's no match." }, "AWS::SecurityHub::AutomationRule WorkflowUpdate": { @@ -52241,8 +52739,8 @@ }, "AWS::SecurityHub::AutomationRuleV2 DateFilter": { "DateRange": "A date range for the date filter.", - "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", - "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." 
+ "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." }, "AWS::SecurityHub::AutomationRuleV2 DateRange": { "Unit": "A date range unit for the date filter.", @@ -52252,7 +52750,7 @@ "ConnectorArn": "The ARN of the connector that establishes the integration." }, "AWS::SecurityHub::AutomationRuleV2 MapFilter": { - "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . 
For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Key": "The key of the map filter. For example, for `ResourceTags` , `Key` identifies the name of the tag. For `UserDefinedFields` , `Key` is the name of the field.", "Value": "The value for the key in the map filter. Filter values are case sensitive. For example, one of the values for a tag called `Department` might be `Security` . If you provide `security` as the filter value, then there's no match." }, @@ -52286,18 +52784,18 @@ "Filter": "Enables filtering of security findings based on string field values in OCSF." }, "AWS::SecurityHub::AutomationRuleV2 StringFilter": { - "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . 
A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Value": "The string filter value. Filter values are case sensitive. For example, the product name for control-based findings is `Security Hub` . If you provide `security hub` as the filter value, there's no match." }, "AWS::SecurityHub::ConfigurationPolicy": { - "ConfigurationPolicy": "An object that defines how AWS Security Hub is configured. It includes whether Security Hub is enabled or disabled, a list of enabled security standards, a list of enabled or disabled security controls, and a list of custom parameter values for specified controls. If you provide a list of security controls that are enabled in the configuration policy, Security Hub disables all other controls (including newly released controls). If you provide a list of security controls that are disabled in the configuration policy, Security Hub enables all other controls (including newly released controls).", + "ConfigurationPolicy": "An object that defines how Security Hub is configured. 
It includes whether Security Hub is enabled or disabled, a list of enabled security standards, a list of enabled or disabled security controls, and a list of custom parameter values for specified controls. If you provide a list of security controls that are enabled in the configuration policy, Security Hub disables all other controls (including newly released controls). If you provide a list of security controls that are disabled in the configuration policy, Security Hub enables all other controls (including newly released controls).", "Description": "The description of the configuration policy.", "Name": "The name of the configuration policy. Alphanumeric characters and the following ASCII characters are permitted: `-, ., !, *, /` .", - "Tags": "User-defined tags associated with a configuration policy. For more information, see [Tagging AWS Security Hub resources](https://docs.aws.amazon.com/securityhub/latest/userguide/tagging-resources.html) in the *Security Hub user guide* ." + "Tags": "User-defined tags associated with a configuration policy. For more information, see [Tagging Security Hub resources](https://docs.aws.amazon.com/securityhub/latest/userguide/tagging-resources.html) in the *Security Hub user guide* ." }, "AWS::SecurityHub::ConfigurationPolicy ParameterConfiguration": { "Value": "The current value of a control parameter.", - "ValueType": "Identifies whether a control parameter uses a custom user-defined value or subscribes to the default AWS Security Hub behavior.\n\nWhen `ValueType` is set equal to `DEFAULT` , the default behavior can be a specific Security Hub default value, or the default behavior can be to ignore a specific parameter. When `ValueType` is set equal to `DEFAULT` , Security Hub ignores user-provided input for the `Value` field.\n\nWhen `ValueType` is set equal to `CUSTOM` , the `Value` field can't be empty." 
+ "ValueType": "Identifies whether a control parameter uses a custom user-defined value or subscribes to the default Security Hub behavior.\n\nWhen `ValueType` is set equal to `DEFAULT` , the default behavior can be a specific Security Hub default value, or the default behavior can be to ignore a specific parameter. When `ValueType` is set equal to `DEFAULT` , Security Hub ignores user-provided input for the `Value` field.\n\nWhen `ValueType` is set equal to `CUSTOM` , the `Value` field can't be empty." }, "AWS::SecurityHub::ConfigurationPolicy ParameterValue": { "Boolean": "A control parameter that is a boolean.", @@ -52357,7 +52855,7 @@ "ComplianceSecurityControlParametersValue": "The current value of a security control parameter.", "ComplianceStatus": "Exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard, such as CIS AWS Foundations. Contains security standard-related finding details.", "Confidence": "A finding's confidence. 
Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify.\n\nConfidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence.", - "CreatedAt": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "CreatedAt": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "Criticality": "The level of importance assigned to the resources associated with the finding.\n\nA score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources.", "Description": "A finding's description.", "FindingProviderFieldsConfidence": "The finding provider value for the finding confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify.\n\nConfidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence.", @@ -52367,11 +52865,11 @@ "FindingProviderFieldsSeverityLabel": "The finding provider value for the severity label.", "FindingProviderFieldsSeverityOriginal": "The finding provider's original value for the severity.", "FindingProviderFieldsTypes": "One or more finding types that the finding provider assigned to the finding. 
Uses the format of `namespace/category/classifier` that classify a finding.\n\nValid namespace values are: Software and Configuration Checks | TTPs | Effects | Unusual Behaviors | Sensitive Data Identifications", - "FirstObservedAt": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "FirstObservedAt": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "GeneratorId": "The identifier for the solution-specific component (a discrete unit of logic) that generated a finding. In various security findings providers' solutions, this generator can be called a rule, a check, a detector, a plugin, etc.", "Id": "The security findings provider-specific identifier for a finding.", "Keyword": "This field is deprecated. 
A keyword for a finding.", - "LastObservedAt": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "LastObservedAt": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "MalwareName": "The name of the malware that was observed.", "MalwarePath": "The filesystem path of the malware that was observed.", "MalwareState": "The state of the malware that was observed.", @@ -52390,12 +52888,12 @@ "NoteText": "The text of a note.", "NoteUpdatedAt": "The timestamp of when the note was updated.", "NoteUpdatedBy": "The principal that created a note.", - "ProcessLaunchedAt": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "ProcessLaunchedAt": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "ProcessName": "The name of the process.", "ProcessParentPid": "The parent process ID. 
This field accepts positive integers between `0` and `2147483647` .", "ProcessPath": "The path to the process executable.", "ProcessPid": "The process ID.", - "ProcessTerminatedAt": "A timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "ProcessTerminatedAt": "A timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "ProductArn": "The ARN generated by Security Hub that uniquely identifies a third-party company (security findings provider) after this provider's product (solution that generates findings) is registered with Security Hub.", "ProductFields": "A data type where security findings providers can include additional solution-specific details that aren't part of the defined `AwsSecurityFinding` format.", "ProductName": "The name of the solution (product) that generates findings.", @@ -52424,7 +52922,7 @@ "ResourceAwsS3BucketOwnerName": "The display name of the owner of the S3 bucket.", "ResourceContainerImageId": "The identifier of the image related to a finding.", "ResourceContainerImageName": "The name of the image related to a finding.", - "ResourceContainerLaunchedAt": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "ResourceContainerLaunchedAt": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see
[Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "ResourceContainerName": "The name of the container related to a finding.", "ResourceDetailsOther": "The details of a resource that doesn't have a specific subfield for the resource type defined.", "ResourceId": "The canonical identifier for the given resource type.", @@ -52438,14 +52936,14 @@ "SeverityProduct": "Deprecated. This attribute isn't included in findings. Instead of providing `Product` , provide `Original` .\n\nThe native severity as defined by the AWS service or integrated partner product that generated the finding.", "SourceUrl": "A URL that links to a page about the current finding in the security findings provider's solution.", "ThreatIntelIndicatorCategory": "The category of a threat intelligence indicator.", - "ThreatIntelIndicatorLastObservedAt": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "ThreatIntelIndicatorLastObservedAt": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "ThreatIntelIndicatorSource": "The source of the threat intelligence.", "ThreatIntelIndicatorSourceUrl": "The URL for more details from the source of the threat intelligence.", "ThreatIntelIndicatorType": "The type of a threat intelligence indicator.", "ThreatIntelIndicatorValue": "The value of a threat intelligence indicator.", "Title": "A finding's title.", "Type": "A finding type in the format of `namespace/category/classifier` that classifies a finding.", - "UpdatedAt": "A timestamp that indicates 
when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "UpdatedAt": "A timestamp that indicates when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "UserDefinedFields": "A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding.", "VerificationState": "The veracity of a finding.", "VulnerabilitiesExploitAvailable": "Indicates whether a software vulnerability in your environment has a known exploit. You can filter findings by this field only if you use Security Hub and Amazon Inspector.", @@ -52458,8 +52956,8 @@ }, "AWS::SecurityHub::Insight DateFilter": { "DateRange": "A date range for the date filter.", - "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", - "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." 
+ "End": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "Start": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) ." }, "AWS::SecurityHub::Insight DateRange": { "Unit": "A date range unit for the date filter.", @@ -52472,7 +52970,7 @@ "Value": "A value for the keyword." }, "AWS::SecurityHub::Insight MapFilter": { - "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . 
For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Key": "The key of the map filter. For example, for `ResourceTags` , `Key` identifies the name of the tag. For `UserDefinedFields` , `Key` is the name of the field.", "Value": "The value for the key in the map filter. Filter values are case sensitive. For example, one of the values for a tag called `Department` might be `Security` . If you provide `security` as the filter value, then there's no match." }, @@ -52482,7 +52980,7 @@ "Lte": "The less-than-equal condition to be applied to a single field when querying for findings." }, "AWS::SecurityHub::Insight StringFilter": { - "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . 
A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "Comparison": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. 
Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "Value": "The string filter value. Filter values are case sensitive. For example, the product name for control-based findings is `Security Hub` . If you provide `security hub` as the filter value, there's no match." }, "AWS::SecurityHub::OrganizationConfiguration": { @@ -52506,7 +53004,7 @@ }, "AWS::SecurityHub::SecurityControl ParameterConfiguration": { "Value": "The current value of a control parameter.", - "ValueType": "Identifies whether a control parameter uses a custom user-defined value or subscribes to the default AWS Security Hub behavior.\n\nWhen `ValueType` is set equal to `DEFAULT` , the default behavior can be a specific Security Hub default value, or the default behavior can be to ignore a specific parameter. When `ValueType` is set equal to `DEFAULT` , Security Hub ignores user-provided input for the `Value` field.\n\nWhen `ValueType` is set equal to `CUSTOM` , the `Value` field can't be empty." 
+ "ValueType": "Identifies whether a control parameter uses a custom user-defined value or subscribes to the default Security Hub behavior.\n\nWhen `ValueType` is set equal to `DEFAULT` , the default behavior can be a specific Security Hub default value, or the default behavior can be to ignore a specific parameter. When `ValueType` is set equal to `DEFAULT` , Security Hub ignores user-provided input for the `Value` field.\n\nWhen `ValueType` is set equal to `CUSTOM` , the `Value` field can't be empty." }, "AWS::SecurityHub::SecurityControl ParameterValue": { "Boolean": "A control parameter that is a boolean.", @@ -52633,6 +53131,10 @@ "AWS::ServiceCatalog::CloudFormationProduct ConnectionParameters": { "CodeStar": "Provides `ConnectionType` details." }, + "AWS::ServiceCatalog::CloudFormationProduct Info": { + "ImportFromPhysicalId": "", + "LoadTemplateFromURL": "" + }, "AWS::ServiceCatalog::CloudFormationProduct ProvisioningArtifactProperties": { "Description": "The description of the provisioning artifact, including how it differs from the previous provisioning artifact.", "DisableTemplateValidation": "If set to true, AWS Service Catalog stops validating the specified provisioning artifact even if it is invalid.", @@ -53342,6 +53844,8 @@ "AWS::Transfer::Connector": { "AccessRole": "Connectors are used to send files using either the AS2 or SFTP protocol. For the access role, provide the Amazon Resource Name (ARN) of the AWS Identity and Access Management role to use.\n\n*For AS2 connectors*\n\nWith AS2, you can send files by calling `StartFileTransfer` and specifying the file paths in the request parameter, `SendFilePaths` . We use the file\u2019s parent directory (for example, for `--send-file-paths /bucket/dir/file.txt` , parent directory is `/bucket/dir/` ) to temporarily store a processed AS2 message file, store the MDN when we receive them from the partner, and write a final JSON file containing relevant metadata of the transmission. 
So, the `AccessRole` needs to provide read and write access to the parent directory of the file location used in the `StartFileTransfer` request. Additionally, you need to provide read and write access to the parent directory of the files that you intend to send with `StartFileTransfer` .\n\nIf you are using Basic authentication for your AS2 connector, the access role requires the `secretsmanager:GetSecretValue` permission for the secret. If the secret is encrypted using a customer-managed key instead of the AWS managed key in Secrets Manager, then the role also needs the `kms:Decrypt` permission for that key.\n\n*For SFTP connectors*\n\nMake sure that the access role provides read and write access to the parent directory of the file location that's used in the `StartFileTransfer` request. Additionally, make sure that the role provides `secretsmanager:GetSecretValue` permission to AWS Secrets Manager .", "As2Config": "A structure that contains the parameters for an AS2 connector object.", + "EgressConfig": "Current egress configuration of the connector, showing how traffic is routed to the SFTP server. Contains VPC Lattice settings when using VPC_LATTICE egress type.\n\nWhen using the VPC_LATTICE egress type, AWS Transfer Family uses a managed Service Network to simplify the resource sharing process.", + "EgressType": "Type of egress configuration for the connector. SERVICE_MANAGED uses Transfer Family managed NAT gateways, while VPC_LATTICE routes traffic through customer VPCs using VPC Lattice.", "LoggingRole": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that allows a connector to turn on CloudWatch logging for Amazon S3 events. 
When set, you can view connector activity in your CloudWatch logs.", "SecurityPolicyName": "The text name of the security policy for the specified connector.", "SftpConfig": "A structure that contains the parameters for an SFTP connector object.", @@ -53360,6 +53864,13 @@ "PreserveContentType": "", "SigningAlgorithm": "The algorithm that is used to sign the AS2 messages sent with the connector." }, + "AWS::Transfer::Connector ConnectorEgressConfig": { + "VpcLattice": "VPC_LATTICE configuration for routing connector traffic through customer VPCs. Enables private connectivity to SFTP servers without requiring public internet access or complex network configurations." + }, + "AWS::Transfer::Connector ConnectorVpcLatticeEgressConfig": { + "PortNumber": "Port number for connecting to the SFTP server through VPC_LATTICE. Defaults to 22 if not specified. Must match the port on which the target SFTP server is listening.", + "ResourceConfigurationArn": "ARN of the VPC_LATTICE Resource Configuration that defines the target SFTP server location. Must point to a valid Resource Configuration in the customer's VPC with appropriate network connectivity to the SFTP server." + }, "AWS::Transfer::Connector SftpConfig": { "MaxConcurrentConnections": "Specify the number of concurrent connections that your connector creates to the remote server. The default value is `1` . The maximum values is `5` .\n\n> If you are using the AWS Management Console , the default value is `5` . \n\nThis parameter specifies the number of active connections that your connector can establish with the remote server at the same time. Increasing this value can enhance connector performance when transferring large file batches by enabling parallel operations.", "TrustedHostKeys": "The public portion of the host key, or keys, that are used to identify the external server to which you are connecting. 
You can use the `ssh-keyscan` command against the SFTP server to retrieve the necessary key.\n\n> `TrustedHostKeys` is optional for `CreateConnector` . If not provided, you can use `TestConnection` to retrieve the server host key during the initial connection attempt, and subsequently update the connector with the observed host key. \n\nWhen creating connectors with egress config (VPC_LATTICE type connectors), since host name is not something we can verify, the only accepted trusted host key format is `key-type key-body` without the host name. For example: `ssh-rsa AAAAB3Nza...`\n\nThe three standard SSH public key format elements are `` , `` , and an optional `` , with spaces between each element. Specify only the `` and `` : do not enter the `` portion of the key.\n\nFor the trusted host key, AWS Transfer Family accepts RSA and ECDSA keys.\n\n- For RSA keys, the `` string is `ssh-rsa` .\n- For ECDSA keys, the `` string is either `ecdsa-sha2-nistp256` , `ecdsa-sha2-nistp384` , or `ecdsa-sha2-nistp521` , depending on the size of the key you generated.\n\nRun this command to retrieve the SFTP server host key, where your SFTP server name is `ftp.host.com` .\n\n`ssh-keyscan ftp.host.com`\n\nThis prints the public host key to standard output.\n\n`ftp.host.com ssh-rsa AAAAB3Nza...`\n\nCopy and paste this string into the `TrustedHostKeys` field for the `create-connector` command or into the *Trusted host keys* field in the console.\n\nFor VPC Lattice type connectors (VPC_LATTICE), remove the hostname from the key and use only the `key-type key-body` format. In this example, it should be: `ssh-rsa AAAAB3Nza...`", @@ -54422,6 +54933,7 @@ "TextTransformations": "Text transformations eliminate some of the unusual formatting that attackers use in web requests in an effort to bypass detection. 
If you specify one or more transformations in a rule statement, AWS WAF performs all transformations on the content of the request component identified by `FieldToMatch` , starting from the lowest priority setting, before inspecting the content for a match." }, "AWS::WAFv2::WebACL": { + "ApplicationConfig": "Returns a list of `ApplicationAttribute` s.", "AssociationConfig": "Specifies custom configurations for the associations between the web ACL and protected resources.\n\nUse this to customize the maximum size of the request body that your protected resources forward to AWS WAF for inspection. You can customize this setting for CloudFront, API Gateway, Amazon Cognito, App Runner, or Verified Access resources. The default setting is 16 KB (16,384 bytes).\n\n> You are charged additional fees when your protected resources forward body sizes that are larger than the default. For more information, see [AWS WAF Pricing](https://docs.aws.amazon.com/waf/pricing/) . \n\nFor Application Load Balancer and AWS AppSync , the limit is fixed at 8 KB (8,192 bytes).", "CaptchaConfig": "Specifies how AWS WAF should handle `CAPTCHA` evaluations for rules that don't have their own `CaptchaConfig` settings. If you don't specify this, AWS WAF uses its default settings for `CaptchaConfig` .", "ChallengeConfig": "Specifies how AWS WAF should handle challenge evaluations for rules that don't have their own `ChallengeConfig` settings. If you don't specify this, AWS WAF uses its default settings for `ChallengeConfig` .", @@ -54464,6 +54976,13 @@ "AWS::WAFv2::WebACL AndStatement": { "Statements": "The statements to combine with AND logic. You can use any statements that can be nested." }, + "AWS::WAFv2::WebACL ApplicationAttribute": { + "Name": "Specifies the attribute name.", + "Values": "Specifies the attribute value." + }, + "AWS::WAFv2::WebACL ApplicationConfig": { + "Attributes": "Contains the attribute name and a list of values for that attribute." 
+ }, "AWS::WAFv2::WebACL AsnMatchStatement": { "AsnList": "Contains one or more Autonomous System Numbers (ASNs). ASNs are unique identifiers assigned to large internet networks managed by organizations such as internet service providers, enterprises, universities, or government agencies.", "ForwardedIPConfig": "The configuration for inspecting IP addresses to match against an ASN in an HTTP header that you specify, instead of using the IP address that's reported by the web request origin. Commonly, this is the X-Forwarded-For (XFF) header, but you can specify any header name." @@ -55327,8 +55846,8 @@ }, "AWS::WorkSpacesThinClient::Environment": { "DesiredSoftwareSetId": "The ID of the software set to apply.", - "DesktopArn": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or AppStream 2.0.", - "DesktopEndpoint": "The URL for the identity provider login (only for environments that use AppStream 2.0).", + "DesktopArn": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or WorkSpaces Applications.", + "DesktopEndpoint": "The URL for the identity provider login (only for environments that use WorkSpaces Applications).", "DeviceCreationTags": "An array of key-value pairs to apply to the newly created devices for this environment.", "KmsKeyArn": "The Amazon Resource Name (ARN) of the AWS Key Management Service key used to encrypt the environment.", "MaintenanceWindow": "A specification for a time window to apply software updates.", diff --git a/schema_source/cloudformation.schema.json b/schema_source/cloudformation.schema.json index a90540b59..1e70001f8 100644 --- a/schema_source/cloudformation.schema.json +++ b/schema_source/cloudformation.schema.json @@ -18086,7 +18086,7 @@ "type": "string" }, "InstanceType": { - "markdownDescription": "The instance type to use when launching fleet instances. 
The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", + "markdownDescription": "The instance type to use when launching fleet instances. 
The following instance types are available for non-Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge\n\nThe following instance types are available for Elastic fleets:\n\n- stream.standard.small\n- stream.standard.medium", "title": "InstanceType", "type": "string" }, @@ -18121,7 +18121,7 @@ "title": "SessionScriptS3Location" }, "StreamView": { - "markdownDescription": "The AppStream 2.0 view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. 
When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", + "markdownDescription": "The WorkSpaces Applications view that is displayed to your users when they stream from the fleet. When `APP` is specified, only the windows of applications opened by users display. When `DESKTOP` is specified, the standard desktop that is provided by the operating system displays.\n\nThe default value is `APP` .", "title": "StreamView", "type": "string" }, @@ -18292,7 +18292,7 @@ "type": "array" }, "AppstreamAgentVersion": { - "markdownDescription": "The version of the AppStream 2.0 agent to use for this image builder. To use the latest version of the AppStream 2.0 agent, specify [LATEST].", + "markdownDescription": "The version of the WorkSpaces Applications agent to use for this image builder. To use the latest version of the WorkSpaces Applications agent, specify [LATEST].", "title": "AppstreamAgentVersion", "type": "string" }, @@ -18332,7 +18332,7 @@ "type": "string" }, "InstanceType": { - "markdownDescription": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics-desktop.2xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics-pro.4xlarge\n- stream.graphics-pro.8xlarge\n- stream.graphics-pro.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", + "markdownDescription": "The instance type to use when launching the image builder. 
The following instance types are available:\n\n- stream.standard.small\n- stream.standard.medium\n- stream.standard.large\n- stream.compute.large\n- stream.compute.xlarge\n- stream.compute.2xlarge\n- stream.compute.4xlarge\n- stream.compute.8xlarge\n- stream.memory.large\n- stream.memory.xlarge\n- stream.memory.2xlarge\n- stream.memory.4xlarge\n- stream.memory.8xlarge\n- stream.memory.z1d.large\n- stream.memory.z1d.xlarge\n- stream.memory.z1d.2xlarge\n- stream.memory.z1d.3xlarge\n- stream.memory.z1d.6xlarge\n- stream.memory.z1d.12xlarge\n- stream.graphics-design.large\n- stream.graphics-design.xlarge\n- stream.graphics-design.2xlarge\n- stream.graphics-design.4xlarge\n- stream.graphics.g4dn.xlarge\n- stream.graphics.g4dn.2xlarge\n- stream.graphics.g4dn.4xlarge\n- stream.graphics.g4dn.8xlarge\n- stream.graphics.g4dn.12xlarge\n- stream.graphics.g4dn.16xlarge\n- stream.graphics.g5.xlarge\n- stream.graphics.g5.2xlarge\n- stream.graphics.g5.4xlarge\n- stream.graphics.g5.8xlarge\n- stream.graphics.g5.16xlarge\n- stream.graphics.g5.12xlarge\n- stream.graphics.g5.24xlarge\n- stream.graphics.g6.xlarge\n- stream.graphics.g6.2xlarge\n- stream.graphics.g6.4xlarge\n- stream.graphics.g6.8xlarge\n- stream.graphics.g6.16xlarge\n- stream.graphics.g6.12xlarge\n- stream.graphics.g6.24xlarge\n- stream.graphics.gr6.4xlarge\n- stream.graphics.gr6.8xlarge\n- stream.graphics.g6f.large\n- stream.graphics.g6f.xlarge\n- stream.graphics.g6f.2xlarge\n- stream.graphics.g6f.4xlarge\n- stream.graphics.gr6f.4xlarge", "title": "InstanceType", "type": "string" }, @@ -18479,7 +18479,7 @@ "items": { "$ref": "#/definitions/AWS::AppStream::Stack.AccessEndpoint" }, - "markdownDescription": "The list of virtual private cloud (VPC) interface endpoint objects. Users of the stack can connect to AppStream 2.0 only through the specified endpoints.", + "markdownDescription": "The list of virtual private cloud (VPC) interface endpoint objects. 
Users of the stack can connect to WorkSpaces Applications only through the specified endpoints.", "title": "AccessEndpoints", "type": "array" }, @@ -18515,7 +18515,7 @@ "items": { "type": "string" }, - "markdownDescription": "The domains where AppStream 2.0 streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded AppStream 2.0 streaming sessions.", + "markdownDescription": "The domains where WorkSpaces Applications streaming sessions can be embedded in an iframe. You must approve the domains that you want to host embedded WorkSpaces Applications streaming sessions.", "title": "EmbedHostDomains", "type": "array" }, @@ -26786,7 +26786,7 @@ "type": "string" }, "RestoreTestingSelectionName": { - "markdownDescription": "The unique name of the restore testing selection that belongs to the related restore testing plan.", + "markdownDescription": "The unique name of the restore testing selection that belongs to the related restore testing plan.\n\nThe name consists of only alphanumeric characters and underscores. Maximum length is 50.", "title": "RestoreTestingSelectionName", "type": "string" }, @@ -33141,7 +33141,7 @@ "items": { "type": "string" }, - "markdownDescription": "The columns within the underlying AWS Glue table that can be utilized within collaborations.", + "markdownDescription": "The columns within the underlying AWS Glue table that can be used within collaborations.", "title": "AllowedColumns", "type": "array" }, @@ -48540,7 +48540,7 @@ "title": "RecordingMode" }, "RoleARN": { - "markdownDescription": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. 
For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as AWS Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* .", + "markdownDescription": "Amazon Resource Name (ARN) of the IAM role assumed by AWS Config and used by the configuration recorder. For more information, see [Permissions for the IAM Role Assigned](https://docs.aws.amazon.com/config/latest/developerguide/iamrole-permissions.html) to AWS Config in the AWS Config Developer Guide.\n\n> *Pre-existing AWS Config role*\n> \n> If you have used an AWS service that uses AWS Config , such as Security Hub or AWS Control Tower , and an AWS Config role has already been created, make sure that the IAM role that you use when setting up AWS Config keeps the same minimum permissions as the already created AWS Config role. 
You must do this so that the other AWS service continues to run as expected.\n> \n> For example, if AWS Control Tower has an IAM role that allows AWS Config to read Amazon Simple Storage Service ( Amazon S3 ) objects, make sure that the same permissions are granted within the IAM role you use when setting up AWS Config . Otherwise, it may interfere with how AWS Control Tower operates. For more information about IAM roles for AWS Config , see [*Identity and Access Management for AWS Config*](https://docs.aws.amazon.com/config/latest/developerguide/security-iam.html) in the *AWS Config Developer Guide* .", "title": "RoleARN", "type": "string" } @@ -83091,17 +83091,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -83287,17 +83287,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -83611,17 +83611,17 @@ "additionalProperties": false, "properties": { "Base": { - "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- Default value is `0` if not specified\n- Valid range: 0 to 100,000\n- Base requirements are satisfied first before weight distribution", + "markdownDescription": "The *base* value designates how many tasks, at a minimum, to run on the specified capacity provider for each service. Only one capacity provider in a capacity provider strategy can have a *base* defined. 
If no value is specified, the default value of `0` is used.\n\nBase value characteristics:\n\n- Only one capacity provider in a strategy can have a base defined\n- The default value is `0` if not specified\n- The valid range is 0 to 100,000\n- Base requirements are satisfied first before weight distribution", "title": "Base", "type": "number" }, "CapacityProvider": { - "markdownDescription": "The short name of the capacity provider.", + "markdownDescription": "The short name of the capacity provider. This can be either an AWS managed capacity provider ( `FARGATE` or `FARGATE_SPOT` ) or the name of a custom capacity provider that you created.", "title": "CapacityProvider", "type": "string" }, "Weight": { - "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- Default value is `0` if not specified\n- Valid range: 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", + "markdownDescription": "The *weight* value designates the relative percentage of the total number of tasks launched that should use the specified capacity provider. The `weight` value is taken into consideration after the `base` value, if defined, is satisfied.\n\nIf no `weight` value is specified, the default value of `0` is used. When multiple capacity providers are specified within a capacity provider strategy, at least one of the capacity providers must have a weight value greater than zero and any capacity providers with a weight of `0` can't be used to place tasks. 
If you specify multiple capacity providers in a strategy that all have a weight of `0` , any `RunTask` or `CreateService` actions using the capacity provider strategy will fail.\n\nWeight value characteristics:\n\n- Weight is considered after the base value is satisfied\n- The default value is `0` if not specified\n- The valid range is 0 to 1,000\n- At least one capacity provider must have a weight greater than zero\n- Capacity providers with weight of `0` cannot place tasks\n\nTask distribution logic:\n\n- Base satisfaction: The minimum number of tasks specified by the base value are placed on that capacity provider\n- Weight distribution: After base requirements are met, additional tasks are distributed according to weight ratios\n\nExamples:\n\nEqual Distribution: Two capacity providers both with weight `1` will split tasks evenly after base requirements are met.\n\nWeighted Distribution: If capacityProviderA has weight `1` and capacityProviderB has weight `4` , then for every 1 task on A, 4 tasks will run on B.", "title": "Weight", "type": "number" } @@ -84202,7 +84202,7 @@ "type": "string" }, "PidMode": { - "markdownDescription": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers. 
> This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.", + "markdownDescription": "The process namespace to use for the containers in the task. The valid values are `host` or `task` . On Fargate for Linux containers, the only valid value is `task` . For example, monitoring sidecars might need `pidMode` to access information about other containers running in the same task.\n\nIf `host` is specified, all containers within the tasks that specified the `host` PID mode on the same container instance share the same process namespace with the host Amazon EC2 instance.\n\nIf `task` is specified, all containers within the specified task share the same process namespace.\n\nIf no value is specified, the default is a private namespace for each container.\n\nIf the `host` PID mode is used, there's a heightened risk of undesired process namespace exposure.\n\n> This parameter is not supported for Windows containers.\n> \n> This parameter is only supported for tasks that are hosted on AWS Fargate if the tasks are using platform version `1.4.0` or later (Linux). This isn't supported for Windows containers on Fargate.", "title": "PidMode", "type": "string" }, @@ -84229,7 +84229,7 @@ }, "RuntimePlatform": { "$ref": "#/definitions/AWS::ECS::TaskDefinition.RuntimePlatform", - "markdownDescription": "The operating system that your tasks definitions run on. A platform family is specified only for tasks using the Fargate launch type.", + "markdownDescription": "The operating system that your task definitions run on.", "title": "RuntimePlatform" }, "Tags": { @@ -84304,7 +84304,7 @@ "type": "array" }, "Cpu": { - "markdownDescription": "The number of `cpu` units reserved for the container. 
This parameter maps to `CpuShares` in the docker container create commandand the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. 
For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", + "markdownDescription": "The number of `cpu` units reserved for the container. This parameter maps to `CpuShares` in the docker container create command and the `--cpu-shares` option to docker run.\n\nThis field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level `cpu` value.\n\n> You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the [Amazon EC2 Instances](https://docs.aws.amazon.com/ec2/instance-types/) detail page by 1,024. \n\nLinux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that's the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. 
However, if you launched another copy of the same task on that container instance, each task is guaranteed a minimum of 512 CPU units when needed. Moreover, each container could float to higher CPU usage if the other container was not using it. If both tasks were 100% active all of the time, they would be limited to 512 CPU units.\n\nOn Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. The minimum valid CPU share value that the Linux kernel allows is 2, and the maximum valid CPU share value that the Linux kernel allows is 262144. However, the CPU parameter isn't required, and you can use CPU values below 2 or above 262144 in your container definitions. For CPU values below 2 (including null) or above 262144, the behavior varies based on your Amazon ECS container agent version:\n\n- *Agent versions less than or equal to 1.1.0:* Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.\n- *Agent versions greater than or equal to 1.2.0:* Null, zero, and CPU values of 1 are passed to Docker as 2.\n- *Agent versions greater than or equal to 1.84.0:* CPU values greater than 256 vCPU are passed to Docker as 256, which is equivalent to 262144 CPU shares.\n\nOn Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that's described in the task definition. A null or zero CPU value is passed to Docker as `0` , which Windows interprets as 1% of one CPU.", "title": "Cpu", "type": "number" }, @@ -85083,7 +85083,7 @@ "additionalProperties": false, "properties": { "CpuArchitecture": { - "markdownDescription": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . 
This option is available for tasks that run on Linux Amazon EC2 instance or Linux containers on Fargate.", + "markdownDescription": "The CPU architecture.\n\nYou can run your Linux tasks on an ARM-based platform by setting the value to `ARM64` . This option is available for tasks that run on Linux Amazon EC2 instances, Amazon ECS Managed Instances, or Linux containers on Fargate.", "title": "CpuArchitecture", "type": "string" }, @@ -134916,7 +134916,7 @@ }, "WorkDocsConfiguration": { "$ref": "#/definitions/AWS::Kendra::DataSource.WorkDocsConfiguration", - "markdownDescription": "Provides the configuration information to connect to Amazon WorkDocs as your data source.", + "markdownDescription": "Provides the configuration information to connect to WorkDocs as your data source.", "title": "WorkDocsConfiguration" } }, @@ -136022,7 +136022,7 @@ "items": { "type": "string" }, - "markdownDescription": "A list of regular expression patterns to exclude certain files in your Amazon WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "markdownDescription": "A list of regular expression patterns to exclude certain files in your WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don\u2019t match the patterns are included in the index. 
If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", "title": "ExclusionPatterns", "type": "array" }, @@ -136030,7 +136030,7 @@ "items": { "$ref": "#/definitions/AWS::Kendra::DataSource.DataSourceToIndexFieldMapping" }, - "markdownDescription": "A list of `DataSourceToIndexFieldMapping` objects that map Amazon WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to Amazon WorkDocs fields. For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . The Amazon WorkDocs data source field names must exist in your Amazon WorkDocs custom metadata.", + "markdownDescription": "A list of `DataSourceToIndexFieldMapping` objects that map WorkDocs data source attributes or field names to Amazon Kendra index field names. To create custom fields, use the `UpdateIndex` API before you map to WorkDocs fields. For more information, see [Mapping data source fields](https://docs.aws.amazon.com/kendra/latest/dg/field-mapping.html) . The WorkDocs data source field names must exist in your WorkDocs custom metadata.", "title": "FieldMappings", "type": "array" }, @@ -136038,17 +136038,17 @@ "items": { "type": "string" }, - "markdownDescription": "A list of regular expression patterns to include certain files in your Amazon WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", + "markdownDescription": "A list of regular expression patterns to include certain files in your WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. 
If a file matches both an inclusion and exclusion pattern, the exclusion pattern takes precedence and the file isn't included in the index.", "title": "InclusionPatterns", "type": "array" }, "OrganizationId": { - "markdownDescription": "The identifier of the directory corresponding to your Amazon WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your Amazon WorkDocs site directory has an ID, which is the organization ID. You can also set up a new Amazon WorkDocs directory in the AWS Directory Service console and enable a Amazon WorkDocs site for the directory in the Amazon WorkDocs console.", + "markdownDescription": "The identifier of the directory corresponding to your WorkDocs site repository.\n\nYou can find the organization ID in the [AWS Directory Service](https://docs.aws.amazon.com/directoryservicev2/) by going to *Active Directory* , then *Directories* . Your WorkDocs site directory has an ID, which is the organization ID. You can also set up a new WorkDocs directory in the AWS Directory Service console and enable a WorkDocs site for the directory in the WorkDocs console.", "title": "OrganizationId", "type": "string" }, "UseChangeLog": { - "markdownDescription": "`TRUE` to use the Amazon WorkDocs change log to determine which documents require updating in the index. Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in Amazon WorkDocs.", + "markdownDescription": "`TRUE` to use the WorkDocs change log to determine which documents require updating in the index. 
Depending on the change log's size, it may take longer for Amazon Kendra to use the change log than to scan all of your documents in WorkDocs.", "title": "UseChangeLog", "type": "boolean" } @@ -153825,7 +153825,7 @@ "additionalProperties": false, "properties": { "FindingPublishingFrequency": { - "markdownDescription": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to AWS Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", + "markdownDescription": "Specifies how often Amazon Macie publishes updates to policy findings for the account. This includes publishing updates to Security Hub and Amazon EventBridge (formerly Amazon CloudWatch Events ). Valid values are:\n\n- FIFTEEN_MINUTES\n- ONE_HOUR\n- SIX_HOURS", "title": "FindingPublishingFrequency", "type": "string" }, @@ -192806,7 +192806,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -205350,7 +205350,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. 
This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -207877,13 +207877,11 @@ }, "LogicalTableMap": { "additionalProperties": false, - "markdownDescription": "Configures the combination and transformation of the data from the physical tables.", "patternProperties": { "^[a-zA-Z0-9]+$": { "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTable" } }, - "title": "LogicalTableMap", "type": "object" }, "Name": { @@ -207911,14 +207909,10 @@ "type": "object" }, "RowLevelPermissionDataSet": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionDataSet", - "markdownDescription": "The row-level security configuration for the data that you want to create.", - "title": "RowLevelPermissionDataSet" + "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionDataSet" }, "RowLevelPermissionTagConfiguration": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionTagConfiguration", - "markdownDescription": "The element you can use to define tags for row-level security.", - "title": "RowLevelPermissionTagConfiguration" + "$ref": "#/definitions/AWS::QuickSight::DataSet.RowLevelPermissionTagConfiguration" }, "Tags": { "items": { @@ -208482,22 +208476,16 @@ "additionalProperties": false, "properties": { "Alias": { - "markdownDescription": "A display name for the logical table.", - "title": "Alias", "type": "string" }, "DataTransforms": { "items": { "$ref": "#/definitions/AWS::QuickSight::DataSet.TransformOperation" }, - "markdownDescription": "Transform operations that act on this logical table. 
For this structure to be valid, only one of the attributes can be non-null.", - "title": "DataTransforms", "type": "array" }, "Source": { - "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTableSource", - "markdownDescription": "Source of this logical table.", - "title": "Source" + "$ref": "#/definitions/AWS::QuickSight::DataSet.LogicalTableSource" } }, "required": [ @@ -219730,7 +219718,7 @@ "type": "array" }, "Name": { - "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Amazon QuickSight console.", + "markdownDescription": "The name of the sheet. This name is displayed on the sheet's tab in the Quick Suite console.", "title": "Name", "type": "string" }, @@ -252047,7 +252035,7 @@ "type": "string" }, "PlatformIdentifier": { - "markdownDescription": "The platform identifier of the notebook instance runtime environment.", + "markdownDescription": "The platform identifier of the notebook instance runtime environment. The default value is `notebook-al2-v2` .", "title": "PlatformIdentifier", "type": "string" }, @@ -254793,7 +254781,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.NumberFilter" }, - "markdownDescription": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "The likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. `Confidence` is scored on a 0\u2013100 basis using a ratio scale. A value of `0` means 0 percent confidence, and a value of `100` means 100 percent confidence. For example, a data exfiltration detection based on a statistical deviation of network traffic has low confidence because an actual exfiltration hasn't been verified. For more information, see [Confidence](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-confidence) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Confidence", "type": "array" }, @@ -254801,7 +254789,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when this finding record was created.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "CreatedAt", "type": "array" }, @@ -254809,7 +254797,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.NumberFilter" }, - "markdownDescription": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. 
A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "The level of importance that is assigned to the resources that are associated with a finding. `Criticality` is scored on a 0\u2013100 basis, using a ratio scale that supports only full integers. A score of `0` means that the underlying resources have no criticality, and a score of `100` is reserved for the most critical resources. For more information, see [Criticality](https://docs.aws.amazon.com/securityhub/latest/userguide/asff-top-level-attributes.html#asff-criticality) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Criticality", "type": "array" }, @@ -254825,7 +254813,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the potential security issue captured by a finding was first observed by the security findings product.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "FirstObservedAt", "type": "array" }, @@ -254849,7 +254837,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "LastObservedAt", "type": "array" }, @@ -254865,7 +254853,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "The timestamp of when the note was updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "NoteUpdatedAt", "type": "array" }, @@ -254993,7 +254981,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.StringFilter" }, - "markdownDescription": "One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *AWS Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", + "markdownDescription": "One or more finding types in the format of namespace/category/classifier that classify a finding. For a list of namespaces, classifiers, and categories, see [Types taxonomy for ASFF](https://docs.aws.amazon.com/securityhub/latest/userguide/securityhub-findings-format-type-taxonomy.html) in the *Security Hub User Guide* .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "Type", "type": "array" }, @@ -255001,7 +254989,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::AutomationRule.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. 
Maximum number of 20 items.", + "markdownDescription": "A timestamp that indicates when the finding record was most recently updated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .\n\nArray Members: Minimum number of 1 item. Maximum number of 20 items.", "title": "UpdatedAt", "type": "array" }, @@ -255041,12 +255029,12 @@ "title": "DateRange" }, "End": { - "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "End", "type": "string" }, "Start": { - "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "Start", "type": "string" } @@ -255077,7 +255065,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have 
the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. 
`NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . 
For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -255185,7 +255173,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . 
For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . 
For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. 
Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. `CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. 
For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -255524,7 +255512,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider created the potential security issue that a finding reflects.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "CreatedAt", "type": "array" }, @@ -255604,7 +255592,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider first observed the potential security issue that a finding captured.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "FirstObservedAt", "type": "array" }, @@ -255628,7 +255616,7 @@ "items": { "$ref": 
"#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider most recently observed a change in the resource that is involved in the finding.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "LastObservedAt", "type": "array" }, @@ -255780,7 +255768,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies when the process was launched.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ProcessLaunchedAt", "type": "array" }, @@ -255820,7 +255808,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A 
timestamp that identifies when the process was terminated.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ProcessTerminatedAt", "type": "array" }, @@ -256044,7 +256032,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies when the container was started.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ResourceContainerLaunchedAt", "type": "array" }, @@ -256140,7 +256128,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that identifies the last observation of a threat intelligence indicator.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "ThreatIntelIndicatorLastObservedAt", "type": "array" }, @@ -256196,7 +256184,7 @@ "items": { "$ref": "#/definitions/AWS::SecurityHub::Insight.DateFilter" }, - "markdownDescription": "A timestamp that indicates 
when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that indicates when the security findings provider last updated the finding record.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "UpdatedAt", "type": "array" }, @@ -256274,12 +256262,12 @@ "title": "DateRange" }, "End": { - "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the end date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "End", "type": "string" }, "Start": { - "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in AWS Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", + "markdownDescription": "A timestamp that provides the start date for the date filter.\n\nFor more information about the validation and formatting of timestamp fields in Security Hub , see [Timestamps](https://docs.aws.amazon.com/securityhub/1.0/APIReference/Welcome.html#timestamps) .", "title": "Start", "type": "string" } @@ -256324,7 +256312,7 @@ "additionalProperties": false, 
"properties": { "Comparison": { - "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. 
For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to the key value when filtering Security Hub findings with a map filter.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, for the `ResourceTags` field, the filter `Department CONTAINS Security` matches findings that include the value `Security` for the `Department` tag. In the same example, a finding with a value of `Security team` for the `Department` tag is a match.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, for the `ResourceTags` field, the filter `Department EQUALS Security` matches findings that have the value `Security` for the `Department` tag.\n\n`CONTAINS` and `EQUALS` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. 
For example, the filters `Department CONTAINS Security OR Department CONTAINS Finance` match a finding that includes either `Security` , `Finance` , or both values.\n\nTo search for values that don't have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, for the `ResourceTags` field, the filter `Department NOT_CONTAINS Finance` matches findings that exclude the value `Finance` for the `Department` tag.\n- To search for values other than the filter value, use `NOT_EQUALS` . For example, for the `ResourceTags` field, the filter `Department NOT_EQUALS Finance` matches findings that don\u2019t have the value `Finance` for the `Department` tag.\n\n`NOT_CONTAINS` and `NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Department NOT_CONTAINS Security AND Department NOT_CONTAINS Finance` match a finding that excludes both the `Security` and `Finance` values.\n\n`CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can\u2019t have both an `EQUALS` filter and a `NOT_EQUALS` filter on the same field. Combining filters in this way returns an error.\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules. 
For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -256371,7 +256359,7 @@ "additionalProperties": false, "properties": { "Comparison": { - "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . 
For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. 
`CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *AWS Security Hub User Guide* .", + "markdownDescription": "The condition to apply to a string value when filtering Security Hub findings.\n\nTo search for values that have the filter value, use one of the following comparison operators:\n\n- To search for values that include the filter value, use `CONTAINS` . For example, the filter `Title CONTAINS CloudFront` matches findings that have a `Title` that includes the string CloudFront.\n- To search for values that exactly match the filter value, use `EQUALS` . For example, the filter `AwsAccountId EQUALS 123456789012` only matches findings that have an account ID of `123456789012` .\n- To search for values that start with the filter value, use `PREFIX` . For example, the filter `ResourceRegion PREFIX us` matches findings that have a `ResourceRegion` that starts with `us` . A `ResourceRegion` that starts with a different value, such as `af` , `ap` , or `ca` , doesn't match.\n\n`CONTAINS` , `EQUALS` , and `PREFIX` filters on the same field are joined by `OR` . A finding matches if it matches any one of those filters. For example, the filters `Title CONTAINS CloudFront OR Title CONTAINS CloudWatch` match a finding that includes either `CloudFront` , `CloudWatch` , or both strings in the title.\n\nTo search for values that don\u2019t have the filter value, use one of the following comparison operators:\n\n- To search for values that exclude the filter value, use `NOT_CONTAINS` . For example, the filter `Title NOT_CONTAINS CloudFront` matches findings that have a `Title` that excludes the string CloudFront.\n- To search for values other than the filter value, use `NOT_EQUALS` . 
For example, the filter `AwsAccountId NOT_EQUALS 123456789012` only matches findings that have an account ID other than `123456789012` .\n- To search for values that don't start with the filter value, use `PREFIX_NOT_EQUALS` . For example, the filter `ResourceRegion PREFIX_NOT_EQUALS us` matches findings with a `ResourceRegion` that starts with a value other than `us` .\n\n`NOT_CONTAINS` , `NOT_EQUALS` , and `PREFIX_NOT_EQUALS` filters on the same field are joined by `AND` . A finding matches only if it matches all of those filters. For example, the filters `Title NOT_CONTAINS CloudFront AND Title NOT_CONTAINS CloudWatch` match a finding that excludes both `CloudFront` and `CloudWatch` in the title.\n\nYou can\u2019t have both a `CONTAINS` filter and a `NOT_CONTAINS` filter on the same field. Similarly, you can't provide both an `EQUALS` filter and a `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filter on the same field. Combining filters in this way returns an error. `CONTAINS` filters can only be used with other `CONTAINS` filters. `NOT_CONTAINS` filters can only be used with other `NOT_CONTAINS` filters.\n\nYou can combine `PREFIX` filters with `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters for the same field. Security Hub first processes the `PREFIX` filters, and then the `NOT_EQUALS` or `PREFIX_NOT_EQUALS` filters.\n\nFor example, for the following filters, Security Hub first identifies findings that have resource types that start with either `AwsIam` or `AwsEc2` . It then excludes findings that have a resource type of `AwsIamPolicy` and findings that have a resource type of `AwsEc2NetworkInterface` .\n\n- `ResourceType PREFIX AwsIam`\n- `ResourceType PREFIX AwsEc2`\n- `ResourceType NOT_EQUALS AwsIamPolicy`\n- `ResourceType NOT_EQUALS AwsEc2NetworkInterface`\n\n`CONTAINS` and `NOT_CONTAINS` operators can be used only with automation rules V1. 
`CONTAINS_WORD` operator is only supported in `GetFindingsV2` , `GetFindingStatisticsV2` , `GetResourcesV2` , and `GetResourceStatisticsV2` APIs. For more information, see [Automation rules](https://docs.aws.amazon.com/securityhub/latest/userguide/automation-rules.html) in the *Security Hub User Guide* .", "title": "Comparison", "type": "string" }, @@ -271884,12 +271872,12 @@ "type": "string" }, "DesktopArn": { - "markdownDescription": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or AppStream 2.0.", + "markdownDescription": "The Amazon Resource Name (ARN) of the desktop to stream from Amazon WorkSpaces, WorkSpaces Secure Browser, or WorkSpaces Applications.", "title": "DesktopArn", "type": "string" }, "DesktopEndpoint": { - "markdownDescription": "The URL for the identity provider login (only for environments that use AppStream 2.0).", + "markdownDescription": "The URL for the identity provider login (only for environments that use WorkSpaces Applications).", "title": "DesktopEndpoint", "type": "string" },