diff --git a/docs/reference/api-reference.md b/docs/reference/api-reference.md index 754d9eb6f..ee75a6e1f 100644 --- a/docs/reference/api-reference.md +++ b/docs/reference/api-reference.md @@ -2297,6 +2297,16 @@ from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. +## client.cat.circuitBreaker [_cat.circuit_breaker] +Get circuit breaker statistics.
+
+[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch#TODO)
+
+```ts
+client.cat.circuitBreaker()
+```
+
+
 ## client.cat.componentTemplates [_cat.component_templates] Get component templates. @@ -2792,6 +2802,16 @@ local cluster state. If `false` the list of selected nodes are computed from the cluster state of the master node. In both cases the coordinating node will send requests for further information to each selected node. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. +- **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of index that wildcard expressions can match. If the request can target data streams, this argument
+determines whether wildcard expressions match hidden data streams. Supports a list of values,
+such as `open,hidden`.
+- **`allow_no_indices` (Optional, boolean)**: If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only
+missing or closed indices. This behavior applies even if the request targets other open indices. For example,
+a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`.
+- **`ignore_throttled` (Optional, boolean)**: If `true`, concrete, expanded or aliased indices are ignored when frozen.
+- **`ignore_unavailable` (Optional, boolean)**: If `true`, missing or closed indices are not included in the response.
+- **`allow_closed` (Optional, boolean)**: If `true`, closed indices may be returned in the response. If `false`, the legacy behaviour
+of throwing an exception when the index pattern matches closed indices is kept.
 ## client.cat.shards [_cat.shards] Get shard information. @@ -5701,13 +5721,18 @@ To use the API, this parameter must be set to `true`. ## client.indices.downsample [_indices.downsample] Downsample an index. -Aggregate a time series (TSDS) index and store pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval. +Reduce the size of a time series (TSDS) index by keeping the last value or by pre-aggregating metrics:
+
+- When running in `aggregate` mode, it pre-calculates and stores statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`)
+for each metric field, grouped by a configured time interval and the time series dimensions.
+- When running in `last_value` mode, it keeps the last value for each metric field, grouped by the configured time interval and the time series dimensions.
+
 For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index. NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index.
-The source index must be read only (`index.blocks.write: true`). +The source index must be read-only (`index.blocks.write: true`).

[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-downsample)

```ts
client.indices.downsample({ index, target_index })
```

#### Request (object) [_request_indices.downsample]
- **`index` (string)**: Name of the time series index to downsample.
- **`target_index` (string)**: Name of the index to create.
-- **`config` (Optional, { fixed_interval })**
+- **`config` (Optional, { fixed_interval, sampling_method })**

## client.indices.exists [_indices.exists] Check indices. @@ -6025,6 +6050,16 @@ Supports a list of values, such as `open,hidden`. - **`master_timeout` (Optional, string \| -1 \| 0)**: Period to wait for a connection to the master node. If no response is received before the timeout expires, the request fails and returns an error. +## client.indices.getAllSampleConfiguration [_indices.get_all_sample_configuration] +Get sampling configurations for all indices and data streams.
+
+[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-all-sample-configuration)
+
+```ts
+client.indices.getAllSampleConfiguration()
+```
+
+
 ## client.indices.getDataLifecycle [_indices.get_data_lifecycle] Get data stream lifecycles. @@ -6489,7 +6524,7 @@ To target all data streams use `*` or `_all`. - **`data_retention` (Optional, string \| -1 \| 0)**: If defined, every document added to this data stream will be stored at least for this time frame. Any time after this duration the document could be deleted. When empty, every document in this data stream will be stored indefinitely. -- **`downsampling` (Optional, { rounds })**: The downsampling configuration to execute for the managed backing index after rollover. +- **`downsampling` (Optional, { after, fixed_interval }[])**: The downsampling configuration to execute for the managed backing index after rollover. - **`enabled` (Optional, boolean)**: If defined, it turns data stream lifecycle on/off (`true`/`false`) for this data stream. A data stream lifecycle that's disabled (enabled: `false`) will have no effect on the data stream. - **`expand_wildcards` (Optional, Enum("all" \| "open" \| "closed" \| "hidden" \| "none") \| Enum("all" \| "open" \| "closed" \| "hidden" \| "none")[])**: Type of data stream that wildcard patterns can match. @@ -7568,7 +7603,7 @@ client.inference.completion({ inference_id, input }) - **`inference_id` (string)**: The inference Id - **`input` (string \| string[])**: Inference input. Either a string or an array of strings. -- **`task_settings` (Optional, User-defined value)**: Optional task settings +- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.

## client.inference.delete [_inference.delete]
Delete an inference endpoint.

[Endpoint documentation](https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-inference-delete)

```ts
client.inference.delete({ inference_id })
```

#### Request (object) [_request_inference.delete]
- **`inference_id` (string)**: The inference identifier.
- **`task_type` (Optional, Enum("sparse_embedding" \| "text_embedding" \| "rerank" \| "completion" \| "chat_completion"))**: The task type
-- **`dry_run` (Optional, boolean)**: When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned.
+- **`dry_run` (Optional, boolean)**: When true, checks the semantic_text fields and inference processors that reference the endpoint and returns them in a list, but does not delete the endpoint.
- **`force` (Optional, boolean)**: When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields.

## client.inference.get [_inference.get] @@ -7801,7 +7836,7 @@ client.inference.putAnthropic({ task_type, anthropic_inference_id, service, serv The only valid task type for the model to perform is `completion`. - **`anthropic_inference_id` (string)**: The unique identifier of the inference endpoint. - **`service` (Enum("anthropic"))**: The type of service supported for the specified task type. In this case, `anthropic`. -- **`service_settings` ({ api_key, model_id, rate_limit })**: Settings used to install the inference model. These settings are specific to the `watsonxai` service. +- **`service_settings` ({ api_key, model_id, rate_limit })**: Settings used to install the inference model. These settings are specific to the `anthropic` service. - **`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, separator_group, separators, strategy })**: The chunking configuration object. - **`task_settings` (Optional, { max_tokens, temperature, top_k, top_p })**: Settings to configure the inference task. These settings are specific to the task type you specified. @@ -7824,7 +7859,7 @@ client.inference.putAzureaistudio({ task_type, azureaistudio_inference_id, servi - **`task_type` (Enum("completion" \| "rerank" \| "text_embedding"))**: The type of the inference task that the model will perform. - **`azureaistudio_inference_id` (string)**: The unique identifier of the inference endpoint. - **`service` (Enum("azureaistudio"))**: The type of service supported for the specified task type. In this case, `azureaistudio`. -- **`service_settings` ({ api_key, endpoint_type, target, provider, rate_limit })**: Settings used to install the inference model. These settings are specific to the `openai` service. +- **`service_settings` ({ api_key, endpoint_type, target, provider, rate_limit })**: Settings used to install the inference model. These settings are specific to the `azureaistudio` service. - **`chunking_settings` (Optional, { max_chunk_size, overlap, sentence_overlap, separator_group, separators, strategy })**: The chunking configuration object. - **`task_settings` (Optional, { do_sample, max_new_tokens, temperature, top_p, user, return_documents, top_n })**: Settings to configure the inference task. These settings are specific to the task type you specified. @@ -8346,7 +8381,7 @@ client.inference.sparseEmbedding({ inference_id, input }) - **`inference_id` (string)**: The inference Id - **`input` (string \| string[])**: Inference input. Either a string or an array of strings. -- **`task_settings` (Optional, User-defined value)**: Optional task settings +- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
- **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.

## client.inference.streamCompletion [_inference.stream_completion] @@ -8372,7 +8407,7 @@ client.inference.streamCompletion({ inference_id, input }) It can be a single string or an array.

NOTE: Inference endpoints for the completion task type currently only support a single string as input.
-- **`task_settings` (Optional, User-defined value)**: Optional task settings
+- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
- **`timeout` (Optional, string \| -1 \| 0)**: The amount of time to wait for the inference request to complete.

## client.inference.textEmbedding [_inference.text_embedding] @@ -8400,7 +8435,7 @@ Accepted values depend on the configured inference service, refer to the relevant service documentation. > info > The `input_type` parameter specified on the root level of the request body will take precedence over the `input_type` parameter specified in `task_settings`. -- **`task_settings` (Optional, User-defined value)**: Optional task settings +- **`task_settings` (Optional, User-defined value)**: Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. - **`timeout` (Optional, string \| -1 \| 0)**: Specifies the amount of time to wait for the inference request to complete.

## client.inference.update [_inference.update] @@ -12996,7 +13031,8 @@ It must not be negative. By default, you cannot page through more than 10,000 hits using the `from` and `size` parameters. To page through more hits, use the `search_after` parameter. - **`sort` (Optional, string \| { _score, _doc, _geo_distance, _script } \| string \| { _score, _doc, _geo_distance, _script }[])**: The sort definition. -You can sort on `username`, `roles`, or `enabled`. +You can sort on `name`, `description`, `metadata`, `applications.application`, `applications.privileges`,
+and `applications.resources`.
In addition, sort can also be applied to the `_doc` field to sort by index order.
- **`size` (Optional, number)**: The number of hits to return.
It must not be negative.
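+
+For example, a minimal sketch that pages through roles sorted by name (the query and paging values are illustrative):
+
+```ts
+const roles = await client.security.queryRole({
+  query: { match_all: {} },
+  sort: ['name'],
+  size: 25
+})
+```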
diff --git a/src/api/api/cat.ts b/src/api/api/cat.ts
index 213701703..eef10f6c4 100644
--- a/src/api/api/cat.ts
+++ b/src/api/api/cat.ts
@@ -58,6 +58,13 @@ export default class Cat {
        'master_timeout'
      ]
    },
+    'cat.circuit_breaker': {
+      path: [
+        'circuit_breaker_patterns'
+      ],
+      body: [],
+      query: []
+    },
    'cat.component_templates': {
      path: [
        'name'
@@ -251,7 +258,12 @@ export default class Cat {
        'h',
        's',
        'local',
-        'master_timeout'
+        'master_timeout',
+        'expand_wildcards',
+        'allow_no_indices',
+        'ignore_throttled',
+        'ignore_unavailable',
+        'allow_closed'
      ]
    },
    'cat.shards': {
@@ -451,6 +463,61 @@ export default class Cat {
    return await this.transport.request({ path, method, querystring, body, meta }, options)
  }

+  /**
+    * Get circuit breaker statistics.
+    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch#TODO | Elasticsearch API documentation}
+    */
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<T.TODO>
+  async circuitBreaker (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<any> {
+    const {
+      path: acceptedPath
+    } = this[kAcceptedParams]['cat.circuit_breaker']
+
+    const userQuery = params?.querystring
+    const querystring: Record<string, any> = userQuery != null ? { ...userQuery } : {}
+
+    let body: Record<string, any> | string | undefined
+    const userBody = params?.body
+    if (userBody != null) {
+      if (typeof userBody === 'string') {
+        body = userBody
+      } else {
+        body = { ...userBody }
+      }
+    }
+
+    params = params ?? {}
+    for (const key in params) {
+      if (acceptedPath.includes(key)) {
+        continue
+      } else if (key !== 'body' && key !== 'querystring') {
+        querystring[key] = params[key]
+      }
+    }
+
+    let method = ''
+    let path = ''
+    if (params.circuit_breaker_patterns != null) {
+      method = 'GET'
+      path = `/_cat/circuit_breaker/${encodeURIComponent(params.circuit_breaker_patterns.toString())}`
+    } else {
+      method = 'GET'
+      path = '/_cat/circuit_breaker'
+    }
+    const meta: TransportRequestMetadata = {
+      name: 'cat.circuit_breaker',
+      pathParts: {
+        circuit_breaker_patterns: params.circuit_breaker_patterns
+      },
+      acceptedParams: [
+        'circuit_breaker_patterns'
+      ]
+    }
+    return await this.transport.request({ path, method, querystring, body, meta }, options)
+  }
+
  /**
    * Get component templates. Get information about component templates in a cluster. Component templates are building blocks for constructing index templates that specify index mappings, settings, and aliases. IMPORTANT: CAT APIs are only intended for human consumption using the command line or Kibana console. They are not intended for use by applications. For application consumption, use the get component template API.
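+    * A quick usage sketch (the name pattern is hypothetical):
+    * ```ts
+    * const templates = await client.cat.componentTemplates({ name: 'logs-*' })
+    * ```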
* @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-cat-component-templates | Elasticsearch API documentation}
    */
@@ -1434,7 +1501,12 @@ export default class Cat {
        'h',
        's',
        'local',
-        'master_timeout'
+        'master_timeout',
+        'expand_wildcards',
+        'allow_no_indices',
+        'ignore_throttled',
+        'ignore_unavailable',
+        'allow_closed'
      ]
    }
    return await this.transport.request({ path, method, querystring, body, meta }, options)
diff --git a/src/api/api/indices.ts b/src/api/api/indices.ts
index 316448364..b1fc1a736 100644
--- a/src/api/api/indices.ts
+++ b/src/api/api/indices.ts
@@ -399,6 +399,11 @@ export default class Indices {
        'master_timeout'
      ]
    },
+    'indices.get_all_sample_configuration': {
+      path: [],
+      body: [],
+      query: []
+    },
    'indices.get_data_lifecycle': {
      path: [
        'name'
@@ -2054,7 +2059,7 @@ export default class Indices {
  }

  /**
-    * Downsample an index. Aggregate a time series (TSDS) index and store pre-computed statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field grouped by a configured time interval. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index. NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read only (`index.blocks.write: true`).
+    * Downsample an index. Reduce the size of a time series (TSDS) index by keeping the last value or by pre-aggregating metrics: - When running in `aggregate` mode, it pre-calculates and stores statistical summaries (`min`, `max`, `sum`, `value_count` and `avg`) for each metric field, grouped by a configured time interval and the time series dimensions. - When running in `last_value` mode, it keeps the last value for each metric field, grouped by the configured time interval and the time series dimensions. For example, a TSDS index that contains metrics sampled every 10 seconds can be downsampled to an hourly index. All documents within an hour interval are summarized and stored as a single document in the downsample index. NOTE: Only indices in a time series data stream are supported. Neither field nor document level security can be defined on the source index. The source index must be read-only (`index.blocks.write: true`).
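+    * A minimal usage sketch (index and interval values are hypothetical; the source index must already be write-blocked):
+    * ```ts
+    * await client.indices.downsample({
+    *   index: '.ds-metrics-2025.01.01-000001',
+    *   target_index: 'metrics-downsampled-1h',
+    *   config: { fixed_interval: '1h', sampling_method: 'last_value' }
+    * })
+    * ```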
* @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-downsample | Elasticsearch API documentation}
    */
  async downsample (this: That, params: T.IndicesDownsampleRequest, options?: TransportRequestOptionsWithOutMeta): Promise<T.IndicesDownsampleResponse>
@@ -2673,6 +2678,50 @@
    return await this.transport.request({ path, method, querystring, body, meta }, options)
  }

+  /**
+    * Get sampling configurations for all indices and data streams.
+    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-all-sample-configuration | Elasticsearch API documentation}
+    */
+  async getAllSampleConfiguration (this: That, params?: T.TODO, options?: TransportRequestOptionsWithOutMeta): Promise<T.TODO>
+  async getAllSampleConfiguration (this: That, params?: T.TODO, options?: TransportRequestOptionsWithMeta): Promise<TransportResult<T.TODO, unknown>>
+  async getAllSampleConfiguration (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<T.TODO>
+  async getAllSampleConfiguration (this: That, params?: T.TODO, options?: TransportRequestOptions): Promise<any> {
+    const {
+      path: acceptedPath
+    } = this[kAcceptedParams]['indices.get_all_sample_configuration']
+
+    const userQuery = params?.querystring
+    const querystring: Record<string, any> = userQuery != null ? { ...userQuery } : {}
+
+    let body: Record<string, any> | string | undefined
+    const userBody = params?.body
+    if (userBody != null) {
+      if (typeof userBody === 'string') {
+        body = userBody
+      } else {
+        body = { ...userBody }
+      }
+    }
+
+    params = params ?? {}
+    for (const key in params) {
+      if (acceptedPath.includes(key)) {
+        continue
+      } else if (key !== 'body' && key !== 'querystring') {
+        querystring[key] = params[key]
+      }
+    }
+
+    const method = 'GET'
+    const path = '/_sample/config'
+    const meta: TransportRequestMetadata = {
+      name: 'indices.get_all_sample_configuration',
+      acceptedParams: [
+      ]
+    }
+    return await this.transport.request({ path, method, querystring, body, meta }, options)
+  }
+
  /**
    * Get data stream lifecycles. Get the data stream lifecycle configuration of one or more data streams.
    * @see {@link https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-indices-get-data-lifecycle | Elasticsearch API documentation}
diff --git a/src/api/types.ts b/src/api/types.ts
index a25f974c0..427c55aec 100644
--- a/src/api/types.ts
+++ b/src/api/types.ts
@@ -4417,8 +4417,8 @@ export interface QueryVectorBuilder {
}

export interface RRFRetriever extends RetrieverBase {
-  /** A list of child retrievers to specify which sets of returned top documents will have the RRF formula applied to them. */
-  retrievers: RetrieverContainer[]
+  /** A list of child retrievers to specify which sets of returned top documents will have the RRF formula applied to them. Each retriever can optionally include a weight parameter. */
+  retrievers: RRFRetrieverEntry[]
  /** This value determines how much influence documents in individual result sets per query have over the final ranked result set. */
  rank_constant?: integer
  /** This value determines the size of the individual result sets per query. */
  rank_window_size?: integer
  fields?: string[]
}

+export interface RRFRetrieverComponent {
+  /** The nested retriever configuration. */
+  retriever: RetrieverContainer
+  /** Weight multiplier for this retriever's contribution to the RRF score. Higher values increase influence. Defaults to 1.0 if not specified. Must be non-negative.
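For example, `retrievers: [{ retriever: a, weight: 2.0 }, b]` gives documents from `a` twice the influence of the plain entry `b` (names are illustrative).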
*/
+  weight?: float
+}
+
+export type RRFRetrieverEntry = RetrieverContainer | RRFRetrieverComponent
+
export interface RankBase {
}
@@ -5827,7 +5836,7 @@ export interface AggregationsGeoLineAggregation {
  point: AggregationsGeoLinePoint
  /** The name of the numeric field to use as the sort key for ordering the points.
    * When the `geo_line` aggregation is nested inside a `time_series` aggregation, this field defaults to `@timestamp`, and any other value will result in error. */
-  sort: AggregationsGeoLineSort
+  sort?: AggregationsGeoLineSort
  /** When `true`, returns an additional array of the sort values in the feature properties. */
  include_sort?: boolean
  /** The order in which the line is sorted (ascending or descending). */
@@ -6335,7 +6344,7 @@ export interface AggregationsPercentilesAggregation extends AggregationsFormatMetricAggregationBase {
    * Set to `false` to disable this behavior. */
  keyed?: boolean
  /** The percentiles to calculate. */
-  percents?: double[]
+  percents?: double | double[]
  /** Uses the alternative High Dynamic Range Histogram algorithm to calculate percentiles. */
  hdr?: AggregationsHdrMethod
  /** Sets parameters for the default TDigest algorithm used to calculate percentiles. */
@@ -14069,10 +14078,25 @@ export interface CatSegmentsRequest extends CatCatRequestBase {
  local?: boolean
  /** Period to wait for a connection to the master node. */
  master_timeout?: Duration
+  /** Type of index that wildcard expressions can match. If the request can target data streams, this argument
+    * determines whether wildcard expressions match hidden data streams. Supports comma-separated values,
+    * such as `open,hidden`. */
+  expand_wildcards?: ExpandWildcards
+  /** If `false`, the request returns an error if any wildcard expression, index alias, or `_all` value targets only
+    * missing or closed indices. This behavior applies even if the request targets other open indices. For example,
+    * a request targeting `foo*,bar*` returns an error if an index starts with `foo` but no index starts with `bar`. */
+  allow_no_indices?: boolean
+  /** If `true`, concrete, expanded or aliased indices are ignored when frozen. */
+  ignore_throttled?: boolean
+  /** If `true`, missing or closed indices are not included in the response. */
+  ignore_unavailable?: boolean
+  /** If `true`, closed indices may be returned in the response. If `false`, the legacy behaviour
+    * of throwing an exception when the index pattern matches closed indices is kept. */
+  allow_closed?: boolean
  /** All values in `body` will be added to the request body. */
-  body?: string | { [key: string]: any } & { index?: never, h?: never, s?: never, local?: never, master_timeout?: never }
+  body?: string | { [key: string]: any } & { index?: never, h?: never, s?: never, local?: never, master_timeout?: never, expand_wildcards?: never, allow_no_indices?: never, ignore_throttled?: never, ignore_unavailable?: never, allow_closed?: never }
  /** All values in `querystring` will be added to the request querystring.
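For example, `querystring: { bytes: 'kb' }` would be sent as `?bytes=kb` (illustrative cat parameter).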
*/
-  querystring?: { [key: string]: any } & { index?: never, h?: never, s?: never, local?: never, master_timeout?: never }
+  querystring?: { [key: string]: any } & { index?: never, h?: never, s?: never, local?: never, master_timeout?: never, expand_wildcards?: never, allow_no_indices?: never, ignore_throttled?: never, ignore_unavailable?: never, allow_closed?: never }
}

export type CatSegmentsResponse = CatSegmentsSegmentsRecord[]
@@ -16027,7 +16051,7 @@ export interface ClusterAllocationExplainDiskUsage {
}

export interface ClusterAllocationExplainNodeAllocationExplanation {
-  deciders: ClusterAllocationExplainAllocationDecision[]
+  deciders?: ClusterAllocationExplainAllocationDecision[]
  node_attributes: Record<string, string>
  node_decision: ClusterAllocationExplainDecision
  node_id: Id
  node_name: Name
  roles: NodeRoles
  store?: ClusterAllocationExplainAllocationStore
  transport_address: TransportAddress
-  weight_ranking: integer
+  weight_ranking?: integer
}

export interface ClusterAllocationExplainNodeDiskUsage {
@@ -19457,13 +19481,15 @@ export interface IndicesDataStreamVisibility {
}

export interface IndicesDownsampleConfig {
  /** The interval at which to aggregate the original time series index. */
  fixed_interval: DurationLarge
+  /** The sampling method used to reduce the documents; it can be either `aggregate` or `last_value`. Defaults to `aggregate`. */
+  sampling_method?: IndicesSamplingMethod
}

export interface IndicesDownsamplingRound {
  /** The duration since rollover when this downsampling round should execute */
  after: Duration
-  /** The downsample configuration to execute. */
-  config: IndicesDownsampleConfig
+  /** The downsample interval. */
+  fixed_interval: DurationLarge
}

export interface IndicesFailureStore {
@@ -19854,6 +19880,8 @@ export interface IndicesRetentionLease {
  period: Duration
}

+export type IndicesSamplingMethod = 'aggregate' | 'last_value'
+
export interface IndicesSearchIdle {
  after?: Duration
}
@@ -21451,7 +21479,7 @@ export interface IndicesPutDataLifecycleRequest extends RequestBase {
    * When empty, every document in this data stream will be stored indefinitely. */
  data_retention?: Duration
  /** The downsampling configuration to execute for the managed backing index after rollover. */
-  downsampling?: IndicesDataStreamLifecycleDownsampling
+  downsampling?: IndicesDownsamplingRound[]
  /** If defined, it turns data stream lifecycle on/off (`true`/`false`) for this data stream. A data stream lifecycle
    * that's disabled (enabled: `false`) will have no effect on the data stream. */
  enabled?: boolean
@@ -24174,7 +24202,7 @@ export interface InferenceRequestChatCompletion {
    * Requests should generally only add new messages from the user (role `user`).
    * The other message roles (`assistant`, `system`, or `tool`) should generally only be copied from the response to a previous completion request, such that the messages array is built up throughout a conversation. */
  messages: InferenceMessage[]
-  /** The ID of the model to use. */
+  /** The ID of the model to use. By default, the model ID is set to the value included when creating the inference endpoint. */
  model?: string
  /** The upper bound limit for the number of tokens that can be generated for a completion request. */
  max_completion_tokens?: long
@@ -24418,7 +24446,7 @@ export interface InferenceCompletionRequest extends RequestBase {
  /** Inference input.
    * Either a string or an array of strings.
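* For example, `input: ['first text', 'second text']` runs inference on both strings in a single request.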
*/
  input: string | string[]
-  /** Optional task settings */
+  /** Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. */
  task_settings?: InferenceTaskSettings
  /** All values in `body` will be added to the request body. */
  body?: string | { [key: string]: any } & { inference_id?: never, timeout?: never, input?: never, task_settings?: never }
@@ -24433,7 +24461,7 @@ export interface InferenceDeleteRequest extends RequestBase {
  task_type?: InferenceTaskType
  /** The inference identifier. */
  inference_id: Id
-  /** When true, the endpoint is not deleted and a list of ingest processors which reference this endpoint is returned. */
+  /** When true, checks the semantic_text fields and inference processors that reference the endpoint and returns them in a list, but does not delete the endpoint. */
  dry_run?: boolean
  /** When true, the inference endpoint is forcefully deleted even if it is still being used by ingest processors or semantic text fields. */
  force?: boolean
@@ -24618,7 +24646,7 @@ export interface InferencePutAnthropicRequest extends RequestBase {
  chunking_settings?: InferenceInferenceChunkingSettings
  /** The type of service supported for the specified task type. In this case, `anthropic`. */
  service: InferenceAnthropicServiceType
-  /** Settings used to install the inference model. These settings are specific to the `watsonxai` service. */
+  /** Settings used to install the inference model. These settings are specific to the `anthropic` service. */
  service_settings: InferenceAnthropicServiceSettings
  /** Settings to configure the inference task.
    * These settings are specific to the task type you specified. */
@@ -24642,7 +24670,7 @@ export interface InferencePutAzureaistudioRequest extends RequestBase {
  chunking_settings?: InferenceInferenceChunkingSettings
  /** The type of service supported for the specified task type. In this case, `azureaistudio`. */
  service: InferenceAzureAiStudioServiceType
-  /** Settings used to install the inference model. These settings are specific to the `openai` service. */
+  /** Settings used to install the inference model. These settings are specific to the `azureaistudio` service. */
  service_settings: InferenceAzureAiStudioServiceSettings
  /** Settings to configure the inference task.
    * These settings are specific to the task type you specified. */
@@ -25058,7 +25086,7 @@ export interface InferenceSparseEmbeddingRequest extends RequestBase {
  /** Inference input.
    * Either a string or an array of strings. */
  input: string | string[]
-  /** Optional task settings */
+  /** Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. */
  task_settings?: InferenceTaskSettings
  /** All values in `body` will be added to the request body. */
  body?: string | { [key: string]: any } & { inference_id?: never, timeout?: never, input?: never, task_settings?: never }
@@ -25078,7 +25106,7 @@ export interface InferenceStreamCompletionRequest extends RequestBase {
    *
    * NOTE: Inference endpoints for the completion task type currently only support a single string as input. */
  input: string | string[]
-  /** Optional task settings */
+  /** Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service.
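For example, a hypothetical `task_settings: { user: 'my-app' }` on an OpenAI-style endpoint would apply to this request only; the available settings depend on the service.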
*/
  task_settings?: InferenceTaskSettings
  /** All values in `body` will be added to the request body. */
  body?: string | { [key: string]: any } & { inference_id?: never, timeout?: never, input?: never, task_settings?: never }
@@ -25107,7 +25135,7 @@ export interface InferenceTextEmbeddingRequest extends RequestBase {
    * > info
    * > The `input_type` parameter specified on the root level of the request body will take precedence over the `input_type` parameter specified in `task_settings`. */
  input_type?: string
-  /** Optional task settings */
+  /** Task settings for the individual inference request. These settings are specific to the task type you specified and override the task settings specified when initializing the service. */
  task_settings?: InferenceTaskSettings
  /** All values in `body` will be added to the request body. */
  body?: string | { [key: string]: any } & { inference_id?: never, timeout?: never, input?: never, input_type?: never, task_settings?: never }
@@ -35252,7 +35280,8 @@ export interface SecurityQueryRoleRequest extends RequestBase {
    * To page through more hits, use the `search_after` parameter. */
  from?: integer
  /** The sort definition.
-    * You can sort on `username`, `roles`, or `enabled`.
+    * You can sort on `name`, `description`, `metadata`, `applications.application`, `applications.privileges`,
+    * and `applications.resources`.
    * In addition, sort can also be applied to the `_doc` field to sort by index order. */
  sort?: Sort
  /** The number of hits to return.