68 changes: 37 additions & 31 deletions elasticsearch/_async/client/__init__.py
Original file line number Diff line number Diff line change
@@ -565,8 +565,8 @@ async def bulk(
"""
.. raw:: html

<p>Bulk index or delete documents.
Perform multiple <code>index</code>, <code>create</code>, <code>delete</code>, and <code>update</code> actions in a single request.
<p>Bulk index or delete documents.</p>
<p>Perform multiple <code>index</code>, <code>create</code>, <code>delete</code>, and <code>update</code> actions in a single request.
This reduces overhead and can greatly increase indexing speed.</p>
<p>If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or index alias:</p>
<ul>
@@ -771,8 +771,8 @@ async def clear_scroll(
"""
.. raw:: html

<p>Clear a scrolling search.
Clear the search context and results for a scrolling search.</p>
<p>Clear a scrolling search.</p>
<p>Clear the search context and results for a scrolling search.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-clear-scroll>`_
@@ -825,8 +825,8 @@ async def close_point_in_time(
"""
.. raw:: html

<p>Close a point in time.
A point in time must be opened explicitly before being used in search requests.
<p>Close a point in time.</p>
<p>A point in time must be opened explicitly before being used in search requests.
The <code>keep_alive</code> parameter tells Elasticsearch how long it should persist.
A point in time is automatically closed when the <code>keep_alive</code> period has elapsed.
However, keeping points in time has a cost; close them as soon as they are no longer required for search requests.</p>
@@ -906,8 +906,8 @@ async def count(
"""
.. raw:: html

<p>Count search results.
Get the number of documents matching a query.</p>
<p>Count search results.</p>
<p>Get the number of documents matching a query.</p>
<p>The query can be provided either by using a simple query string as a parameter, or by defining Query DSL within the request body.
The query is optional. When no query is provided, the API uses <code>match_all</code> to count all the documents.</p>
<p>The count API supports multi-target syntax. You can run a single count API search across multiple data streams and indices.</p>
@@ -1643,11 +1643,11 @@ async def delete_by_query_rethrottle(
self,
*,
task_id: str,
requests_per_second: float,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
requests_per_second: t.Optional[float] = None,
) -> ObjectApiResponse[t.Any]:
"""
.. raw:: html
@@ -1665,9 +1665,13 @@ async def delete_by_query_rethrottle(
"""
if task_id in SKIP_IN_PATH:
raise ValueError("Empty value passed for parameter 'task_id'")
if requests_per_second is None:
raise ValueError("Empty value passed for parameter 'requests_per_second'")
__path_parts: t.Dict[str, str] = {"task_id": _quote(task_id)}
__path = f'/_delete_by_query/{__path_parts["task_id"]}/_rethrottle'
__query: t.Dict[str, t.Any] = {}
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
if error_trace is not None:
__query["error_trace"] = error_trace
if filter_path is not None:
@@ -1676,8 +1680,6 @@ async def delete_by_query_rethrottle(
__query["human"] = human
if pretty is not None:
__query["pretty"] = pretty
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
__headers = {"accept": "application/json"}
return await self.perform_request( # type: ignore[return-value]
"POST",
@@ -1703,8 +1705,8 @@ async def delete_script(
"""
.. raw:: html

<p>Delete a script or search template.
Deletes a stored script or search template.</p>
<p>Delete a script or search template.</p>
<p>Deletes a stored script or search template.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-delete-script>`_
@@ -2015,8 +2017,8 @@ async def explain(
"""
.. raw:: html

<p>Explain a document match result.
Get information about why a specific document matches, or doesn't match, a query.
<p>Explain a document match result.</p>
<p>Get information about why a specific document matches, or doesn't match, a query.
It computes a score explanation for a query and a specific document.</p>


@@ -2419,8 +2421,8 @@ async def get_script(
"""
.. raw:: html

<p>Get a script or search template.
Retrieves a stored script or search template.</p>
<p>Get a script or search template.</p>
<p>Retrieves a stored script or search template.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-get-script>`_
@@ -2656,8 +2658,8 @@ async def health_report(
"""
.. raw:: html

<p>Get the cluster health.
Get a report with the health status of an Elasticsearch cluster.
<p>Get the cluster health.</p>
<p>Get a report with the health status of an Elasticsearch cluster.
The report contains a list of indicators that compose Elasticsearch functionality.</p>
<p>Each indicator has a health status of: green, unknown, yellow or red.
The indicator will provide an explanation and metadata describing the reason for its current health status.</p>
@@ -2969,8 +2971,8 @@ async def info(
"""
.. raw:: html

<p>Get cluster info.
Get basic build, version, and cluster information.
<p>Get cluster info.</p>
<p>Get basic build, version, and cluster information.
::: In Serverless, this API is retained for backward compatibility only. Some response fields, such as the version number, should be ignored.</p>


@@ -3664,8 +3666,8 @@ async def put_script(
"""
.. raw:: html

<p>Create or update a script or search template.
Creates or updates a stored script or search template.</p>
<p>Create or update a script or search template.</p>
<p>Creates or updates a stored script or search template.</p>


`<https://www.elastic.co/docs/api/doc/elasticsearch/operation/operation-put-script>`_
@@ -3999,11 +4001,11 @@ async def reindex_rethrottle(
self,
*,
task_id: str,
requests_per_second: float,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
requests_per_second: t.Optional[float] = None,
) -> ObjectApiResponse[t.Any]:
"""
.. raw:: html
@@ -4027,9 +4029,13 @@ async def reindex_rethrottle(
"""
if task_id in SKIP_IN_PATH:
raise ValueError("Empty value passed for parameter 'task_id'")
if requests_per_second is None:
raise ValueError("Empty value passed for parameter 'requests_per_second'")
__path_parts: t.Dict[str, str] = {"task_id": _quote(task_id)}
__path = f'/_reindex/{__path_parts["task_id"]}/_rethrottle'
__query: t.Dict[str, t.Any] = {}
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
if error_trace is not None:
__query["error_trace"] = error_trace
if filter_path is not None:
@@ -4038,8 +4044,6 @@ async def reindex_rethrottle(
__query["human"] = human
if pretty is not None:
__query["pretty"] = pretty
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
__headers = {"accept": "application/json"}
return await self.perform_request( # type: ignore[return-value]
"POST",
@@ -6070,8 +6074,8 @@ async def update_by_query(
"""
.. raw:: html

<p>Update documents.
Updates documents that match the specified query.
<p>Update documents.</p>
<p>Updates documents that match the specified query.
If no query is specified, performs an update on every document in the data stream or index without modifying the source, which is useful for picking up mapping changes.</p>
<p>If the Elasticsearch security features are enabled, you must have the following index privileges for the target data stream, index, or alias:</p>
<ul>
@@ -6350,11 +6354,11 @@ async def update_by_query_rethrottle(
self,
*,
task_id: str,
requests_per_second: float,
error_trace: t.Optional[bool] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
human: t.Optional[bool] = None,
pretty: t.Optional[bool] = None,
requests_per_second: t.Optional[float] = None,
) -> ObjectApiResponse[t.Any]:
"""
.. raw:: html
@@ -6372,9 +6376,13 @@ async def update_by_query_rethrottle(
"""
if task_id in SKIP_IN_PATH:
raise ValueError("Empty value passed for parameter 'task_id'")
if requests_per_second is None:
raise ValueError("Empty value passed for parameter 'requests_per_second'")
__path_parts: t.Dict[str, str] = {"task_id": _quote(task_id)}
__path = f'/_update_by_query/{__path_parts["task_id"]}/_rethrottle'
__query: t.Dict[str, t.Any] = {}
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
if error_trace is not None:
__query["error_trace"] = error_trace
if filter_path is not None:
Expand All @@ -6383,8 +6391,6 @@ async def update_by_query_rethrottle(
__query["human"] = human
if pretty is not None:
__query["pretty"] = pretty
if requests_per_second is not None:
__query["requests_per_second"] = requests_per_second
__headers = {"accept": "application/json"}
return await self.perform_request( # type: ignore[return-value]
"POST",
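The three `_rethrottle` hunks above share one pattern: `requests_per_second` moves ahead of the optional keyword parameters and gains an explicit `None` check, so a missing throttle value now fails fast with a `ValueError` instead of being silently omitted from the query string. A minimal sketch of that validation and request-building logic is below; `SKIP_IN_PATH` and the quoting helper are simplified stand-ins for the client's internals, not the library's actual code:

```python
from urllib.parse import quote

# Simplified stand-in for the client's SKIP_IN_PATH sentinel tuple.
SKIP_IN_PATH = (None, "", b"", [], ())


def build_rethrottle(task_id, requests_per_second):
    """Validate parameters and build path/query for a *_rethrottle call."""
    if task_id in SKIP_IN_PATH:
        raise ValueError("Empty value passed for parameter 'task_id'")
    # New in this diff: the throttle value is required and checked up front.
    if requests_per_second is None:
        raise ValueError("Empty value passed for parameter 'requests_per_second'")
    path = f"/_delete_by_query/{quote(task_id, safe='')}/_rethrottle"
    # As in the diff, requests_per_second is added to the query string first.
    query = {"requests_per_second": requests_per_second}
    return path, query
```

Note the check is `is None` rather than truthiness, so values such as `-1` (disable throttling) and `0.0` are still forwarded to the server.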
38 changes: 38 additions & 0 deletions elasticsearch/_async/client/cat.py
@@ -3301,10 +3301,20 @@ async def segments(
self,
*,
index: t.Optional[t.Union[str, t.Sequence[str]]] = None,
allow_closed: t.Optional[bool] = None,
allow_no_indices: t.Optional[bool] = None,
bytes: t.Optional[
t.Union[str, t.Literal["b", "gb", "kb", "mb", "pb", "tb"]]
] = None,
error_trace: t.Optional[bool] = None,
expand_wildcards: t.Optional[
t.Union[
t.Sequence[
t.Union[str, t.Literal["all", "closed", "hidden", "none", "open"]]
],
t.Union[str, t.Literal["all", "closed", "hidden", "none", "open"]],
]
] = None,
filter_path: t.Optional[t.Union[str, t.Sequence[str]]] = None,
format: t.Optional[str] = None,
h: t.Optional[
@@ -3355,6 +3365,8 @@ async def segments(
] = None,
help: t.Optional[bool] = None,
human: t.Optional[bool] = None,
ignore_throttled: t.Optional[bool] = None,
ignore_unavailable: t.Optional[bool] = None,
local: t.Optional[bool] = None,
master_timeout: t.Optional[t.Union[str, t.Literal[-1], t.Literal[0]]] = None,
pretty: t.Optional[bool] = None,
@@ -3378,6 +3390,14 @@ async def segments(
:param index: A comma-separated list of data streams, indices, and aliases used
to limit the request. Supports wildcards (`*`). To target all data streams
and indices, omit this parameter or use `*` or `_all`.
:param allow_closed: If true, allow closed indices to be returned in the response;
if false, keep the legacy behaviour of throwing an exception if the index
pattern matches closed indices.
:param allow_no_indices: If false, the request returns an error if any wildcard
expression, index alias, or _all value targets only missing or closed indices.
This behavior applies even if the request targets other open indices. For
example, a request targeting foo*,bar* returns an error if an index starts
with foo but no index starts with bar.
:param bytes: Sets the units for columns that contain a byte-size value. Note
that byte-size value units work in terms of powers of 1024. For instance
`1kb` means 1024 bytes, not 1000 bytes. If omitted, byte-size values are
@@ -3386,12 +3406,20 @@ async def segments(
least `1.0`. If given, byte-size values are rendered as an integer with no
suffix, representing the value of the column in the chosen unit. Values that
are not an exact multiple of the chosen unit are rounded down.
:param expand_wildcards: Type of index that wildcard expressions can match. If
the request can target data streams, this argument determines whether wildcard
expressions match hidden data streams. Supports comma-separated values, such
as open,hidden.
:param format: Specifies the format to return the columnar data in, can be set
to `text`, `json`, `cbor`, `yaml`, or `smile`.
:param h: A comma-separated list of column names to display. It supports simple
wildcards.
:param help: When set to `true`, outputs the available columns. This option can't
be combined with any other query string option.
:param ignore_throttled: If true, concrete, expanded or aliased indices are ignored
when frozen.
:param ignore_unavailable: If true, missing or closed indices are not included
in the response.
:param local: If `true`, the request computes the list of selected nodes from
the local cluster state. If `false`, the list of selected nodes is computed
from the cluster state of the master node. In both cases the coordinating
@@ -3416,10 +3444,16 @@ async def segments(
__path_parts = {}
__path = "/_cat/segments"
__query: t.Dict[str, t.Any] = {}
if allow_closed is not None:
__query["allow_closed"] = allow_closed
if allow_no_indices is not None:
__query["allow_no_indices"] = allow_no_indices
if bytes is not None:
__query["bytes"] = bytes
if error_trace is not None:
__query["error_trace"] = error_trace
if expand_wildcards is not None:
__query["expand_wildcards"] = expand_wildcards
if filter_path is not None:
__query["filter_path"] = filter_path
if format is not None:
Expand All @@ -3430,6 +3464,10 @@ async def segments(
__query["help"] = help
if human is not None:
__query["human"] = human
if ignore_throttled is not None:
__query["ignore_throttled"] = ignore_throttled
if ignore_unavailable is not None:
__query["ignore_unavailable"] = ignore_unavailable
if local is not None:
__query["local"] = local
if master_timeout is not None:
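The new `cat.segments` flags above all follow the client's standard pattern: each optional keyword is copied into the query string only when it is not `None`, so the server sees exactly the flags the caller set. A small sketch of that pattern using the newly added parameters (a hypothetical helper for illustration, not the generated client code):

```python
def build_segments_query(
    allow_closed=None,
    allow_no_indices=None,
    expand_wildcards=None,
    ignore_throttled=None,
    ignore_unavailable=None,
):
    """Collect only explicitly-set flags, mirroring the diff's `is not None` checks."""
    query = {}
    for name, value in (
        ("allow_closed", allow_closed),
        ("allow_no_indices", allow_no_indices),
        ("expand_wildcards", expand_wildcards),
        ("ignore_throttled", ignore_throttled),
        ("ignore_unavailable", ignore_unavailable),
    ):
        if value is not None:
            query[name] = value
    return query
```

Because the test is `is not None`, an explicit `False` (e.g. `ignore_unavailable=False`) is still serialized, which matters for flags whose server-side default is `true`.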
16 changes: 8 additions & 8 deletions elasticsearch/_async/client/ccr.py
@@ -125,8 +125,8 @@ async def follow(
"""
.. raw:: html

<p>Create a follower.
Create a cross-cluster replication follower index that follows a specific leader index.
<p>Create a follower.</p>
<p>Create a cross-cluster replication follower index that follows a specific leader index.
When the API returns, the follower index exists and cross-cluster replication starts replicating operations from the leader index to the follower index.</p>


@@ -368,8 +368,8 @@ async def forget_follower(
"""
.. raw:: html

<p>Forget a follower.
Remove the cross-cluster replication follower retention leases from the leader.</p>
<p>Forget a follower.</p>
<p>Remove the cross-cluster replication follower retention leases from the leader.</p>
<p>A following index takes out retention leases on its leader index.
These leases are used to increase the likelihood that the shards of the leader index retain the history of operations that the shards of the following index need to run replication.
When a follower index is converted to a regular index by the unfollow API (either by directly calling the API or by index lifecycle management tasks), these leases are removed.
@@ -640,8 +640,8 @@ async def put_auto_follow_pattern(
"""
.. raw:: html

<p>Create or update auto-follow patterns.
Create a collection of cross-cluster replication auto-follow patterns for a remote cluster.
<p>Create or update auto-follow patterns.</p>
<p>Create a collection of cross-cluster replication auto-follow patterns for a remote cluster.
Newly created indices on the remote cluster that match any of the patterns are automatically configured as follower indices.
Indices on the remote cluster that were created before the auto-follow pattern was created will not be auto-followed even if they match the pattern.</p>
<p>This API can also be used to update auto-follow patterns.
@@ -853,8 +853,8 @@ async def resume_follow(
"""
.. raw:: html

<p>Resume a follower.
Resume a cross-cluster replication follower index that was paused.
<p>Resume a follower.</p>
<p>Resume a cross-cluster replication follower index that was paused.
The follower index could have been paused with the pause follower API.
Alternatively it could be paused due to replication that cannot be retried due to failures during following tasks.
When this API returns, the follower index will resume fetching operations from the leader index.</p>