diff --git a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc b/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc
index bcff83c5e9950..d75ce6f003979 100644
--- a/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc
+++ b/docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc
@@ -4,8 +4,6 @@
 <titleabbrev>Flatten graph</titleabbrev>
 ++++
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `flatten_graph` token filter accepts an arbitrary graph token stream, such
 as that produced by <<analysis-synonym-graph-tokenfilter>>, and flattens it
 into a single
diff --git a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc
index 5da001640a027..67c0cefc98957 100644
--- a/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc
+++ b/docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc
@@ -1,8 +1,6 @@
 [[analysis-simplepattern-tokenizer]]
 === Simple Pattern Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern` tokenizer uses a regular expression to capture matching
 text as terms. The set of regular expression features it supports is more
 limited than the <<analysis-pattern-tokenizer,`pattern`>> tokenizer, but the
diff --git a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc b/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc
index 55be14c45638a..3f24233334e57 100644
--- a/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc
+++ b/docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc
@@ -1,8 +1,6 @@
 [[analysis-simplepatternsplit-tokenizer]]
 === Simple Pattern Split Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern_split` tokenizer uses a regular expression to split the
 input into terms at pattern matches. The set of regular expression features it
 supports is more limited than the <<analysis-pattern-tokenizer,`pattern`>>
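
For reviewers, a usage sketch of the `flatten_graph` filter touched by the first hunk: it is chained after a graph-producing filter (here `synonym_graph`) in an index-time analyzer. The index name, analyzer name, filter name, and synonym list below are illustrative placeholders, not taken from the patch.

[source,console]
--------------------------------------------------
PUT /my-index
{
  "settings": {
    "analysis": {
      "filter": {
        "my_synonyms": {
          "type": "synonym_graph",
          "synonyms": [ "dns, domain name system" ]
        }
      },
      "analyzer": {
        "my_index_analyzer": {
          "tokenizer": "standard",
          "filter": [ "lowercase", "my_synonyms", "flatten_graph" ]
        }
      }
    }
  }
}
--------------------------------------------------

Without `flatten_graph`, the multi-position tokens emitted by `synonym_graph` are not safe to index; flattening trades some positional fidelity for an indexable token stream.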
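
Likewise for the `simple_pattern` tokenizer in the second hunk, a minimal `_analyze` sketch; the three-digit pattern and sample text are assumptions chosen to show that only the matched text becomes terms:

[source,console]
--------------------------------------------------
POST _analyze
{
  "tokenizer": {
    "type": "simple_pattern",
    "pattern": "[0123456789]{3}"
  },
  "text": "fd-786-335-514-x"
}
--------------------------------------------------

This request should produce the terms `[ 786, 335, 514 ]`; the unmatched characters are discarded.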
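
And for `simple_pattern_split` in the third hunk, where the pattern marks split points rather than the terms themselves (pattern and text again illustrative):

[source,console]
--------------------------------------------------
POST _analyze
{
  "tokenizer": {
    "type": "simple_pattern_split",
    "pattern": "_"
  },
  "text": "an_underscored_phrase"
}
--------------------------------------------------

Here the output should be `[ an, underscored, phrase ]`; the matched delimiters are dropped.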