
Commit d7ebe83

Updates refs to LLMs (#5806)
* updates refs to LLMs

* Update docs/AI-for-security/ai-security-assistant.asciidoc
  Co-authored-by: Joe Peeples <[email protected]>

* Update docs/serverless/AI-for-security/ai-assistant.mdx
  Co-authored-by: Joe Peeples <[email protected]>

Co-authored-by: Joe Peeples <[email protected]>
1 parent 2815edd commit d7ebe83

File tree: 4 files changed, +6 −7 lines

docs/AI-for-security/ai-security-assistant.asciidoc (1 addition, 1 deletion)

@@ -47,7 +47,7 @@ You must create a generative AI connector before you can use AI Assistant. AI As
 .Recommended models
 [sidebar]
 --
-While AI Assistant is compatible with many different models, our testing found increased quality with Azure 32k, and faster, more cost-effective responses with Claude 3 Haiku and OpenAI GPT4 Turbo. For more information, refer to the <<llm-performance-matrix>>.
+While AI Assistant is compatible with many different models, refer to the <<llm-performance-matrix>> to select models that perform well with your desired use cases.
 --
 
 [discrete]

docs/AI-for-security/connect-to-azure-openai.asciidoc (2 additions, 3 deletions)

@@ -68,9 +68,8 @@ Now, set up the Azure OpenAI model:
 
 . From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**.
 . On the **Deployments** page, select **Create new deployment**.
-. Under **Select a model**, choose `gpt-4` or `gpt-4-32k`.
-** If you select `gpt-4`, set the **Model version** to `0125-Preview`.
-** If you select `gpt-4-32k`, set the **Model version** to `default`.
+. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`.
+. Set the model version to "Auto-update to default".
 +
 IMPORTANT: The models available to you depend on https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models#model-summary-table-and-region-availability[region availability]. For best results, use `GPT-4o 2024-05-13` with the maximum Tokens-Per-Minute (TPM) capacity. For more information on how different models perform for different tasks, refer to the <<llm-performance-matrix>>.
 +

docs/serverless/AI-for-security/ai-assistant.mdx (1 addition, 1 deletion)

@@ -44,7 +44,7 @@ Elastic can automatically anonymize event data that you provide to AI Assistant
 You must create a generative AI connector before you can use AI Assistant. AI Assistant can connect to multiple large language model (LLM) providers so you can select the best model for your needs. To set up a connector, refer to <DocLink slug="/serverless/security/llm-connector-guides" text="LLM connector setup guides"/>.
 
 <DocCallOut title="Recommended models">
-While AI Assistant is compatible with many different models, our testing found increased quality with Azure 32k, and faster, more cost-effective responses with Claude 3 Haiku and OpenAI GPT4 Turbo. For more information, refer to the <DocLink slug="/serverless/security/llm-performance-matrix" text="LLM performance matrix"/>.
+While AI Assistant is compatible with many different models, refer to the <DocLink slug="/serverless/security/llm-performance-matrix" text="LLM performance matrix"/> to select models that perform well with your desired use cases.
 </DocCallOut>
 
 <div id="start-chatting"></div>

docs/serverless/AI-for-security/connect-to-azure-openai.mdx (2 additions, 2 deletions)

@@ -47,8 +47,8 @@ Now, set up the Azure OpenAI model:
 
 1. From within your Azure OpenAI deployment, select **Model deployments**, then click **Manage deployments**.
 2. On the **Deployments** page, select **Create new deployment**.
-3. Under **Select a model**, choose `gpt-4` or `gpt-4-32k`.
-4. Set the **Model version** to `0125-Preview` for `gpt-4` or `default` for `gpt-4-32k`.
+3. Under **Select a model**, choose `gpt-4o` or `gpt-4 turbo`.
+4. Set the model version to "Auto-update to default".
 5. Under **Deployment type**, select **Standard**.
 6. Name your deployment.
 7. Slide the **Tokens per Minute Rate Limit** to the maximum. The following example supports 80,000 TPM, but other regions might support higher limits.
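For context on the deployment steps being edited above: the deployment name chosen in these steps is what client code later addresses, since Azure OpenAI routes requests to a named deployment rather than directly to a model. The following is a minimal sketch of how that name maps onto the Azure OpenAI chat-completions REST URL; the resource name, deployment name, and API version here are illustrative placeholders, not values from this commit.

```python
# Sketch: building the Azure OpenAI chat-completions URL for a named
# deployment. "my-resource" and "gpt-4o" are hypothetical placeholders.
from urllib.parse import urlencode

def azure_chat_completions_url(resource: str, deployment: str,
                               api_version: str = "2024-02-01") -> str:
    """Return the REST URL that targets a specific Azure OpenAI deployment."""
    base = f"https://{resource}.openai.azure.com"
    path = f"/openai/deployments/{deployment}/chat/completions"
    return f"{base}{path}?{urlencode({'api-version': api_version})}"

print(azure_chat_completions_url("my-resource", "gpt-4o"))
# → https://my-resource.openai.azure.com/openai/deployments/gpt-4o/chat/completions?api-version=2024-02-01
```

Note that the path segment is the deployment name you assigned, not the underlying model identifier, which is why renaming or re-pointing a deployment (for example, to move from `gpt-4` to `gpt-4o`) need not change client code.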
