Commit

fixed text corrections as suggested
pkosanam committed Sep 20, 2023
1 parent b0dc370 commit 4933202
Showing 6 changed files with 43 additions and 38 deletions.
17 changes: 9 additions & 8 deletions quickstarts/anyscale/config.yml
@@ -1,9 +1,9 @@
 slug: anyscale
 description: |
   ## Why should you monitor your usage of Anyscale?
-  Monitor your application powered by Anyscale language models to ensure, get visibility to what you send to Anyscale, responses received from Anyscale , latency, usage and errors. By monitoring the usage, you can infer the cost.
-  ### Track the LLM's performance:
-  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
+  Monitor your application powered by Anyscale language models to ensure, get visibility to what you send to Anyscale, responses received from Anyscale, latency, usage and errors. By monitoring the usage, you can infer the cost.
+  ### Track the LLM's performance:
+  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
   ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement.
   ### Early issue detection:
@@ -12,29 +12,30 @@ description: |
   ## Comprehensive Anyscale monitoring quickstart
   Our Anyscale quickstart provides metrics including error rate, input & output, latency, queries, and lets you integrate with different language models.
-  ## What’s included in the Anyscale quickstart?
-  New Relic Anyscale monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Anyscale usage. These reports include:
+  ## What’s included in the Anyscale quickstart?
+  New Relic Anyscale monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Anyscale usage. These reports include:
   - Dashboards (average tokens, LLM completion’s details, chain’s details, tool details, top tool names and many more)
   - Alerts (errors, request per model and response time)
 summary: |
-  Improve the visibility of your Anyscale usage with New Relic Anyscale quickstart.
+  Improve the visibility of your Anyscale usage with New Relic Anyscale quickstart.
 icon: logo.svg
 level:
 - New Relic
 authors:
 - New Relic
 title: Anyscale
 documentation:
-  - name: Anyscale integration documentation
+  - name: Anyscale integration documentation
     description: |
-      Implement monitoring and instrumentation for your Anyscale app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+      Implement monitoring and instrumentation for your Anyscale app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
     url: https://github.com/newrelic/nr-openai-observability
 dataSourceIds:
 - langchain
 keywords:
 - anyscale
 - large language model
 - natural language processing
+- nlp
 - machine learning
 - artificial intelligence
 - ai
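The LLM quickstarts in this commit (Anyscale, Jina AI, Ollama, OpenLLM) all chart the same per-request signals: input & output, latency, token usage, and errors. As a rough sketch of the kind of event record those dashboards are built from (illustrative only; `record_llm_call` and the stand-in `llm` callable below are hypothetical, not the nr-openai-observability API):

```python
import time

def record_llm_call(llm, prompt):
    """Invoke an LLM callable and capture the fields the quickstart
    dashboards chart: input, output, latency, token usage, and errors.
    Hypothetical helper for illustration; not the real integration."""
    event = {"input": prompt, "error": None}
    start = time.monotonic()
    try:
        output = llm(prompt)
        event["output"] = output
        # Crude whitespace-token proxy; real integrations report
        # provider-supplied token counts, from which cost is inferred.
        event["tokens"] = len(prompt.split()) + len(output.split())
    except Exception as exc:
        event["output"] = None
        event["tokens"] = 0
        event["error"] = repr(exc)
    event["latency_ms"] = (time.monotonic() - start) * 1000.0
    return event

if __name__ == "__main__":
    fake_llm = lambda p: "San Francisco is in California."
    print(record_llm_call(fake_llm, "Where is San Francisco?"))
```

In the actual integration the langchain data source emits these events automatically; the sketch only shows why a single record is enough to drive the error-rate, latency, and average-token widgets at once.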
17 changes: 9 additions & 8 deletions quickstarts/jina-ai/config.yml
@@ -1,9 +1,9 @@
 slug: jina-ai
 description: |
   ## Why should you monitor your usage of Jina AI?
-  Monitor your application powered by Jina AI language models to ensure, get visibility to what you send to Jina AI, responses received from Jina AI , latency, usage and errors. By monitoring the usage, you can infer the cost.
-  ### Track the LLM's performance:
-  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
+  Monitor your application powered by Jina AI language models to ensure, get visibility to what you send to Jina AI, responses received from Jina AI, latency, usage and errors. By monitoring the usage, you can infer the cost.
+  ### Track the LLM's performance:
+  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
   ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement.
   ### Early issue detection:
@@ -12,29 +12,30 @@ description: |
   ## Comprehensive Jina AI monitoring quickstart
   Our Jina AI quickstart provides metrics including error rate, input & output, latency, queries, and lets you integrate with different language models.
-  ## What’s included in the Jina AI quickstart?
-  New Relic Jina AI monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Jina AI usage. These reports include:
+  ## What’s included in the Jina AI quickstart?
+  New Relic Jina AI monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Jina AI usage. These reports include:
   - Dashboards (average tokens, LLM completion’s details, chain’s details, tool details, top tool names and many more)
   - Alerts (errors, request per model and response time)
 summary: |
-  Improve the visibility of your Jina AI usage with New Relic Jina AI quickstart.
+  Improve the visibility of your Jina AI usage with New Relic Jina AI quickstart.
 icon: logo.svg
 level:
 - New Relic
 authors:
 - New Relic
 title: Jina AI
 documentation:
-  - name: Jina AI integration documentation
+  - name: Jina AI integration documentation
     description: |
-      Implement monitoring and instrumentation for your Jina AI app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+      Implement monitoring and instrumentation for your Jina AI app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
     url: https://github.com/newrelic/nr-openai-observability
 dataSourceIds:
 - langchain
 keywords:
 - jina-ai
 - large language model
 - natural language processing
+- nlp
 - machine learning
 - artificial intelligence
 - ai
3 changes: 2 additions & 1 deletion quickstarts/myscale/config.yml
@@ -7,7 +7,7 @@ description: |
   ### Track the query performance of your Vector DB
   Track the behavior of your vector stores. Monitor the latency, queries, the number of documents retrieved, and the content of the documents so that you can evaluate their relevance.
-  ### Track your app:
+  ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your app's performance and identify areas of improvement.
@@ -47,6 +47,7 @@ keywords:
 - ai
 - chatbots
 - natural language processing
+- nlp
 - machine learning
 - deep learning
 - aiops
17 changes: 9 additions & 8 deletions quickstarts/ollama/config.yml
@@ -1,9 +1,9 @@
 slug: ollama
 description: |
   ## Why should you monitor your usage of Ollama?
-  Monitor your application powered by Ollama language models to ensure, get visibility to what you send to Ollama, responses received from Ollama , latency, usage and errors. By monitoring the usage, you can infer the cost.
-  ### Track the LLM's performance:
-  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
+  Monitor your application powered by Ollama language models to ensure, get visibility to what you send to Ollama, responses received from Ollama, latency, usage and errors. By monitoring the usage, you can infer the cost.
+  ### Track the LLM's performance:
+  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
   ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement.
   ### Early issue detection:
@@ -12,29 +12,30 @@ description: |
   ## Comprehensive Ollama monitoring quickstart
   Our Ollama quickstart provides metrics including error rate, input & output, latency, queries, and lets you integrate with different language models.
-  ## What’s included in the Ollama quickstart?
-  New Relic Ollama monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Ollama usage. These reports include:
+  ## What’s included in the Ollama quickstart?
+  New Relic Ollama monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your Ollama usage. These reports include:
   - Dashboards (average tokens, LLM completion’s details, chain’s details, tool details, top tool names and many more)
   - Alerts (errors, request per model and response time)
 summary: |
-  Improve the visibility of your Ollama usage with New Relic Ollama quickstart.
+  Improve the visibility of your Ollama usage with New Relic Ollama quickstart.
 icon: logo.png
 level:
 - New Relic
 authors:
 - New Relic
 title: Ollama
 documentation:
-  - name: Ollama integration documentation
+  - name: Ollama integration documentation
     description: |
-      Implement monitoring and instrumentation for your Ollama app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+      Implement monitoring and instrumentation for your Ollama app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
     url: https://github.com/newrelic/nr-openai-observability
 dataSourceIds:
 - langchain
 keywords:
 - ollama
 - large language model
 - natural language processing
+- nlp
 - machine learning
 - artificial intelligence
 - ai
17 changes: 9 additions & 8 deletions quickstarts/openllm/config.yml
@@ -1,9 +1,9 @@
 slug: openllm
 description: |
   ## Why should you monitor your usage of OpenLLM?
-  Monitor your application powered by OpenLLM language models to ensure, get visibility to what you send to OpenLLM, responses received from OpenLLM , latency, usage and errors. By monitoring the usage, you can infer the cost.
-  ### Track the LLM's performance:
-  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
+  Monitor your application powered by OpenLLM language models to ensure, get visibility to what you send to OpenLLM, responses received from OpenLLM, latency, usage and errors. By monitoring the usage, you can infer the cost.
+  ### Track the LLM's performance:
+  Monitor the input & output, latency and errors of your LLM provider. Track performance changes with the providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance.
   ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement.
   ### Early issue detection:
@@ -12,29 +12,30 @@ description: |
   ## Comprehensive OpenLLM monitoring quickstart
   Our OpenLLM quickstart provides metrics including error rate, input & output, latency, queries, and lets you integrate with different language models.
-  ## What’s included in the OpenLLM quickstart?
-  New Relic OpenLLM monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your OpenLLM usage. These reports include:
+  ## What’s included in the OpenLLM quickstart?
+  New Relic OpenLLM monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your OpenLLM usage. These reports include:
   - Dashboards (average tokens, LLM completion’s details, chain’s details, tool details, top tool names and many more)
   - Alerts (errors, request per model and response time)
 summary: |
-  Improve the visibility of your OpenLLM usage with New Relic OpenLLM quickstart.
+  Improve the visibility of your OpenLLM usage with New Relic OpenLLM quickstart.
 icon: logo.jpeg
 level:
 - New Relic
 authors:
 - New Relic
 title: OpenLLM
 documentation:
-  - name: OpenLLM integration documentation
+  - name: OpenLLM integration documentation
     description: |
-      Implement monitoring and instrumentation for your OpenLLM app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+      Implement monitoring and instrumentation for your OpenLLM app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
     url: https://github.com/newrelic/nr-openai-observability
 dataSourceIds:
 - langchain
 keywords:
 - openllm
 - large language model
 - natural language processing
+- nlp
 - machine learning
 - artificial intelligence
 - ai
10 changes: 5 additions & 5 deletions quickstarts/starrocks/config.yml
@@ -7,20 +7,20 @@ description: |
   ### Track the query performance of your Vector DB
   Track the behavior of your vector stores. Monitor the latency, queries, the number of documents retrieved, and the content of the documents so that you can evaluate their relevance.
-  ### Track your app:
+  ### Track your app:
   By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your app's performance and identify areas of improvement.
   ### What’s included in this quickstart?
-  New Relic LangChain monitoring quickstart provides a variety of pre-built dashboards, which will help you gain insights into the health and performance of your LangChain app. These reports include:
+  StarRocks uses the LangChain Vector Database to connect to New Relic. This quickstart provides a variety of a pre-built dashboard, which will help you gain insights into the health and performance of your AI app. These reports include:
   - Vector searches
   - Alerts for errors, search per vector store, and response time
   - Identify popular queries, sources, and content
 summary: |
-  Monitor your Vector search's performance and quality with New Relic StarRocks quickstart.
+  Monitor the performance and quality of your vector searches with New Relic SingleStoreDB quickstart.
 icon: logo.png
 level: New Relic
 authors:
@@ -29,7 +29,7 @@ title: StarRocks
 documentation:
   - name: LangChain Vector Database integration documentation
     description: |
-      Implement monitoring and instrumentation for your Vector store, and ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+      Implement monitoring and instrumentation for your vector data store, and ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
     url: https://github.com/newrelic/nr-openai-observability
 dataSourceIds:
 - langchain-vectordb
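The two vector-store quickstarts in this commit (MyScale and StarRocks) chart search-level signals: the query, its latency, the number of documents retrieved, and the retrieved content for relevance review. A minimal sketch of such an event record (illustrative only; `record_vector_search` and the toy word-overlap store below are hypothetical, not the langchain-vectordb data source):

```python
import time

def record_vector_search(store, query, k=3):
    """Run a search against a vector-store-like object and capture the
    fields the quickstart dashboards use: query, latency, document count,
    and document content. Hypothetical helper for illustration."""
    start = time.monotonic()
    docs = store.search(query, k=k)
    return {
        "query": query,
        "latency_ms": (time.monotonic() - start) * 1000.0,
        "document_count": len(docs),
        "documents": docs,
    }

class ToyStore:
    """Stand-in for a real vector store: ranks documents by how many
    words they share with the query instead of by embedding distance."""
    def __init__(self, docs):
        self.docs = docs

    def search(self, query, k=3):
        qwords = set(query.lower().split())
        ranked = sorted(self.docs,
                        key=lambda d: -len(qwords & set(d.lower().split())))
        return ranked[:k]

if __name__ == "__main__":
    store = ToyStore(["apples are red", "the sky is blue", "bananas are yellow"])
    event = record_vector_search(store, "what color is the sky", k=1)
    print(event["documents"])
```

Logging the retrieved documents alongside latency is what lets the dashboards surface popular queries and sources and lets you judge relevance, rather than only timing the search.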
