diff --git a/quickstarts/anyscale/config.yml b/quickstarts/anyscale/config.yml new file mode 100644 index 0000000000..967fca1475 --- /dev/null +++ b/quickstarts/anyscale/config.yml @@ -0,0 +1,55 @@ +slug: anyscale +description: | + ## Why should you monitor your usage of Anyscale? + Monitor your application powered by Anyscale language models to get visibility into what you send to Anyscale, the responses received from Anyscale, latency, usage, and errors. By monitoring the usage, you can infer the cost. + ### Track the LLM's performance: + Monitor the input & output, latency, and errors of your LLM provider. Track performance changes across providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance. + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement. + ### Early issue detection: + Detect and address issues early to prevent them from affecting model performance. + + ## Comprehensive Anyscale monitoring quickstart + Our Anyscale quickstart provides metrics including error rate, input & output, latency, and queries, and lets you integrate with different language models. + + ## What’s included in the Anyscale quickstart? + The New Relic Anyscale monitoring quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your Anyscale usage. These reports include: + - Dashboards (average tokens, LLM completion details, chain details, tool details, top tool names, and more) + - Alerts (errors, requests per model, and response time) +summary: | + Improve the visibility of your Anyscale usage with the New Relic Anyscale quickstart.
+icon: logo.svg +level: + - New Relic +authors: + - New Relic +title: Anyscale +documentation: + - name: Anyscale integration documentation + description: | + Implement monitoring and instrumentation for your Anyscale app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights. + url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain +keywords: + - anyscale + - large language model + - natural language processing + - nlp + - machine learning + - artificial intelligence + - ai + - anyscale ai + - cloud native + - llm + - llm chains + - deep learning + - distributed computing + - kubernetes + - generative ai + - mlops + - NR1_addData +alertPolicies: + - langchain +dashboards: + - langchain \ No newline at end of file diff --git a/quickstarts/anyscale/logo.svg b/quickstarts/anyscale/logo.svg new file mode 100644 index 0000000000..2dc2553cfe --- /dev/null +++ b/quickstarts/anyscale/logo.svg @@ -0,0 +1 @@ + \ No newline at end of file diff --git a/quickstarts/jina-ai/config.yml b/quickstarts/jina-ai/config.yml new file mode 100644 index 0000000000..4edfe1b3ae --- /dev/null +++ b/quickstarts/jina-ai/config.yml @@ -0,0 +1,55 @@ +slug: jina-ai +description: | + ## Why should you monitor your usage of Jina AI? + Monitor your application powered by Jina AI language models to get visibility into what you send to Jina AI, the responses received from Jina AI, latency, usage, and errors. By monitoring the usage, you can infer the cost. + ### Track the LLM's performance: + Monitor the input & output, latency, and errors of your LLM provider. Track performance changes across providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance. + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement.
+ ### Early issue detection: + Detect and address issues early to prevent them from affecting model performance. + + ## Comprehensive Jina AI monitoring quickstart + Our Jina AI quickstart provides metrics including error rate, input & output, latency, and queries, and lets you integrate with different language models. + + ## What’s included in the Jina AI quickstart? + The New Relic Jina AI monitoring quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your Jina AI usage. These reports include: + - Dashboards (average tokens, LLM completion details, chain details, tool details, top tool names, and more) + - Alerts (errors, requests per model, and response time) +summary: | + Improve the visibility of your Jina AI usage with the New Relic Jina AI quickstart. +icon: logo.svg +level: + - New Relic +authors: + - New Relic +title: Jina AI +documentation: + - name: Jina AI integration documentation + description: | + Implement monitoring and instrumentation for your Jina AI app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+ url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain +keywords: + - jina-ai + - large language model + - natural language processing + - nlp + - machine learning + - artificial intelligence + - ai + - jina ai + - cloud native + - llm + - llm chains + - generative ai + - jinaai + - multimodal ai + - neural search + - mlops + - NR1_addData +alertPolicies: + - langchain +dashboards: + - langchain \ No newline at end of file diff --git a/quickstarts/jina-ai/logo.svg b/quickstarts/jina-ai/logo.svg new file mode 100644 index 0000000000..79c4690d64 --- /dev/null +++ b/quickstarts/jina-ai/logo.svg @@ -0,0 +1,13 @@ + + + Company logo_dark + + + + + + + + + + \ No newline at end of file diff --git a/quickstarts/myscale/config.yml b/quickstarts/myscale/config.yml new file mode 100644 index 0000000000..9e240e1beb --- /dev/null +++ b/quickstarts/myscale/config.yml @@ -0,0 +1,62 @@ +slug: myscale +description: | + ## Why should you monitor your usage of MyScale? + + Monitor your vector searches on MyScale to get visibility into what you send to MyScale, the responses retrieved from MyScale, latency, usage, and errors. + + ### Track the query performance of your Vector DB + Track the behavior of your vector stores. Monitor the latency, queries, the number of documents retrieved, and the content of the documents so that you can evaluate their relevance. + + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your app's performance and identify areas of improvement. + + + ### What’s included in this quickstart? + + MyScale uses the LangChain Vector Database to connect to New Relic. This quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your AI app.
These reports include: + + - Vector searches + - Alerts for errors, searches per vector store, and response time + - Identify popular queries, sources, and content + +summary: | + Monitor the performance and quality of your vector searches with the New Relic MyScale quickstart. +icon: logo.jpeg +level: New Relic +authors: + - New Relic +title: MyScale +documentation: + - name: LangChain Vector Database integration documentation + description: | + Implement monitoring and instrumentation for your vector data store, and ensure that your observability data is integrated into New Relic for effective performance analysis and insights. + url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain-vectordb +keywords: + - langchain + - vector store + - myscale + - vector search + - vectordb + - vector db + - my scale + - myscale ai + - my scale ai + - conversational ai + - ai + - chatbots + - natural language processing + - nlp + - machine learning + - deep learning + - aiops + - devops + - cloud computing + - intelligent automation + - mlops + - NR1_addData +dashboards: + - langchain-vectordb +alertPolicies: + - langchain-vectordb \ No newline at end of file diff --git a/quickstarts/myscale/logo.jpeg b/quickstarts/myscale/logo.jpeg new file mode 100644 index 0000000000..ef6e2871be Binary files /dev/null and b/quickstarts/myscale/logo.jpeg differ diff --git a/quickstarts/ollama/config.yml b/quickstarts/ollama/config.yml new file mode 100644 index 0000000000..6af11a881d --- /dev/null +++ b/quickstarts/ollama/config.yml @@ -0,0 +1,52 @@ +slug: ollama +description: | + ## Why should you monitor your usage of Ollama? + Monitor your application powered by Ollama language models to get visibility into what you send to Ollama, the responses received from Ollama, latency, usage, and errors. By monitoring the usage, you can infer the cost. + ### Track the LLM's performance: + Monitor the input & output, latency, and errors of your LLM provider.
Track performance changes across providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance. + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement. + ### Early issue detection: + Detect and address issues early to prevent them from affecting model performance. + + ## Comprehensive Ollama monitoring quickstart + Our Ollama quickstart provides metrics including error rate, input & output, latency, and queries, and lets you integrate with different language models. + + ## What’s included in the Ollama quickstart? + The New Relic Ollama monitoring quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your Ollama usage. These reports include: + - Dashboards (average tokens, LLM completion details, chain details, tool details, top tool names, and more) + - Alerts (errors, requests per model, and response time) +summary: | + Improve the visibility of your Ollama usage with the New Relic Ollama quickstart. +icon: logo.png +level: + - New Relic +authors: + - New Relic +title: Ollama +documentation: + - name: Ollama integration documentation + description: | + Implement monitoring and instrumentation for your Ollama app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights.
+ url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain +keywords: + - ollama + - large language model + - natural language processing + - nlp + - machine learning + - artificial intelligence + - ai + - ollama ai + - cloud native + - llm + - llm chains + - generative ai + - mlops + - NR1_addData +alertPolicies: + - langchain +dashboards: + - langchain \ No newline at end of file diff --git a/quickstarts/ollama/logo.png b/quickstarts/ollama/logo.png new file mode 100644 index 0000000000..8cd2cf1ed8 Binary files /dev/null and b/quickstarts/ollama/logo.png differ diff --git a/quickstarts/openllm/config.yml b/quickstarts/openllm/config.yml new file mode 100644 index 0000000000..8ed94ee61d --- /dev/null +++ b/quickstarts/openllm/config.yml @@ -0,0 +1,55 @@ +slug: openllm +description: | + ## Why should you monitor your usage of OpenLLM? + Monitor your application powered by OpenLLM language models to get visibility into what you send to OpenLLM, the responses received from OpenLLM, latency, usage, and errors. By monitoring the usage, you can infer the cost. + ### Track the LLM's performance: + Monitor the input & output, latency, and errors of your LLM provider. Track performance changes across providers and versions of your LLM. Monitor usage to understand the cost, rate limits, and general performance. + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your LangChain app's performance and identify areas of improvement. + ### Early issue detection: + Detect and address issues early to prevent them from affecting model performance. + + ## Comprehensive OpenLLM monitoring quickstart + Our OpenLLM quickstart provides metrics including error rate, input & output, latency, and queries, and lets you integrate with different language models. + + ## What’s included in the OpenLLM quickstart?
+ The New Relic OpenLLM monitoring quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your OpenLLM usage. These reports include: + - Dashboards (average tokens, LLM completion details, chain details, tool details, top tool names, and more) + - Alerts (errors, requests per model, and response time) +summary: | + Improve the visibility of your OpenLLM usage with the New Relic OpenLLM quickstart. +icon: logo.jpeg +level: + - New Relic +authors: + - New Relic +title: OpenLLM +documentation: + - name: OpenLLM integration documentation + description: | + Implement monitoring and instrumentation for your OpenLLM app to ensure that your observability data is integrated into New Relic for effective performance analysis and insights. + url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain +keywords: + - openllm + - large language model + - natural language processing + - nlp + - machine learning + - artificial intelligence + - ai + - openllm ai + - cloud native + - llm + - llm chains + - deep learning + - distributed computing + - kubernetes + - generative ai + - mlops + - NR1_addData +alertPolicies: + - langchain +dashboards: + - langchain \ No newline at end of file diff --git a/quickstarts/openllm/logo.jpeg b/quickstarts/openllm/logo.jpeg new file mode 100644 index 0000000000..5518e1a6f3 Binary files /dev/null and b/quickstarts/openllm/logo.jpeg differ diff --git a/quickstarts/starrocks/config.yml b/quickstarts/starrocks/config.yml new file mode 100644 index 0000000000..06be1c11eb --- /dev/null +++ b/quickstarts/starrocks/config.yml @@ -0,0 +1,53 @@ +slug: starrocks +description: | + ## Why should you monitor your usage of StarRocks? + + Monitor your vector searches on StarRocks to get visibility into what you send to StarRocks, the responses retrieved from StarRocks, latency, usage, and errors.
+ + ### Track the query performance of your Vector DB + Track the behavior of your vector stores. Monitor the latency, queries, the number of documents retrieved, and the content of the documents so that you can evaluate their relevance. + + ### Track your app: + By tracking key metrics like latency, throughput, error rates, and input & output, you can gain insights into your app's performance and identify areas of improvement. + + + ### What’s included in this quickstart? + + StarRocks uses the LangChain Vector Database to connect to New Relic. This quickstart provides a variety of pre-built dashboards that will help you gain insights into the health and performance of your AI app. These reports include: + + - Vector searches + - Alerts for errors, searches per vector store, and response time + - Identify popular queries, sources, and content + +summary: | + Monitor the performance and quality of your vector searches with the New Relic StarRocks quickstart. +icon: logo.png +level: New Relic +authors: + - New Relic +title: StarRocks +documentation: + - name: LangChain Vector Database integration documentation + description: | + Implement monitoring and instrumentation for your vector data store, and ensure that your observability data is integrated into New Relic for effective performance analysis and insights. + url: https://github.com/newrelic/nr-openai-observability +dataSourceIds: + - langchain-vectordb +keywords: + - langchain + - vector store + - starrocks + - vector search + - vectordb + - vector db + - starrocks ai + - ai + - open source analytical database + - analytical database + - artificial intelligence + - mlops + - NR1_addData +dashboards: + - langchain-vectordb +alertPolicies: + - langchain-vectordb \ No newline at end of file diff --git a/quickstarts/starrocks/logo.png b/quickstarts/starrocks/logo.png new file mode 100644 index 0000000000..75c01a99a9 Binary files /dev/null and b/quickstarts/starrocks/logo.png differ
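The dashboards these quickstarts describe all chart the same core aggregates: average tokens, latency, error rate, and requests per model. As an illustrative sketch only, the snippet below shows how such aggregates could be computed from captured LLM calls; the `LlmCall` record and `summarize` helper are hypothetical and do not reflect the event schema or API of nr-openai-observability.

```python
from dataclasses import dataclass
from statistics import mean


@dataclass
class LlmCall:
    """One recorded LLM request. Field names are illustrative,
    not the schema used by the actual integration."""
    model: str
    latency_ms: float
    total_tokens: int
    error: bool


def summarize(calls):
    """Group calls by model and compute the kind of metrics the
    quickstart dashboards chart: requests per model, average tokens,
    average latency, and error rate."""
    by_model = {}
    for call in calls:
        by_model.setdefault(call.model, []).append(call)
    return {
        model: {
            "requests": len(group),
            "avg_tokens": mean(c.total_tokens for c in group),
            "avg_latency_ms": mean(c.latency_ms for c in group),
            "error_rate": sum(c.error for c in group) / len(group),
        }
        for model, group in by_model.items()
    }


calls = [
    LlmCall("llama2", 420.0, 128, False),
    LlmCall("llama2", 380.0, 96, True),
    LlmCall("mistral", 510.0, 200, False),
]
print(summarize(calls))
```

In the real quickstarts these aggregations happen in New Relic via NRQL over the ingested LangChain events; this sketch only mirrors the shape of the resulting dashboard numbers.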