Merged

29 commits
11c8dfd
First draft of a content and info restructure
clayton-cornell Jan 9, 2026
ec80268
Split topics up and reworked the weights
clayton-cornell Jan 9, 2026
ff4b689
Remove duplicate information
clayton-cornell Jan 9, 2026
f850769
Restore Introduction to the top of the TOC
clayton-cornell Jan 9, 2026
f7e993c
Add xref linking to Get started topics
clayton-cornell Jan 9, 2026
5abf768
Convert from bullets to more active descriptions
clayton-cornell Jan 9, 2026
fd3d399
Rename topic
clayton-cornell Jan 9, 2026
a8ee90c
Simplify and remove redundancy
clayton-cornell Jan 9, 2026
34aeb56
Move release and resource info to better locations
clayton-cornell Jan 9, 2026
538f7b8
Adjust topic order
clayton-cornell Jan 9, 2026
4b9de52
More adjusting topic order
clayton-cornell Jan 9, 2026
5ead8b6
Refactor info to simplify and improve flow
clayton-cornell Jan 22, 2026
01ff3bd
Link to video workshop and align better
clayton-cornell Jan 22, 2026
9d0199d
Fix variable syntax in headings
clayton-cornell Jan 22, 2026
5b84b51
Fix heading var syntax
clayton-cornell Jan 22, 2026
c6f4d8f
Fix up problems found in the code review
clayton-cornell Jan 26, 2026
53bb6ba
Update docs/sources/set-up/supported-platforms.md
clayton-cornell Jan 26, 2026
ad3ca52
Add new topic about requirements
clayton-cornell Jan 28, 2026
9891bd7
Tweak heading text
clayton-cornell Jan 28, 2026
7cb454f
Fix URL target
clayton-cornell Jan 28, 2026
034fca6
Add design expectations
clayton-cornell Jan 28, 2026
61e6f50
Cleanup, removed redundant sections, fixed flow
clayton-cornell Jan 28, 2026
43bae51
Simplified the supported platforms info
clayton-cornell Jan 28, 2026
d4ca8ec
Clarified the outbound connectivity
clayton-cornell Jan 29, 2026
4081f97
Simplify and cleanup content
clayton-cornell Jan 29, 2026
5ce7f03
Simplify some statements
clayton-cornell Jan 29, 2026
954357d
Merge branch 'main' into docs/add-alloy-for-beginners-videos
clayton-cornell Feb 10, 2026
bef9556
Merge branch 'main' into docs/add-alloy-for-beginners-videos
clayton-cornell Feb 11, 2026
6e4ebf3
Merge branch 'main' into docs/add-alloy-for-beginners-videos
clayton-cornell Feb 17, 2026
82 changes: 31 additions & 51 deletions docs/sources/introduction/_index.md
@@ -1,72 +1,52 @@
---
canonical: https://grafana.com/docs/alloy/latest/introduction/
description: Grafana Alloy is a flexible, high performance, vendor-neutral distribution of the OTel Collector
description: Grafana Alloy simplifies telemetry collection by combining metrics, logs, traces, and profiles into one powerful, vendor-neutral collector
menuTitle: Introduction
title: Introduction to Grafana Alloy
weight: 10
---

# Introduction to {{% param "FULL_PRODUCT_NAME" %}}

{{< param "PRODUCT_NAME" >}} is a flexible, high performance, vendor-neutral distribution of the [OpenTelemetry][] Collector.
It's fully compatible with the most popular open source observability standards such as OpenTelemetry and Prometheus.
{{< param "FULL_PRODUCT_NAME" >}} is an open source telemetry collector that simplifies how you gather and send observability data.
It's an [OpenTelemetry Collector distribution][OpenTelemetry] with built-in Prometheus pipelines and native support for Loki, Pyroscope, and other observability backends.

{{< param "PRODUCT_NAME" >}} focuses on ease-of-use and the ability to adapt to the needs of power users.
{{< param "PRODUCT_NAME" >}} collects metrics, logs, traces, and profiles in one unified solution.
Instead of running separate collectors for each signal type, you configure a single tool that handles all your telemetry needs.
This approach reduces operational complexity while giving you the flexibility to send data to any compatible backend, whether that's Grafana Cloud, a self-managed Grafana stack, or other observability platforms.

{{< docs/learning-journeys title="Send logs to Grafana Cloud using Alloy" url="/docs/learning-journeys/send-logs-alloy-loki/" >}}

## Key features

Some of the key features of {{< param "PRODUCT_NAME" >}} include:

* **Custom components:** You can use {{< param "PRODUCT_NAME" >}} to create and share custom components.
Custom components combine a pipeline of existing components into a single, easy-to-understand component that's just a few lines long.
You can use pre-built custom components from the community, ones packaged by Grafana, or create your own.
* **Reusable components:** You can use the output of a component as the input for multiple other components.
* **Chained components:** You can chain components together to form a pipeline.
* **Single task per component:** The scope of each component is limited to one specific task.
* **GitOps compatibility:** {{< param "PRODUCT_NAME" >}} uses frameworks to pull configurations from Git, S3, HTTP endpoints, and just about any other source.
* **Clustering support:** {{< param "PRODUCT_NAME" >}} has native clustering support.
Clustering helps distribute the workload and ensures you have high availability.
You can quickly create horizontally scalable deployments with minimal resource and operational overhead.
* **Security:** {{< param "PRODUCT_NAME" >}} helps you manage authentication credentials and connect to HashiCorp Vaults or Kubernetes clusters to retrieve secrets.
* **Debugging utilities:** {{< param "PRODUCT_NAME" >}} provides troubleshooting support and an embedded [user interface][UI] to help you identify and resolve configuration problems.
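The chaining of single-purpose components described above can be sketched as a small pipeline. This is a minimal illustration, not a production configuration; the target address and remote write URL are placeholders:

```alloy
// Scrape metrics from an application endpoint.
prometheus.scrape "app" {
  targets    = [{"__address__" = "localhost:8080"}]
  forward_to = [prometheus.relabel.filter.receiver]
}

// Drop noisy debug metrics before they reach storage.
prometheus.relabel "filter" {
  forward_to = [prometheus.remote_write.default.receiver]

  rule {
    source_labels = ["__name__"]
    regex         = "debug_.*"
    action        = "drop"
  }
}

// Write the remaining metrics to a Prometheus-compatible backend.
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```

Each component does one job, and the `forward_to` arguments chain them into a pipeline.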

## How does {{% param "PRODUCT_NAME" %}} work as an OpenTelemetry collector?

{{< figure src="/media/docs/alloy/flow-diagram-small-alloy.png" alt="Alloy flow diagram" >}}

### Collect
{{< youtube bFyGd_Sr5W4 >}}

{{< param "PRODUCT_NAME" >}} uses more than 120 components to collect telemetry data from applications, databases, and OpenTelemetry collectors.
{{< param "PRODUCT_NAME" >}} supports collection using multiple ecosystems, including OpenTelemetry and Prometheus.

Telemetry data can be either pushed to {{< param "PRODUCT_NAME" >}}, or {{< param "PRODUCT_NAME" >}} can pull it from your data sources.
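Both models can run side by side in one configuration. The following sketch assumes placeholder endpoints; only the component names are real Alloy components:

```alloy
// Pull model: Alloy scrapes a Prometheus metrics endpoint.
prometheus.scrape "app" {
  targets    = [{"__address__" = "localhost:8080"}]
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}

// Push model: applications send OTLP data to Alloy.
otelcol.receiver.otlp "default" {
  grpc {
    endpoint = "0.0.0.0:4317"
  }

  output {
    traces = [otelcol.exporter.otlp.default.input]
  }
}

otelcol.exporter.otlp "default" {
  client {
    endpoint = "tempo.example.com:4317"
  }
}
```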

### Transform

{{< param "PRODUCT_NAME" >}} processes data and transforms it for sending.

You can use transformations to inject extra metadata into telemetry or filter out unwanted data.

### Write
{{< docs/learning-journeys title="Send logs to Grafana Cloud using Alloy" url="/docs/learning-journeys/send-logs-alloy-loki/" >}}
clayton-cornell (Contributor, Author) commented:

I wonder if sending logs is the "right" Learning Journey here. We don't have a single starting point LJ for Alloy. There are a bunch that use Alloy: https://grafana.com/docs/learning-journeys/ covers Linux, macOS, Windows, MySQL, PostgreSQL, etc.


{{< param "PRODUCT_NAME" >}} sends data to OpenTelemetry-compatible databases or collectors, the Grafana stack, or Grafana Cloud.
## Get started

{{< param "PRODUCT_NAME" >}} can also write alerting rules in compatible databases.
- [Install][Install] {{< param "PRODUCT_NAME" >}} on your platform
- Learn core [concepts][Concepts] including components, expressions, and pipelines
- Follow [tutorials][tutorials] for hands-on experience
- Explore [alloy-scenarios][scenarios] for real-world configuration examples
- Try the [Alloy for Beginners][beginners] workshop for interactive, scenario-based learning
- Explore the [component reference][reference] to see available components

## Next steps
## Learn more

* [Install][] {{< param "PRODUCT_NAME" >}}.
* Learn about the core [Concepts][] of {{< param "PRODUCT_NAME" >}}.
* Follow the [tutorials][] for hands-on learning about {{< param "PRODUCT_NAME" >}}.
* Learn how to [collect and forward data][Collect] with {{< param "PRODUCT_NAME" >}}.
* Check out the [reference][] documentation to find information about the {{< param "PRODUCT_NAME" >}} components, configuration blocks, and command line tools.
- [Why Alloy][Why Alloy]: Understand when {{< param "PRODUCT_NAME" >}} is the right choice
- [How Alloy works][How Alloy works]: Learn about the architecture and key capabilities
- [Requirements and expectations][Requirements]: Review deployment considerations and constraints
- [Supported platforms][Supported platforms]: Check platform compatibility
- [Estimate resource usage][Estimate resource usage]: Plan your deployment
- [Migrate from other collectors][migrate]: Move from OpenTelemetry Collector, Prometheus Agent, or Grafana Agent

[OpenTelemetry]: https://opentelemetry.io/ecosystem/distributions/
[OpenTelemetry]: https://opentelemetry.io/docs/collector/distributions/
[Install]: ../set-up/install/
[Concepts]: ../get-started/
[Collect]: ../collect/
[tutorials]: ../tutorials/
[reference]: ../reference/
[UI]: ../troubleshoot/debug/
[Why Alloy]: ./why-alloy/
[How Alloy works]: ./how-alloy-works/
[Requirements]: ./requirements/
[Supported platforms]: ../set-up/supported-platforms/
[Estimate resource usage]: ../set-up/estimate-resource-usage/
[migrate]: ../set-up/migrate/
[beginners]: https://github.com/grafana/Grafana-Alloy-for-Beginners
[scenarios]: https://github.com/grafana/alloy-scenarios
117 changes: 117 additions & 0 deletions docs/sources/introduction/how-alloy-works.md
Original file line number Diff line number Diff line change
@@ -0,0 +1,117 @@
---
canonical: https://grafana.com/docs/alloy/latest/introduction/how-alloy-works/
description: Learn how Grafana Alloy works and where it fits in your observability architecture
menuTitle: How Alloy works
title: How Grafana Alloy works
weight: 220
---

# How {{% param "FULL_PRODUCT_NAME" %}} works

Understanding the architecture and design of {{< param "PRODUCT_NAME" >}} helps you use it effectively.

## Where {{% param "PRODUCT_NAME" %}} fits

A typical observability setup has three layers: data sources that generate telemetry, collection tools that gather and process it, and storage backends with visualization frontends for querying and exploring data.

{{< param "PRODUCT_NAME" >}} operates in the collection layer, sitting between your data sources and your storage backends.
It acts as the bridge between them, performing three main functions in your telemetry pipeline.

### Collect telemetry data

{{< param "PRODUCT_NAME" >}} gathers telemetry from any source in your infrastructure.
You can configure it to scrape Prometheus endpoints for metrics or set up receivers to accept data pushed via the OpenTelemetry protocol.
It tails log files and reads from system outputs to capture application and infrastructure logs.
Service discovery automatically finds resources in Kubernetes, Docker, or cloud environments without requiring static configuration.
You can also integrate with databases, message queues, and other systems to capture telemetry from specialized sources.
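Service discovery can be sketched with a few components. In this hedged example, the remote write URL is a placeholder, and the discovery and scrape components are wired together through references:

```alloy
// Discover pods in the cluster; no static target list required.
discovery.kubernetes "pods" {
  role = "pod"
}

// Scrape every discovered pod and forward the metrics to storage.
prometheus.scrape "pods" {
  targets    = discovery.kubernetes.pods.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "https://mimir.example.com/api/v1/push"
  }
}
```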

### Transform and process data

Processing telemetry before sending it to backends optimizes costs and improves data quality.
Create filters to drop unwanted data or redact sensitive information like tokens and credentials from logs before they reach storage.
Add labels, metadata, or contextual information to enrich your data. For example, extract a cloud provider name from instance IDs to create useful aggregation labels.
Standardize attribute names across services when different teams use inconsistent naming conventions.
Implement sampling strategies to reduce high-volume data while preserving the signal you need for troubleshooting.
Convert between formats, such as transforming Prometheus metrics to OpenTelemetry format, to ensure compatibility with your backends.
Define routing rules to send different types of data to different destinations based on your operational requirements.
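A log-processing pipeline that redacts credentials and drops health-check noise might look like the following sketch. The regular expressions and endpoint URL are illustrative assumptions:

```alloy
// Scrub log lines before they reach storage.
loki.process "scrub" {
  forward_to = [loki.write.default.receiver]

  // Redact anything that looks like a password value.
  stage.replace {
    expression = "password=(\\S+)"
    replace    = "<redacted>"
  }

  // Drop health-check noise.
  stage.drop {
    expression = ".*/healthz.*"
  }
}

loki.write "default" {
  endpoint {
    url = "https://loki.example.com/loki/api/v1/push"
  }
}
```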

### Send to backends

{{< param "PRODUCT_NAME" >}} delivers processed telemetry to any storage system you choose.
Send data to Grafana Cloud for managed observability, or export to your self-managed Grafana stack components.
Connect to any Prometheus-compatible database for metrics and any OpenTelemetry-compatible backend for all signal types.
Write to multiple destinations simultaneously, sending the same data to different systems or routing different data types to specialized backends.
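Fan-out to multiple destinations is a matter of listing more than one receiver in `forward_to`. Endpoints in this sketch are placeholders:

```alloy
// One scrape, two destinations: the same metrics go to both backends.
prometheus.scrape "apps" {
  targets    = [{"__address__" = "localhost:8080"}]
  forward_to = [
    prometheus.remote_write.cloud.receiver,
    prometheus.remote_write.local.receiver,
  ]
}

prometheus.remote_write "cloud" {
  endpoint {
    url = "https://prometheus.example.net/api/prom/push"
  }
}

prometheus.remote_write "local" {
  endpoint {
    url = "http://mimir.internal:9009/api/v1/push"
  }
}
```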

## Component-based architecture

{{< param "PRODUCT_NAME" >}} uses modular [components][] that work like building blocks.
Each component performs a specific task, such as collecting metrics from Prometheus endpoints, receiving OpenTelemetry data, transforming and filtering telemetry, or sending data to backends.

You connect these components together to [build pipelines][] that match your exact requirements.
This modular approach makes configurations easier to understand, test, and maintain.

## Programmable pipelines

{{< param "PRODUCT_NAME" >}} uses a rich, [expression-based configuration language][syntax] that lets you reference data from one component in another, create dynamic configurations that respond to changing conditions, build reusable pipelines you can share across teams, and use built-in [functions][expressions] to transform and filter data.
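As one example of expressions at work, standard library functions such as `sys.env` let a configuration pull values from the environment instead of hardcoding them. The environment variable names here are illustrative:

```alloy
prometheus.remote_write "default" {
  endpoint {
    // Resolve connection details from the environment when the
    // configuration is loaded.
    url = sys.env("PROMETHEUS_URL")

    basic_auth {
      username = sys.env("PROMETHEUS_USER")
      password = sys.env("PROMETHEUS_PASSWORD")
    }
  }
}
```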

## Custom and shareable pipelines

You can create [custom components][] that combine multiple components into a single, reusable unit.
Share these custom components with your team or the community through the [module system][modules].
Use pre-built modules from the community or create your own.
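A custom component is defined with a `declare` block and then instantiated like any built-in component. This is a minimal sketch; the component name, argument, and URLs are invented for illustration:

```alloy
// A custom component that wraps a scrape-and-write pipeline.
declare "scrape_to_mimir" {
  argument "url" { }

  prometheus.scrape "default" {
    targets    = [{"__address__" = "localhost:8080"}]
    forward_to = [prometheus.remote_write.default.receiver]
  }

  prometheus.remote_write "default" {
    endpoint {
      url = argument.url.value
    }
  }
}

// Instantiate it like any built-in component.
scrape_to_mimir "prod" {
  url = "https://mimir.example.com/api/v1/push"
}
```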

## Enterprise-ready features

As your systems grow more complex, {{< param "PRODUCT_NAME" >}} scales with you.
[Clustering][] lets you configure instances to form a cluster for automatic workload distribution and high availability.
Centralized configuration retrieves settings from remote servers for fleet management.
Kubernetes-native capabilities let you interact with Kubernetes resources directly without learning separate operators.

## Built-in debugging tools

{{< param "PRODUCT_NAME" >}} includes a [built-in user interface][debug] that helps you visualize your component pipelines, inspect component states and outputs, troubleshoot configuration issues, and monitor performance.

## Deployment patterns

Choose the [deployment pattern][deploy] that fits your architecture.

**Edge deployment:** Deploy {{< param "PRODUCT_NAME" >}} close to your data sources for minimal latency.
Run it as a DaemonSet in Kubernetes to collect from every node, install it on each host for infrastructure monitoring, or deploy it alongside applications for local processing.

**Gateway deployment:** Deploy {{< param "PRODUCT_NAME" >}} as a centralized gateway.
Configure your applications to send telemetry to {{< param "PRODUCT_NAME" >}} gateways, which process and forward data to backends.
Applications only need to know about the gateway endpoints.

**Hybrid deployment:** Combine edge and gateway approaches.
Deploy edge instances to handle initial collection and filtering close to sources, then forward to gateway instances for aggregation and final processing.
This pattern reduces bandwidth usage and enables centralized policy enforcement while maintaining local processing capabilities.

## Integrations

{{< param "PRODUCT_NAME" >}} integrates with Grafana Cloud and self-managed Grafana stacks, routing metrics to Mimir, logs to Loki, traces to Tempo, and profiles to Pyroscope.
It also works with the broader Prometheus ecosystem through full compatibility with the Prometheus exposition format and service discovery mechanisms, and with any OpenTelemetry-compatible backend through OTLP support.

You can also connect to other ecosystems, including InfluxDB, Elasticsearch, and cloud platforms like AWS, Google Cloud Platform, and Azure.

## Next steps

- Review [requirements and expectations][requirements] to understand deployment considerations
- [Install][Install] {{< param "PRODUCT_NAME" >}} to get started
- Learn core [concepts][Concepts] including components, expressions, and pipelines
- Follow [tutorials][tutorials] for hands-on experience
- Explore the [component reference][reference] to see available components

[requirements]: ../requirements/
[Install]: ../../set-up/install/
[Concepts]: ../../get-started/
[tutorials]: ../../tutorials/
[reference]: ../../reference/
[components]: ../../get-started/components/
[build pipelines]: ../../get-started/components/build-pipelines/
[syntax]: ../../get-started/syntax/
[expressions]: ../../get-started/expressions/
[custom components]: ../../get-started/components/custom-components/
[modules]: ../../get-started/modules/
[Clustering]: ../../get-started/clustering/
[debug]: ../../troubleshoot/debug/
[deploy]: ../../set-up/deploy/