Update navigation and other remaining old docs (kubernetes#26)
* Update navigation and other remaining old docs

* Add lightbox for fig and gallery

* Cleanup copy

* Update copy mistakes

* Update copy mistakes
grampelberg authored Sep 19, 2018
1 parent 190408e commit e8184f1
Showing 23 changed files with 182 additions and 90 deletions.
@@ -1,40 +1,35 @@
+++
date = "2018-07-31T12:00:00-07:00"
title = "Adding your service to the mesh"
title = "Adding your service"
[menu.l5d2docs]
name = "Adding Your Service"
weight = 6
+++

In order for your service to take advantage of Linkerd, it needs to be added
to the service mesh. This is done by using the Linkerd CLI to add the Linkerd
proxy sidecar to each pod. By doing this as a rolling update, the availability
of your application will not be affected.
In order for your service to take advantage of Linkerd, it needs to have the
proxy sidecar added to its resource definition. This is done by using the
Linkerd [CLI](../architecture/#cli) to update the definition and output YAML
that can be passed to `kubectl`. By using Kubernetes' rolling updates, the
availability of your application will not be affected.

## Prerequisites

* Applications that use protocols where the server sends data before the client
sends data may require additional configuration. See the
[Protocol support](#protocol-support) section below.
* gRPC applications that use grpc-go must use grpc-go version 1.3 or later due
to a [bug](https://github.com/grpc/grpc-go/issues/1120) in earlier versions.

## Adding your service

To add your service to the service mesh, run:
To add Linkerd to your service, run:

```bash
linkerd inject deployment.yml | kubectl apply -f -
linkerd inject deployment.yml \
| kubectl apply -f -
```

`deployment.yml` is the Kubernetes config file containing your
application. This will trigger a rolling update of your deployment, replacing
each pod with a new one that additionally contains the Linkerd sidecar proxy.
application. This will add the proxy sidecar along with an `initContainer` that
configures iptables to pass all traffic through the proxy. By applying this new
configuration via `kubectl`, a rolling update of your deployment will be
triggered, replacing each pod with a new one.
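Before applying anything to the cluster, you can sanity-check the injected YAML for the added containers. The snippet below is a sketch that uses a trimmed, hand-written stand-in manifest (not real `linkerd inject` output) purely to illustrate the grep:

```shell
# Illustrative only: a hand-written sample standing in for `linkerd inject`
# output, to show how you might confirm the added containers are present.
cat > /tmp/injected-sample.yaml <<'EOF'
spec:
  initContainers:
  - name: linkerd-init
  containers:
  - name: my-app
  - name: linkerd-proxy
EOF

# List the Linkerd containers the inject step is expected to add.
grep -E 'linkerd-(init|proxy)' /tmp/injected-sample.yaml
```

In practice you would run the same grep against the output of `linkerd inject deployment.yml` saved to a file.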

You will know that your service has been successfully added to the service mesh
if its proxy status is green in the Linkerd dashboard.
if its pods are reported as meshed in the Meshed column of the Linkerd
dashboard.

{{< fig src="/images/2/dashboard-data-plane.png" title="linkerd dashboard" >}}
{{< fig src="/images/getting-started/stat.png" title="Dashboard" >}}

You can always get to the Linkerd dashboard by running:

@@ -80,7 +75,15 @@ linkerd inject deployment.yml --skip-inbound-ports=35 \
| kubectl apply -f -
```

## Inject Command Reference
## Inject Reference

For more information on how the inject command works and all of the parameters
that can be set, look at the [Inject Command Reference](../inject-reference)
that can be set, look at the [reference](../cli/inject/).

## Notes

* Applications that use protocols where the server sends data before the client
sends data may require additional configuration. See the
[Protocol support](#protocol-support) section above.
* gRPC applications that use grpc-go must use grpc-go version 1.3 or later due
to a [bug](https://github.com/grpc/grpc-go/issues/1120) in earlier versions.
@@ -26,14 +26,15 @@ The control plane is made up of four components:
- Prometheus - All of the metrics exposed by Linkerd are scraped via Prometheus
and stored here. This is an instance of Prometheus that has been configured to
work specifically with the data that Linkerd generates. There are
[instructions](/2/prometheus) if you would like to integrate this with an
[instructions](/2/observability/prometheus/#exporting-metrics)
if you would like to integrate this with an
existing Prometheus installation.

- Grafana - Linkerd comes with many dashboards out of the box. The Grafana
component is used to render and display these dashboards. You can reach these
dashboards via links in the Linkerd dashboard itself.

{{< fig src="/images/architecture/control-plane.png" title="Control Plane" >}}
{{< fig src="/images/architecture/control-plane.png" title="Architecture" >}}

## Data Plane

@@ -121,8 +122,12 @@ The dashboards that are provided out of the box include:

## Prometheus

Prometheus is a cloud native monitoring solution that is used to store all the
Linkerd metrics. It is installed as part of the control plane and provides the
data used by the CLI, dashboard and Grafana.
Prometheus is a cloud native monitoring solution that is used to collect
and store all the Linkerd metrics. It is installed as part of the control plane
and provides the data used by the CLI, dashboard and Grafana.

{{< fig src="/images/architecture/metrics.png" title="Metrics Collection" >}}
The proxy exposes a `/metrics` endpoint for Prometheus to scrape on port 4191.
This is scraped every 10 seconds. These metrics are then available to all the
other Linkerd components, such as the CLI and dashboard.

{{< fig src="/images/architecture/prometheus.svg" title="Metrics Collection" >}}
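A minimal scrape job matching the behavior described above might look like the fragment below. This is a sketch, not Linkerd's bundled configuration: the job name and discovery rules are assumptions.

```yaml
scrape_configs:
  - job_name: 'linkerd-proxy'   # assumed name, for illustration
    scrape_interval: 10s        # matches the interval described above
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pod ports that expose the proxy's metrics endpoint.
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        action: keep
        regex: "4191"
```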
File renamed without changes.
18 changes: 18 additions & 0 deletions linkerd.io/content/2/cli/_index.md
@@ -0,0 +1,18 @@
+++
date = "2018-09-17T08:00:00-07:00"
title = "Overview"
[sitemap]
priority = 1.0
[menu.l5d2docs]
name = "CLI"
identifier = "cli"
weight = 7
+++

The Linkerd CLI is the primary way to interact with Linkerd. It can install the
control plane to your cluster, add the proxy to your service and provide
detailed metrics for how your service is performing.

For reference, check out the commands below:

{{% sectiontoc "cli" %}}
@@ -1,19 +1,22 @@
+++
date = "2018-08-28T08:00:00-07:00"
title = "Linkerd Inject Reference"
title = "Inject"
description = "The inject command makes it easy to add Linkerd to your service. It consumes Kubernetes resources in YAML format and adds the proxy sidecar. The output is in a format that can be immediately applied to the cluster via kubectl."
aliases = [
"/2/inject-reference/"
]
[menu.l5d2docs]
name = "Inject Command Reference"
weight = 7
name = "Inject"
parent = "cli"
+++

The `linkerd inject` command allows for a quick and reliable setup of the Linkerd Proxy in a Kubernetes Deployment.
This page is useful as a reference to help you understand what `linkerd inject` is doing under the hood,
as well as provide a reference for the flags that can be passed at the command line.
<br /><br />

If you run the command `linkerd inject -h` it will provide you with the same information as the table below:
<br /><br />
The `linkerd inject` command allows for a quick and reliable setup of the
Linkerd Proxy in a Kubernetes Deployment. This page is useful as a reference to
help you understand what `linkerd inject` is doing under the hood, as well as
provide a reference for the flags that can be passed at the command line.

If you run the command `linkerd inject -h` it will provide you with the same
information as the table below:

{.table .pure-table .table-striped .table-responsive .table-hover}
| Flag | Explanation | Example |
@@ -40,17 +43,18 @@ If you run the command `linkerd inject -h` it will provide you with the same inf
| `-l, --linkerd-namespace` | Namespace in which Linkerd is installed (default "linkerd"). If you modified the `linkerd install` command and adjusted the Kubernetes Namespace it was deployed into, you'll want to adjust it here. | `-l="default"` |
| `--verbose` | Turn on debug logging. Log all the things. (Especially those things that `linkerd inject` does.) | `--verbose` |

<br />

## What `linkerd inject` Is Doing

`linkerd inject` is modifying the Kubernetes Deployment manifest that is being passed to it
either as a file or as a stream to its standard in. It is adding two things:
either as a file or as a stream to its stdin. It is adding two things:

1. An Init Container (supported as of Kubernetes version 1.6 or greater)

1. A Linkerd Proxy sidecar container into each Pod belonging to your Deployment

The Init Container is responsible for pulling configuration (such as certificates) from the Kubernetes API/Linkerd Controller,
as well as provide configuration to the Linkerd Proxy container for its runtime.
The Init Container is responsible for pulling configuration (such as
certificates) from the Kubernetes API/Linkerd Controller, as well as providing
configuration to the Linkerd Proxy container for its runtime.

## Example Deployment

@@ -84,7 +88,10 @@ spec:
Now, we can run the `linkerd inject` command as follows:

```bash
$ linkerd inject --proxy-log-level="debug" --skip-outbound-ports=3306 deployment.yaml > deployment_with_linkerd.yaml
linkerd inject \
--proxy-log-level="debug" \
--skip-outbound-ports=3306 \
deployment.yaml > deployment_with_linkerd.yaml
```

The output of that file should look like the following:
@@ -176,7 +183,7 @@ spec:
terminationMessagePolicy: FallbackToLogsOnError
status: {}
---

```

Note here how the Init Container and Proxy Sidecar Container are added to the manifest with configuration we passed as command line flags.
Note here how the `initContainer` and `linkerd-proxy` sidecar are added to the
manifest with configuration we passed as command line flags.
@@ -1,8 +1,11 @@
+++
date = "2018-07-31T12:00:00-07:00"
title = "Example: debugging an app"
title = "Debugging a Failing Application"
aliases = [
"/2/debugging-an-app/"
]
[menu.l5d2docs]
name = "Example: Debugging"
name = "Debugging"
weight = 8
+++

File renamed without changes.
@@ -121,12 +121,10 @@ the help for `install`. To do this, run:
linkerd install | kubectl apply -f -
```

`linkerd install` generates a list of Kubernetes resources. Run it
standalone if you would like to understand what is going on. This YAML can
be integrated with any kind of automation you would like to use with your
cluster. By piping the output of `linkerd install` into `kubectl`, the Linkerd
control plane resources will be added to your cluster and start running
immediately.
`linkerd install` generates a list of Kubernetes resources. Run it standalone if
you would like to understand what is going on. By piping the output of `linkerd
install` into `kubectl`, the Linkerd control plane resources will be added to
your cluster and start running immediately.

Depending on the speed of your internet connection, it may take a minute or two
for your Kubernetes cluster to pull the Linkerd images. While that’s happening,
@@ -253,7 +251,7 @@ This will show the "golden" metrics for each deployment:
- Request rates
- Latency distribution percentiles

To dig in a little further, it is possible `top` the running services in real
To dig in a little further, it is possible to `top` the running services in real
time and get an idea of what is happening on a per-path basis. To see this, you
can run:

File renamed without changes.
24 changes: 24 additions & 0 deletions linkerd.io/content/2/observability/_index.md
@@ -0,0 +1,24 @@
+++
date = "2018-09-17T08:00:00-07:00"
title = "Overview"
weight = 1
[sitemap]
priority = 1.0
[menu.l5d2docs]
name = "Observability"
identifier = "observability"
weight = 9
+++

Linkerd provides extensive observability functionality. It automatically
instruments top-line metrics such as request volume, success rates, and latency
distributions. In addition to these top-line metrics, Linkerd provides real time
streams of the requests for all incoming and outgoing traffic.

To help visualize all this data, there is a [CLI](../architecture/#cli),
[dashboard](../architecture/#dashboard) and out of the box
[Grafana dashboards](../architecture/#grafana).

Some deep dive topics on metrics:

{{% sectiontoc "observability" %}}
@@ -1,11 +1,17 @@
+++
date = "2018-07-31T12:00:00-07:00"
title = "Exporting metrics to Prometheus"
title = "Prometheus"
description = "Prometheus collects and stores all the Linkerd metrics. It is a component of the control plane and can be integrated with existing metric systems, such as an existing Prometheus install."
aliases = [
"/2/prometheus/"
]
[menu.l5d2docs]
name = "Exporting to Prometheus"
weight = 9
name = "Prometheus"
parent = "observability"
+++

## Exporting Metrics

If you have an existing Prometheus cluster, it is very easy to export Linkerd's
rich telemetry data to your cluster. Simply add the following item to your
`scrape_configs` in your Prometheus config file (replace `{{.Namespace}}` with
@@ -1,9 +1,13 @@
+++
date = "2018-07-31T12:00:00-07:00"
title = "Metrics exported by the proxy"
title = "Proxy Metrics Reference"
description = "The Linkerd proxy natively exports Prometheus metrics for all incoming and outgoing traffic."
aliases = [
"/2/proxy-metrics/"
]
[menu.l5d2docs]
name = "Proxy Metrics"
weight = 10
name = "Proxy Metrics Reference"
parent = "observability"
+++

The Linkerd proxy exposes metrics that describe the traffic flowing through the
@@ -13,16 +17,18 @@ port (default: `:4191`) in the [Prometheus format][prom-format]:
## Protocol-Level Metrics

* `request_total`: A counter of the number of requests the proxy has received.
This is incremented when the request stream begins.
This is incremented when the request stream begins.

* `response_total`: A counter of the number of responses the proxy has received.
This is incremented when the response stream ends.
This is incremented when the response stream ends.

* `response_latency_ms`: A histogram of response latencies. This measurement
reflects the [time-to-first-byte][ttfb] (TTFB) by recording the elapsed time
between the proxy processing a request's headers and the first data frame of the
response. If a response does not include any data, the end-of-stream event is
used. The TTFB measurement is used so that Linkerd accurately reflects
application behavior when a server provides response headers immediately but is
slow to begin serving the response body.
reflects the [time-to-first-byte][ttfb] (TTFB) by recording the elapsed time
between the proxy processing a request's headers and the first data frame of the
response. If a response does not include any data, the end-of-stream event is
used. The TTFB measurement is used so that Linkerd accurately reflects
application behavior when a server provides response headers immediately but is
slow to begin serving the response body.

Note that latency measurements are not exported to Prometheus until the stream
_completes_. This is necessary so that latencies can be labeled with the appropriate
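As a sketch of how these counters are typically consumed (the numbers below are made up, not real proxy output), a success rate can be derived by dividing successful responses by total requests:

```shell
# Hypothetical counter samples, as might be scraped from the proxy.
request_total=120
successful_responses=114

# success_rate = successful responses / total requests
awk -v t="$request_total" -v s="$successful_responses" \
  'BEGIN { printf "success_rate=%.2f\n", s / t }'
# prints success_rate=0.95
```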
Expand Up @@ -20,18 +20,22 @@ run Linkerd on. (See [Adding Your Service](../adding-your-service) for more.)
Once a service is running with Linkerd, you can use Linkerd's UI to inspect and
manipulate it.

You can [get started](../getting-started) in minutes!

## Architecture

{{< fig src="/images/architecture/control-plane.png" title="Architecture" >}}

Let’s take each of Linkerd's components in turn.

Linkerd's UI is comprised of a CLI (helpfully called `linkerd`) and a web UI.
The CLI runs on your local machine; the web UI is hosted by the control plane.

The Linkerd control plane runs on your cluster as a set of services that drive
the behavior of the data plane. These services accomplish various
things--aggregating telemetry data, providing a user-facing API, providing
control data to the data plane proxies, etc. On Kubernetes, they run in a
dedicated Kubernetes namespace (also called `linkerd` by default).
The Linkerd control plane is composed of a number of services that run on your
cluster and drive the behavior of the data plane. These services accomplish
various things--aggregating telemetry data, providing a user-facing API,
providing control data to the data plane proxies, etc. By default, they run in a
dedicated `linkerd` namespace.

Finally, Linkerd's data plane is comprised of ultralight, transparent proxies
that are deployed in front of a service. These proxies automatically handle all
@@ -41,6 +45,9 @@ receiving control signals from, a control plane. This design allows Linkerd to
measure and manipulate traffic to and from your service without introducing
excessive latency.

You can check out the [architecture](../architecture/) for more
details on the components, what they do and how it all fits together.

## Using Linkerd

To use Linkerd, you use the Linkerd CLI and the web UI. The CLI and the web UI
@@ -53,4 +60,3 @@ from a CI/CD system.)

A brief overview of the CLI’s functionality can be seen by running `linkerd
--help`.
