diff --git a/docs/root/start/sandboxes/cors.rst b/docs/root/start/sandboxes/cors.rst
index 8e3ac24996ee1..9b122b92706a6 100644
--- a/docs/root/start/sandboxes/cors.rst
+++ b/docs/root/start/sandboxes/cors.rst
@@ -37,12 +37,19 @@
Ensure that you have recent versions of ``docker`` and ``docker-compose``.
A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
-**Step 2: Clone the Envoy repo and start all of our containers**
+**Step 2: Clone the Envoy repo**
-If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy``
-or ``git clone https://github.com/envoyproxy/envoy.git``
+If you have not cloned the Envoy repo, clone it with:
-Terminal 1
+``git clone git@github.com:envoyproxy/envoy``
+
+or
+
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Start all of our containers**
+
+Switch to the ``frontend`` directory in the ``cors`` example, and start the containers:
.. code-block:: console
@@ -57,12 +64,13 @@ Terminal 1
frontend_front-envoy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:8000->8000/tcp, 0.0.0.0:8001->8001/tcp
frontend_frontend-service_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
-Terminal 2
+Now, switch to the ``backend`` directory in the ``cors`` example, and start the containers:
.. code-block:: console
$ pwd
envoy/examples/cors/backend
+ $ docker-compose pull
$ docker-compose up --build -d
$ docker-compose ps
@@ -71,12 +79,13 @@ Terminal 2
backend_backend-service_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
backend_front-envoy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:8002->8000/tcp, 0.0.0.0:8003->8001/tcp
-**Step 3: Test Envoy's CORS capabilities**
+**Step 4: Test Envoy's CORS capabilities**
-You can now open a browser to view your frontend service at ``localhost:8000``.
+You can now open a browser to view your frontend service at http://localhost:8000.
Results of the cross-origin request will be shown on the page under *Request Results*.
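As an aside to the browser-based check, the ``CORS`` response headers can also be inspected from the command line. A minimal sketch (the saved response below is illustrative; the exact backend path is an assumption, with ``/cors/disabled`` being the only path this guide names):

```shell
# Illustrative saved response. Against the running sandbox you would
# capture real headers with something like:
#   curl -s -D - -H "Origin: http://example.com" http://localhost:8002/cors/open -o /dev/null
cors_response='HTTP/1.1 200 OK
access-control-allow-origin: http://example.com
content-type: text/plain'

# A cross-origin request passes the browser check only if this header is present:
printf '%s\n' "$cors_response" | grep -i '^access-control-allow-origin'
# prints: access-control-allow-origin: http://example.com
```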
-Your browser's CORS enforcement logs can be found in the console.
+
+Your browser's ``CORS`` enforcement logs can be found in the browser console.
For example:
@@ -85,13 +94,14 @@ For example:
Access to XMLHttpRequest at 'http://192.168.99.100:8002/cors/disabled' from origin 'http://192.168.99.101:8000' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
-**Step 4: Check stats of backend via admin**
+**Step 5: Check stats of backend via admin**
+
+When Envoy runs, it can listen to ``admin`` requests if a port is configured.
-When Envoy runs, it can listen to ``admin`` requests if a port is configured. In the example
-configs, the backend admin is bound to port ``8003``.
+In the example configs, the backend admin is bound to port ``8003``.
-If you go to ``localhost:8003/stats`` you will be able to view
-all of the Envoy stats for the backend. You should see the CORS stats for
+If you browse to http://localhost:8003/stats you will be able to view
+all of the Envoy stats for the backend. You should see the ``CORS`` stats for
invalid and valid origins increment as you make requests from the frontend cluster.
.. code-block:: none
diff --git a/docs/root/start/sandboxes/csrf.rst b/docs/root/start/sandboxes/csrf.rst
index 66268dd1e50ac..0893598b21d22 100644
--- a/docs/root/start/sandboxes/csrf.rst
+++ b/docs/root/start/sandboxes/csrf.rst
@@ -38,12 +38,19 @@
Ensure that you have recent versions of ``docker`` and ``docker-compose``.
A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
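A quick way to confirm both tools are on your ``PATH`` (version thresholds are deliberately not asserted here; any recent release should do):

```shell
# Report whether the tools the sandboxes need are installed.
docker_status=$(command -v docker >/dev/null 2>&1 && echo installed || echo "not found")
compose_status=$(command -v docker-compose >/dev/null 2>&1 && echo installed || echo "not found")
echo "docker: ${docker_status}"
echo "docker-compose: ${compose_status}"
```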
-**Step 2: Clone the Envoy repo and start all of our containers**
+**Step 2: Clone the Envoy repo**
-If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy``
-or ``git clone https://github.com/envoyproxy/envoy.git``
+If you have not cloned the Envoy repo, clone it with:
-Terminal 1 (samesite)
+``git clone git@github.com:envoyproxy/envoy``
+
+or
+
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Start all of our containers**
+
+Switch to the ``samesite`` directory in the ``csrf`` example, and start the containers:
.. code-block:: console
@@ -58,7 +65,7 @@ Terminal 1 (samesite)
samesite_front-envoy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:8000->8000/tcp, 0.0.0.0:8001->8001/tcp
samesite_service_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
-Terminal 2 (crosssite)
+Now, switch to the ``crosssite`` directory in the ``csrf`` example, and start the containers:
.. code-block:: console
@@ -72,27 +79,19 @@ Terminal 2 (crosssite)
crosssite_front-envoy_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 0.0.0.0:8002->8000/tcp, 0.0.0.0:8003->8001/tcp
crosssite_service_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 8000/tcp
-**Step 3: Test Envoy's CSRF capabilities**
-
-You can now open a browser to view your ``crosssite`` frontend service.
+**Step 4: Test Envoy's CSRF capabilities**
-.. code-block:: console
-
- $ open "http://localhost:8002"
+You can now open a browser at http://localhost:8002 to view your ``crosssite`` frontend service.
Enter the IP of the ``samesite`` machine to demonstrate cross-site requests. Requests
with enforcement enabled will fail. By default this field will be populated with ``localhost``.
-To demonstrate same-site requests open the frontend service for ``samesite`` and enter
-the IP address of the ``samesite`` machine as the destination.
-.. code-block:: console
-
- $ open "http://localhost:8000"
+To demonstrate same-site requests, open the frontend service for ``samesite`` at http://localhost:8000
+and enter the IP address of the ``samesite`` machine as the destination.
Results of the cross-site request will be shown on the page under *Request Results*.
-Your browser's CSRF enforcement logs can be found in the console and in the
+Your browser's ``CSRF`` enforcement logs can be found in the browser console and in the
network tab.
For example:
@@ -102,14 +101,14 @@ For example:
Failed to load resource: the server responded with a status of 403 (Forbidden)
If you change the destination to be the same as one displaying the website and
-set the CSRF enforcement to enabled the request will go through successfully.
+set the ``CSRF`` enforcement to enabled, the request will go through successfully.
-**Step 4: Check stats of backend via admin**
+**Step 5: Check stats of backend via admin**
When Envoy runs, it can listen to ``admin`` requests if a port is configured. In the example
configs, the backend admin is bound to port ``8001``.
-If you go to ``localhost:8001/stats`` you will be able to view
+If you browse to http://localhost:8001/stats you will be able to view
all of the Envoy stats for the backend. You should see the ``CSRF`` stats for
valid and invalid requests increment as you make requests from the frontend cluster.
diff --git a/docs/root/start/sandboxes/ext_authz.rst b/docs/root/start/sandboxes/ext_authz.rst
index fd890c5562996..522d37392a04c 100644
--- a/docs/root/start/sandboxes/ext_authz.rst
+++ b/docs/root/start/sandboxes/ext_authz.rst
@@ -23,12 +23,21 @@
Ensure that you have recent versions of ``docker`` and ``docker-compose``.
A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
-**Step 2: Clone the Envoy repository and start all of our containers**
+**Step 2: Clone the Envoy repo**
-If you have not cloned the Envoy repository, clone it with ``git clone git@github.com:envoyproxy/envoy``
-or ``git clone https://github.com/envoyproxy/envoy.git``.
+If you have not cloned the Envoy repo, clone it with:
-To build this sandbox example and start the example services, run the following commands::
+``git clone git@github.com:envoyproxy/envoy``
+
+or
+
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Start all of our containers**
+
+To build this sandbox example and start the example services, run the following commands:
+
+.. code-block:: console
$ pwd
envoy/examples/ext_authz
@@ -44,6 +53,7 @@
ext_authz_upstream-service_1 python3 /app/service/server.py Up
.. note::
+
This sandbox has multiple setups, controlled by the ``FRONT_ENVOY_YAML`` environment
variable, which points to the effective Envoy configuration to be used. The default value
of ``FRONT_ENVOY_YAML`` can be defined in the ``.env`` file or provided inline when running the ``docker-compose up``
@@ -54,7 +64,9 @@ front-envoy with ext_authz HTTP filter with gRPC service ``V3`` (this is specifi
The possible values of ``FRONT_ENVOY_YAML`` can be found inside the ``envoy/examples/ext_authz/config``
directory.
-For example, to run Envoy with ext_authz HTTP filter with HTTP service will be::
+For example, to run Envoy with the ext_authz HTTP filter using an HTTP service:
+
+.. code-block:: console
$ pwd
envoy/examples/ext_authz
@@ -64,9 +76,11 @@ For example, to run Envoy with ext_authz HTTP filter with HTTP service will be::
$ FRONT_ENVOY_YAML=config/http-service.yaml docker-compose up --build -d
$ # Or you can update the .env file with the above FRONT_ENVOY_YAML value, so you don't have to specify it when running the "up" command.
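Whichever configuration is selected, the request the filter authorizes carries an ordinary ``Authorization`` header. A minimal sketch of composing it (``token1`` is one of the tokens this guide says is defined in ``envoy/examples/ext_authz/auth/users.json``):

```shell
# Compose the header the ext_authz filter checks.
TOKEN="token1"
AUTH_HEADER="Authorization: Bearer ${TOKEN}"
echo "${AUTH_HEADER}"
# prints: Authorization: Bearer token1
# Against the running sandbox you would then send:
#   curl -H "${AUTH_HEADER}" localhost:8000/service
```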
-**Step 3: Access the upstream-service behind the Front Envoy**
+**Step 4: Access the upstream-service behind the Front Envoy**
-You can now try to send a request to upstream-service via the front-envoy as follows::
+You can now try to send a request to upstream-service via the front-envoy as follows:
+
+.. code-block:: console
$ curl -v localhost:8000/service
* Trying 127.0.0.1...
@@ -87,10 +101,13 @@ filter employed by Envoy rejected the call. To let the request reach the upstrea
to provide a ``Bearer`` token via the ``Authorization`` header.
.. note::
+
A complete list of users is defined in the ``envoy/examples/ext_authz/auth/users.json`` file. For
example, the ``token1`` used in the example below corresponds to ``user1``.
-An example of successful requests can be observed as follows::
+An example of a successful request can be observed as follows:
+
+.. code-block:: console
$ curl -v -H "Authorization: Bearer token1" localhost:8000/service
* Trying 127.0.0.1...
@@ -114,7 +131,9 @@ An example of successful requests can be observed as follows::
We can also employ `Open Policy Agent <https://www.openpolicyagent.org>`_ server (with
`envoy_ext_authz_grpc `_ plugin enabled)
-as the authorization server. To run this example::
+as the authorization server. To run this example:
+
+.. code-block:: console
$ pwd
envoy/examples/ext_authz
@@ -123,7 +142,9 @@ as the authorization server. To run this example::
$ docker-compose down
$ FRONT_ENVOY_YAML=config/opa-service/v2.yaml docker-compose up --build -d
-And sending a request to the upstream service (via the Front Envoy) gives::
+And sending a request to the upstream service (via the Front Envoy) gives:
+
+.. code-block:: console
$ curl localhost:8000/service --verbose
* Trying ::1...
@@ -145,7 +166,9 @@ And sending a request to the upstream service (via the Front Envoy) gives::
Hello OPA from behind Envoy!
From the logs, we can observe the policy decision message from the Open Policy Agent server (for -the above request against the defined policy in ``config/opa-service/policy.rego``):: +the above request against the defined policy in ``config/opa-service/policy.rego``): + +.. code-block:: console $ docker-compose logs ext_authz-opa-service | grep decision_id -A 30 ext_authz-opa-service_1 | "decision_id": "8143ca68-42d8-43e6-ade6-d1169bf69110", @@ -180,7 +203,9 @@ the above request against the defined policy in ``config/opa-service/policy.rego ext_authz-opa-service_1 | "method": "GET", ext_authz-opa-service_1 | "path": "/service", -Trying to send a request with method other than ``GET`` gives a rejection:: +Trying to send a request with method other than ``GET`` gives a rejection: + +.. code-block:: console $ curl -X POST localhost:8000/service --verbose * Trying ::1... diff --git a/docs/root/start/sandboxes/fault_injection.rst b/docs/root/start/sandboxes/fault_injection.rst index a091c2ada258a..c7034f37f9ec1 100644 --- a/docs/root/start/sandboxes/fault_injection.rst +++ b/docs/root/start/sandboxes/fault_injection.rst @@ -8,7 +8,7 @@ This simple example demonstrates Envoy's :ref:`fault injection `_. 
-**Step 2: Clone the Envoy repo and start all of our containers**
+**Step 2: Clone the Envoy repo**
-If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy``
-or ``git clone https://github.com/envoyproxy/envoy.git``
+If you have not cloned the Envoy repo, clone it with:
+``git clone git@github.com:envoyproxy/envoy``
+
+or
+
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Start all of our containers**
Terminal 1
@@ -37,7 +43,7 @@ Terminal 1
fault-injection_backend_1 gunicorn -b 0.0.0.0:80 htt Up 0.0.0.0:8080->80/tcp
fault-injection_envoy_1 /docker-entrypoint.sh /usr Up 10000/tcp, 0.0.0.0:9211->9211/tcp, 0.0.0.0:9901->9901/tcp
-**Step 3: Start sending continuous stream of HTTP requests**
+**Step 4: Start sending continuous stream of HTTP requests**
Terminal 2
@@ -50,7 +56,7 @@ Terminal 2
The script above (``send_request.sh``) sends a continuous stream of HTTP requests to Envoy, which in turn forwards the requests to the backend container. Fault injection is configured in Envoy but turned off (i.e. affects 0% of requests). Consequently, you should see a continuous sequence of HTTP 200 response codes.
-**Step 4: Test Envoy's abort fault injection**
+**Step 5: Test Envoy's abort fault injection**
Turn on *abort* fault injection via the runtime using the commands below.
@@ -72,7 +78,7 @@ Terminal 3
$ bash disable_abort_fault_injection.sh
-**Step 5: Test Envoy's delay fault injection**
+**Step 6: Test Envoy's delay fault injection**
Turn on *delay* fault injection via the runtime using the commands below.
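The enable/disable scripts in these steps drive Envoy's filesystem runtime, which reads one value per file, with dots in a key mapped to directories. A hypothetical sketch of the kind of thing an enable-delay script does (the directory layout here is an assumption for illustration; the scripts in the example directory are authoritative):

```shell
# Write fault-filter runtime values the way a filesystem runtime expects:
# one file per key, nested directories for the dotted key segments.
RUNTIME_ROOT="$(mktemp -d)"                   # stand-in for the real runtime root
KEY_DIR="${RUNTIME_ROOT}/fault/http/delay"    # fault.http.delay.* keys
mkdir -p "${KEY_DIR}"
echo 100 > "${KEY_DIR}/fixed_delay_percent"   # delay every request
echo 3000 > "${KEY_DIR}/fixed_duration_ms"    # by three seconds
cat "${KEY_DIR}/fixed_delay_percent"
# prints: 100
```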
@@ -93,7 +99,7 @@ Terminal 3
$ bash disable_delay_fault_injection.sh
-**Step 5: Check the current runtime filesystem**
+**Step 7: Check the current runtime filesystem**
To see the current runtime filesystem overview:
diff --git a/docs/root/start/sandboxes/front_proxy.rst b/docs/root/start/sandboxes/front_proxy.rst
index 41baf801a5090..52c6c284a9add 100644
--- a/docs/root/start/sandboxes/front_proxy.rst
+++ b/docs/root/start/sandboxes/front_proxy.rst
@@ -16,11 +16,14 @@ Below you can see a graphic showing the docker compose deployment:
All incoming requests are routed via the front Envoy, which is acting as a reverse proxy
sitting on the edge of the ``envoymesh`` network. Ports ``8080``, ``8443``, and ``8001`` are
exposed by docker compose (see :repo:`/examples/front-proxy/docker-compose.yaml`) to handle ``HTTP``, ``HTTPS`` calls
-to the services and requests to ``/admin`` respectively. Moreover, notice that all traffic routed
-by the front Envoy to the service containers is actually routed to the service Envoys
-(routes setup in :repo:`/examples/front-proxy/front-envoy.yaml`). In turn the service Envoys route
-the request to the Flask app via the loopback address (routes setup in :repo:`/examples/front-proxy/service-envoy.yaml`).
-This setup illustrates the advantage of running service Envoys collocated with your services: all
+to the services and requests to ``/admin`` respectively.
+
+Moreover, notice that all traffic routed by the front Envoy to the service containers is actually
+routed to the service Envoys (routes set up in :repo:`/examples/front-proxy/front-envoy.yaml`).
+
+In turn, the service Envoys route the request to the Flask app via the loopback
+address (routes set up in :repo:`/examples/front-proxy/service-envoy.yaml`). This
+setup illustrates the advantage of running service Envoys collocated with your services: all
requests are handled by the service Envoy, and efficiently routed to your services.
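The two-tier routing described above is expressed in ordinary Envoy route configuration. A purely illustrative sketch of the idea (this is not the contents of the sandbox's actual files; field names follow Envoy's route configuration):

```yaml
# Illustrative sketch: a front-envoy route table that sends the
# /service/1 and /service/2 prefixes to per-service clusters. The real
# file is examples/front-proxy/front-envoy.yaml in the repo.
route_config:
  virtual_hosts:
  - name: backend
    domains: ["*"]
    routes:
    - match: { prefix: "/service/1" }
      route: { cluster: service1 }
    - match: { prefix: "/service/2" }
      route: { cluster: service2 }
```

Each service Envoy then has a similar table whose cluster points at the local Flask app over loopback.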
Running the Sandbox
~~~~~~~~~~~~~~~~~~~
@@ -31,14 +34,23 @@ as is described in the image above.
**Step 1: Install Docker**
-Ensure that you have a recent versions of ``docker`` and ``docker-compose`` installed.
+Ensure that you have recent versions of ``docker`` and ``docker-compose``.
A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
-**Step 2: Clone the Envoy repo, and start all of our containers**
+**Step 2: Clone the Envoy repo**
+
+If you have not cloned the Envoy repo, clone it with:
+
+``git clone git@github.com:envoyproxy/envoy``
+
+or
-If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy``
-or ``git clone https://github.com/envoyproxy/envoy.git``::
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Start all of our containers**
+
+.. code-block:: console
$ pwd
envoy/examples/front-proxy
$ docker-compose up -d
$ docker-compose ps
- Name Command State Ports
- ------------------------------------------------------------------------------------------------------------------------------------------------------
- front-proxy_front-envoy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8001->8001/tcp, 0.0.0.0:8443->8443/tcp
- front-proxy_service1_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
- front-proxy_service2_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
+ Name Command State Ports
+ ------------------------------------------------------------------------------------------------------------------------------------------------------
+ front-proxy_front-envoy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:8080->8080/tcp, 0.0.0.0:8001->8001/tcp, 0.0.0.0:8443->8443/tcp
+ front-proxy_service1_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp
+ front-proxy_service2_1 /bin/sh -c /usr/local/bin/ ...
Up 10000/tcp, 8000/tcp -**Step 3: Test Envoy's routing capabilities** +**Step 4: Test Envoy's routing capabilities** You can now send a request to both services via the ``front-envoy``. -For ``service1``:: +For ``service1``: + +.. code-block:: console $ curl -v localhost:8080/service/1 * Trying ::1... @@ -76,7 +90,9 @@ For ``service1``:: < Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4 -For ``service2``:: +For ``service2``: + +.. code-block:: console $ curl -v localhost:8080/service/2 * Trying ::1... @@ -99,7 +115,9 @@ For ``service2``:: Notice that each request, while sent to the front Envoy, was correctly routed to the respective application. -We can also use ``HTTPS`` to call services behind the front Envoy. For example, calling ``service1``:: +We can also use ``HTTPS`` to call services behind the front Envoy. For example, calling ``service1``: + +.. code-block:: console $ curl https://localhost:8443/service/1 -k -v * Trying ::1... @@ -142,16 +160,20 @@ We can also use ``HTTPS`` to call services behind the front Envoy. For example, < Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4 -**Step 4: Test Envoy's load balancing capabilities** +**Step 5: Test Envoy's load balancing capabilities** -Now let's scale up our ``service1`` nodes to demonstrate the load balancing abilities of Envoy:: +Now let's scale up our ``service1`` nodes to demonstrate the load balancing abilities of Envoy: + +.. code-block:: console $ docker-compose scale service1=3 Creating and starting example_service1_2 ... done Creating and starting example_service1_3 ... done Now if we send a request to ``service1`` multiple times, the front Envoy will load balance the -requests by doing a round robin of the three ``service1`` machines:: +requests by doing a round robin of the three ``service1`` machines: + +.. code-block:: console $ curl -v localhost:8080/service/1 * Trying ::1... 
@@ -170,6 +192,7 @@ requests by doing a round robin of the three ``service1`` machines::
< x-envoy-upstream-service-time: 6
<
Hello from behind Envoy (service 1)! hostname: 3dc787578c23 resolvedhostname: 192.168.160.6
+
$ curl -v localhost:8080/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
@@ -186,6 +209,7 @@ requests by doing a round robin of the three ``service1`` machines::
< date: Fri, 26 Aug 2018 19:40:22 GMT
<
Hello from behind Envoy (service 1)! hostname: 3a93ece62129 resolvedhostname: 192.168.160.5
+
$ curl -v localhost:8080/service/1
* Trying 192.168.99.100...
* Connected to 192.168.99.100 (192.168.99.100) port 8080 (#0)
@@ -204,33 +228,42 @@ requests by doing a round robin of the three ``service1`` machines::
<
Hello from behind Envoy (service 1)! hostname: 36418bc3c824 resolvedhostname: 192.168.160.4
-**Step 5: enter containers and curl services**
+**Step 6: enter containers and curl services**
In addition to using ``curl`` from your host machine, you can also enter the
containers themselves and ``curl`` from inside them. To enter a container you
can use ``docker-compose exec <container_name> /bin/bash``. For example we can
-enter the ``front-envoy`` container, and ``curl`` for services locally::
+enter the ``front-envoy`` container, and ``curl`` for services locally:
+
+.. code-block:: console
+
+ $ docker-compose exec front-envoy /bin/bash
+ root@81288499f9d7:/# curl localhost:8080/service/1
+ Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
+ root@81288499f9d7:/# curl localhost:8080/service/1
+ Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
+ root@81288499f9d7:/# curl localhost:8080/service/1
+ Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
+ root@81288499f9d7:/# curl localhost:8080/service/2
+ Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2
+
+**Step 7: enter container and curl admin**
+
+When Envoy runs, it also attaches an ``admin`` listener to your desired port.
+
+In the example configs the admin is bound to port ``8001``.
+
+We can ``curl`` it to gain useful information:
- $ docker-compose exec front-envoy /bin/bash
- root@81288499f9d7:/# curl localhost:8080/service/1
- Hello from behind Envoy (service 1)! hostname: 85ac151715c6 resolvedhostname: 172.19.0.3
- root@81288499f9d7:/# curl localhost:8080/service/1
- Hello from behind Envoy (service 1)! hostname: 20da22cfc955 resolvedhostname: 172.19.0.5
- root@81288499f9d7:/# curl localhost:8080/service/1
- Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6
- root@81288499f9d7:/# curl localhost:8080/service/2
- Hello from behind Envoy (service 2)! hostname: 92f4a3737bbc resolvedhostname: 172.19.0.2
+- ``/server_info`` provides information about the Envoy version you are running.
+- ``/stats`` provides statistics about the Envoy server.
-**Step 6: enter containers and curl admin**
+In the example, we can enter the ``front-envoy`` container to query admin:
-When Envoy runs it also attaches an ``admin`` to your desired port. In the example
-configs the admin is bound to port ``8001``. We can ``curl`` it to gain useful information.
-For example you can ``curl`` ``/server_info`` to get information about the
-Envoy version you are running. Additionally you can ``curl`` ``/stats`` to get
-statistics. For example inside ``front-envoy`` we can get::
+.. code-block:: console
- $ docker-compose exec front-envoy /bin/bash
- root@e654c2c83277:/# curl localhost:8001/server_info
+ $ docker-compose exec front-envoy /bin/bash
+ root@e654c2c83277:/# curl localhost:8001/server_info
.. code-block:: json
@@ -276,27 +309,27 @@ statistics. For example inside ``front-envoy`` we can get::
"uptime_all_epochs": "188s"
}
-..
code-block:: text - - root@e654c2c83277:/# curl localhost:8001/stats - cluster.service1.external.upstream_rq_200: 7 - ... - cluster.service1.membership_change: 2 - cluster.service1.membership_total: 3 - ... - cluster.service1.upstream_cx_http2_total: 3 - ... - cluster.service1.upstream_rq_total: 7 - ... - cluster.service2.external.upstream_rq_200: 2 - ... - cluster.service2.membership_change: 1 - cluster.service2.membership_total: 1 - ... - cluster.service2.upstream_cx_http2_total: 1 - ... - cluster.service2.upstream_rq_total: 2 - ... +.. code-block:: console + + root@e654c2c83277:/# curl localhost:8001/stats + cluster.service1.external.upstream_rq_200: 7 + ... + cluster.service1.membership_change: 2 + cluster.service1.membership_total: 3 + ... + cluster.service1.upstream_cx_http2_total: 3 + ... + cluster.service1.upstream_rq_total: 7 + ... + cluster.service2.external.upstream_rq_200: 2 + ... + cluster.service2.membership_change: 1 + cluster.service2.membership_total: 1 + ... + cluster.service2.upstream_cx_http2_total: 1 + ... + cluster.service2.upstream_rq_total: 2 + ... Notice that we can get the number of members of upstream clusters, number of requests fulfilled by them, information about http ingress, and a plethora of other useful stats. diff --git a/docs/root/start/sandboxes/grpc_bridge.rst b/docs/root/start/sandboxes/grpc_bridge.rst index aa61e60742699..318eff7d91117 100644 --- a/docs/root/start/sandboxes/grpc_bridge.rst +++ b/docs/root/start/sandboxes/grpc_bridge.rst @@ -8,62 +8,143 @@ Envoy gRPC The gRPC bridge sandbox is an example usage of Envoy's :ref:`gRPC bridge filter `. -Included in the sandbox is a gRPC in-memory Key/Value store with a Python HTTP -client. The Python client makes HTTP/1 requests through the Envoy sidecar -process which are upgraded into HTTP/2 gRPC requests. Response trailers are then -buffered and sent back to the client as a HTTP/1 header payload. 
+
+This is an example of a key-value store where an ``http``-based client CLI, written in ``Python``,
+updates a remote store, written in ``Go``, using the stubs generated for both languages.
+
+The client sends messages through a proxy that upgrades the HTTP requests from ``http/1.1`` to ``http/2``.
+
+``[client](http/1.1) -> [client-egress-proxy](http/2) -> [server-ingress-proxy](http/2) -> [server]``
Another Envoy feature demonstrated in this example is Envoy's ability to do authority-based routing
via its route configuration.
-Building the Go service
-~~~~~~~~~~~~~~~~~~~~~~~
-To build the Go gRPC service run::
+Running the Sandbox
+~~~~~~~~~~~~~~~~~~~
- $ pwd
- envoy/examples/grpc-bridge
- $ script/bootstrap.sh
- $ script/build.sh
+The following documentation runs through the setup of the services.
+
+**Step 1: Install Docker**
+
+Ensure that you have recent versions of ``docker`` and ``docker-compose``.
+
+A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
+
+**Step 2: Clone the Envoy repo**
+
+If you have not cloned the Envoy repo, clone it with:
+
+``git clone git@github.com:envoyproxy/envoy``
-Note: ``build.sh`` requires that your Envoy codebase (or a working copy thereof) is in ``$GOPATH/src/github.com/envoyproxy/envoy``.
+or
-Docker compose
-~~~~~~~~~~~~~~
+``git clone https://github.com/envoyproxy/envoy.git``
-To run the docker compose file, and set up both the Python and the gRPC containers
-run::
+**Step 3: Generate the protocol stubs**
+
+A docker-compose file is provided that generates the stubs for both ``client`` and ``server`` from the
+specification in the ``protos`` directory.
+
+Inspecting the ``docker-compose-protos.yaml`` file, you will see that it contains both the ``python``
+and ``go`` gRPC protoc commands necessary for generating the protocol stubs.
+
+Generate the stubs as follows:
+
+..
code-block:: console $ pwd envoy/examples/grpc-bridge - $ docker-compose pull - $ docker-compose up --build + $ docker-compose -f docker-compose-protos.yaml up + Starting grpc-bridge_stubs_python_1 ... done + Starting grpc-bridge_stubs_go_1 ... done + Attaching to grpc-bridge_stubs_go_1, grpc-bridge_stubs_python_1 + grpc-bridge_stubs_go_1 exited with code 0 + grpc-bridge_stubs_python_1 exited with code 0 + +You may wish to clean up left over containers with the following command: + +.. code-block:: console + + $ docker container prune + +You can view the generated ``kv`` modules for both the client and server in their +respective directories: + +.. code-block:: console + + $ ls -la client/kv/kv_pb2.py + -rw-r--r-- 1 mdesales CORP\Domain Users 9527 Nov 6 21:59 client/kv/kv_pb2.py + + $ ls -la server/kv/kv.pb.go + -rw-r--r-- 1 mdesales CORP\Domain Users 9994 Nov 6 21:59 server/kv/kv.pb.go + +These generated ``python`` and ``go`` stubs can be included as external modules. + +**Step 4: Start all of our containers** + +To build this sandbox example and start the example services, run the following commands: + +.. code-block:: console + + $ pwd + envoy/examples/grpc-bridge + $ docker-compose pull + $ docker-compose up --build -d + $ docker-compose ps + + Name Command State Ports + --------------------------------------------------------------------------------------------------------------------------------------- + grpc-bridge_grpc-client-proxy_1 /docker-entrypoint.sh /bin ... Up 10000/tcp, 0.0.0.0:9911->9911/tcp, 0.0.0.0:9991->9991/tcp + grpc-bridge_grpc-client_1 /bin/sh -c tail -f /dev/null Up + grpc-bridge_grpc-server-proxy_1 /docker-entrypoint.sh /bin ... 
Up 10000/tcp, 0.0.0.0:8811->8811/tcp, 0.0.0.0:8881->8881/tcp + grpc-bridge_grpc-server_1 /bin/sh -c /bin/server Up 0.0.0.0:8081->8081/tcp + Sending requests to the Key/Value store ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -To use the Python service and send gRPC requests:: +To use the Python service and send gRPC requests: + +.. code-block:: console $ pwd envoy/examples/grpc-bridge - # set a key + +Set a key: + +.. code-block:: console + $ docker-compose exec python /client/client.py set foo bar setf foo to bar - # get a key + +Get a key: + +.. code-block:: console + $ docker-compose exec python /client/client.py get foo bar - # modify an existing key +Modify an existing key: + +.. code-block:: console + $ docker-compose exec python /client/client.py set foo baz setf foo to baz - # get the modified key +Get the modified key: + +.. code-block:: console + $ docker-compose exec python /client/client.py get foo baz -In the running docker-compose container, you should see the gRPC service printing a record of its activity:: +In the running docker-compose container, you should see the gRPC service printing a record of its activity: + +.. code-block:: console + $ docker-compose logs grpc-server grpc_1 | 2017/05/30 12:05:09 set: foo = bar grpc_1 | 2017/05/30 12:05:12 get: foo grpc_1 | 2017/05/30 12:05:18 set: foo = baz diff --git a/docs/root/start/sandboxes/jaeger_native_tracing.rst b/docs/root/start/sandboxes/jaeger_native_tracing.rst index 5c41560d96c41..505ee4d4df423 100644 --- a/docs/root/start/sandboxes/jaeger_native_tracing.rst +++ b/docs/root/start/sandboxes/jaeger_native_tracing.rst @@ -17,8 +17,11 @@ native client instead of with Envoy's builtin Zipkin client has the following ad This sandbox is very similar to the front proxy architecture described above, with one difference: service1 makes an API call to service2 before returning a response. -The three containers will be deployed inside a virtual network called ``envoymesh``. 
(Note: the sandbox
-only works on x86-64).
+The three containers will be deployed inside a virtual network called ``envoymesh``.
+
+.. note::
+
+ The jaeger native tracing sandbox only works on x86-64.
All incoming requests are routed via the front Envoy, which is acting as a
reverse proxy sitting on the edge of the ``envoymesh`` network. Port ``8000`` is exposed
@@ -45,11 +48,29 @@ Running the Sandbox
~~~~~~~~~~~~~~~~~~~
The following documentation runs through the setup of an Envoy cluster organized
-as is described in the image above.
+as is described above.
+
+**Step 1: Install Docker**
+
+Ensure that you have recent versions of ``docker`` and ``docker-compose``.
+
+A simple way to achieve this is via the `Docker Desktop <https://www.docker.com/products/docker-desktop>`_.
+
+**Step 2: Clone the Envoy repo**
-**Step 1: Build the sandbox**
+If you have not cloned the Envoy repo, clone it with:
-To build this sandbox example, and start the example apps run the following commands::
+``git clone git@github.com:envoyproxy/envoy``
+
+or
+
+``git clone https://github.com/envoyproxy/envoy.git``
+
+**Step 3: Build the sandbox**
+
+To build this sandbox example and start the example apps, run the following commands:
+
+.. code-block:: console
$ pwd
envoy/examples/jaeger-native-tracing
@@ -64,9 +85,11 @@ To build this sandbox example, and start the example apps run the following comm
jaeger-native-tracing_service1_1 /start-service.sh Up 10000/tcp, 8000/tcp
jaeger-native-tracing_service2_1 /start-service.sh Up 10000/tcp, 8000/tcp
-**Step 2: Generate some load**
+**Step 4: Generate some load**
+
+You can now send a request to service1 via the front-envoy as follows:
-You can now send a request to service1 via the front-envoy as follows::
+.. code-block:: console
$ curl -v localhost:8000/trace/1
* Trying 192.168.99.100...
@@ -86,7 +109,7 @@ You can now send a request to service1 via the front-envoy as follows::
Hello from behind Envoy (service 1)!
hostname: f26027f1ce28 resolvedhostname: 172.19.0.6 * Connection #0 to host 192.168.99.100 left intact -**Step 3: View the traces in Jaeger UI** +**Step 5: View the traces in Jaeger UI** Point your browser to http://localhost:16686 . You should see the Jaeger dashboard. Set the service to "front-proxy" and hit 'Find Traces'. You should see traces from the front-proxy. diff --git a/docs/root/start/sandboxes/jaeger_tracing.rst b/docs/root/start/sandboxes/jaeger_tracing.rst index ce73e6679ddb0..0de59e9213f58 100644 --- a/docs/root/start/sandboxes/jaeger_tracing.rst +++ b/docs/root/start/sandboxes/jaeger_tracing.rst @@ -34,11 +34,29 @@ Running the Sandbox ~~~~~~~~~~~~~~~~~~~ The following documentation runs through the setup of an Envoy cluster organized -as is described in the image above. +as is described above. -**Step 1: Build the sandbox** +**Step 1: Install Docker** -To build this sandbox example, and start the example apps run the following commands:: +Ensure that you have recent versions of ``docker`` and ``docker-compose``. + +A simple way to achieve this is via the `Docker Desktop `_. + +**Step 2: Clone the Envoy repo** + +If you have not cloned the Envoy repo, clone it with: + +``git clone git@github.com:envoyproxy/envoy`` + +or + +``git clone https://github.com/envoyproxy/envoy.git`` + +**Step 3: Build the sandbox** + +To build this sandbox example and start the example apps, run the following commands: + +.. code-block:: console $ pwd envoy/examples/jaeger-tracing @@ -53,9 +71,11 @@ To build this sandbox example, and start the example apps run the following comm jaeger-tracing_service1_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp jaeger-tracing_service2_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp -**Step 2: Generate some load** +**Step 4: Generate some load** + +You can now send a request to service1 via the front-envoy as follows: -You can now send a request to service1 via the front-envoy as follows:: +..
code-block:: console $ curl -v localhost:8000/trace/1 * Trying 192.168.99.100... @@ -75,7 +95,7 @@ You can now send a request to service1 via the front-envoy as follows:: Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6 * Connection #0 to host 192.168.99.100 left intact -**Step 3: View the traces in Jaeger UI** +**Step 5: View the traces in Jaeger UI** Point your browser to http://localhost:16686 . You should see the Jaeger dashboard. Set the service to "front-proxy" and hit 'Find Traces'. You should see traces from the front-proxy. diff --git a/docs/root/start/sandboxes/load_reporting_service.rst b/docs/root/start/sandboxes/load_reporting_service.rst index c8ccf494959e2..f51bdb3192603 100644 --- a/docs/root/start/sandboxes/load_reporting_service.rst +++ b/docs/root/start/sandboxes/load_reporting_service.rst @@ -19,18 +19,25 @@ Running the Sandbox ~~~~~~~~~~~~~~~~~~~ The following documentation runs through the setup of an Envoy cluster organized -as is described in the image above. +as is described above. **Step 1: Install Docker** -Ensure that you have a recent version of ``docker`` and ``docker-compose`` installed. +Ensure that you have recent versions of ``docker`` and ``docker-compose``. A simple way to achieve this is via the `Docker Desktop `_. -**Step 2: Clone the Envoy repo, and start all of our containers** +**Step 2: Clone the Envoy repo** -If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy`` -or ``git clone https://github.com/envoyproxy/envoy.git`` +If you have not cloned the Envoy repo, clone it with: + +``git clone git@github.com:envoyproxy/envoy`` + +or + +``git clone https://github.com/envoyproxy/envoy.git`` + +**Step 3: Build the sandbox** Terminal 1 :: @@ -52,7 +59,7 @@ Terminal 2 :: load-reporting-service_http_service_2 /bin/sh -c /usr/local/bin/ ...
Up 10000/tcp, 0.0.0.0:81->80/tcp, 0.0.0.0:8082->8081/tcp load-reporting-service_lrs_server_1 go run main.go Up 0.0.0.0:18000->18000/tcp -**Step 3: Start sending stream of HTTP requests** +**Step 4: Start sending a stream of HTTP requests** Terminal 2 :: @@ -62,7 +69,7 @@ Terminal 2 :: The script above (``send_requests.sh``) sends requests randomly to each Envoy, which in turn forwards the requests to the backend service. -**Step 4: See Envoy Stats** +**Step 5: See Envoy Stats** You should see @@ -84,4 +91,3 @@ Terminal 1 :: ............................ lrs_server_1 | 2020/02/12 17:09:09 Got stats from cluster `http_service` node `0022a319e1e2` - cluster_name:"local_service" upstream_locality_stats: total_successful_requests:3 total_issued_requests:3 > load_report_interval: lrs_server_1 | 2020/02/12 17:09:09 Got stats from cluster `http_service` node `2417806c9d9a` - cluster_name:"local_service" upstream_locality_stats: total_successful_requests:9 total_issued_requests:9 > load_report_interval: - diff --git a/docs/root/start/sandboxes/lua.rst b/docs/root/start/sandboxes/lua.rst index 3a9b5c75cf91f..42492506e6461 100644 --- a/docs/root/start/sandboxes/lua.rst +++ b/docs/root/start/sandboxes/lua.rst @@ -20,12 +20,17 @@ Ensure that you have a recent versions of ``docker`` and ``docker-compose``. A simple way to achieve this is via the `Docker Desktop `_. -**Step 2: Clone the Envoy repo and start all of our containers** +**Step 2: Clone the Envoy repo** -If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy`` -or ``git clone https://github.com/envoyproxy/envoy.git`` +If you have not cloned the Envoy repo, clone it with: -Terminal 1 +``git clone git@github.com:envoyproxy/envoy`` + +or + +``git clone https://github.com/envoyproxy/envoy.git`` + +**Step 3: Build the sandbox** ..
code-block:: console @@ -40,7 +45,7 @@ Terminal 1 lua_proxy_1 /docker-entrypoint.sh /bin Up 10000/tcp, 0.0.0.0:8000->8000/tcp, 0.0.0.0:8001->8001/tcp lua_web_service_1 node ./index.js Up 0.0.0.0:8080->80/tcp -**Step 3: Send a request to the service** +**Step 4: Send a request to the service** The output from the ``curl`` command below should include the headers ``foo``. diff --git a/docs/root/start/sandboxes/mysql.rst b/docs/root/start/sandboxes/mysql.rst index b043a8faf90e4..4c194d6d37c3e 100644 --- a/docs/root/start/sandboxes/mysql.rst +++ b/docs/root/start/sandboxes/mysql.rst @@ -6,10 +6,11 @@ MySQL Filter In this example, we show how the :ref:`MySQL filter ` can be used with the Envoy proxy. The Envoy proxy configuration includes a MySQL filter that parses queries and collects MySQL-specific metrics. + Running the Sandboxes ~~~~~~~~~~~~~~~~~~~~~ -The following documentation runs through the setup of both services. +The following documentation runs through the setup of the services. **Step 1: Install Docker** @@ -17,11 +18,17 @@ Ensure that you have a recent versions of ``docker`` and ``docker-compose``. A simple way to achieve this is via the `Docker Desktop `_. -**Step 2: Clone the Envoy repo and start all of our containers** +**Step 2: Clone the Envoy repo** + +If you have not cloned the Envoy repo, clone it with: + +``git clone git@github.com:envoyproxy/envoy`` + +or -If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy`` -or ``git clone https://github.com/envoyproxy/envoy.git`` +``git clone https://github.com/envoyproxy/envoy.git`` +**Step 3: Build the sandbox** Terminal 1 @@ -39,7 +46,7 @@ Terminal 1 mysql_proxy_1 /docker-entrypoint.sh /bin Up 10000/tcp, 0.0.0.0:1999->1999/tcp, 0.0.0.0:8001->8001/tcp -**Step 3: Issue commands using mysql** +**Step 4: Issue commands using mysql** Use ``mysql`` to issue some commands and verify they are routed via Envoy. 
Note that the current implementation of the protocol filter was tested with MySQL @@ -83,7 +90,7 @@ Terminal 1 mysql> exit Bye -**Step 4: Check egress stats** +**Step 5: Check egress stats** Check egress stats were updated. @@ -102,7 +109,7 @@ Terminal 1 mysql.egress_mysql.sessions: 1 mysql.egress_mysql.upgraded_to_ssl: 0 -**Step 5: Check TCP stats** +**Step 6: Check TCP stats** Check TCP stats were updated. @@ -121,4 +128,4 @@ Terminal 1 tcp.mysql_tcp.downstream_flow_control_resumed_reading_total: 0 tcp.mysql_tcp.idle_timeout: 0 tcp.mysql_tcp.upstream_flush_active: 0 - tcp.mysql_tcp.upstream_flush_total: 0 \ No newline at end of file + tcp.mysql_tcp.upstream_flush_total: 0 diff --git a/docs/root/start/sandboxes/redis.rst b/docs/root/start/sandboxes/redis.rst index 46aad117f1cbb..fcd3211af5729 100644 --- a/docs/root/start/sandboxes/redis.rst +++ b/docs/root/start/sandboxes/redis.rst @@ -5,10 +5,11 @@ Redis Filter In this example, we show how a :ref:`Redis filter ` can be used with the Envoy proxy. The Envoy proxy configuration includes a Redis filter that routes egress requests to redis server. + Running the Sandboxes ~~~~~~~~~~~~~~~~~~~~~ -The following documentation runs through the setup of both services. +The following documentation runs through the setup of the services. **Step 1: Install Docker** @@ -16,10 +17,17 @@ Ensure that you have a recent versions of ``docker`` and ``docker-compose``. A simple way to achieve this is via the `Docker Desktop `_. 
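The MySQL egress and TCP stats shown above are served by Envoy's admin endpoint as plain ``name: value`` text. As an aside (not part of the patch), a minimal sketch of parsing that format into a dictionary; the ``parse_envoy_stats`` helper is hypothetical, and in practice the text would come from fetching ``http://localhost:8001/stats``:

```python
def parse_envoy_stats(text: str) -> dict:
    """Parse Envoy admin '/stats' plain-text output ("name: value" lines),
    keeping only the integer-valued counters and gauges."""
    stats = {}
    for line in text.splitlines():
        name, sep, value = line.strip().partition(": ")
        if sep and value.isdigit():
            stats[name] = int(value)
    return stats

# Sample lines copied from the MySQL sandbox output above.
sample = """\
mysql.egress_mysql.sessions: 1
mysql.egress_mysql.upgraded_to_ssl: 0
tcp.mysql_tcp.idle_timeout: 0
"""

stats = parse_envoy_stats(sample)
assert stats["mysql.egress_mysql.sessions"] == 1
```

Histogram and text-valued stats are skipped here; a real client would handle those forms as well.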
-**Step 2: Clone the Envoy repo and start all of our containers** +**Step 2: Clone the Envoy repo** + +If you have not cloned the Envoy repo, clone it with: + +``git clone git@github.com:envoyproxy/envoy`` + +or + +``git clone https://github.com/envoyproxy/envoy.git`` -If you have not cloned the Envoy repo, clone it with ``git clone git@github.com:envoyproxy/envoy`` -or ``git clone https://github.com/envoyproxy/envoy.git`` +**Step 3: Build the sandbox** Terminal 1 @@ -36,7 +44,7 @@ Terminal 1 redis_proxy_1 /docker-entrypoint.sh /bin Up 10000/tcp, 0.0.0.0:1999->1999/tcp, 0.0.0.0:8001->8001/tcp redis_redis_1 docker-entrypoint.sh redis Up 0.0.0.0:6379->6379/tcp -**Step 3: Issue Redis commands** +**Step 4: Issue Redis commands** Issue Redis commands using your favourite Redis client, such as ``redis-cli``, and verify they are routed via Envoy. @@ -53,11 +61,11 @@ Terminal 1 $ redis-cli -h localhost -p 1999 get bar "bar" -**Step 4: Verify egress stats** +**Step 5: Verify egress stats** Go to ``http://localhost:8001/stats?usedonly&filter=redis.egress_redis.command`` and verify the following stats: .. code-block:: none redis.egress_redis.command.get.total: 2 - redis.egress_redis.command.set.total: 2 \ No newline at end of file + redis.egress_redis.command.set.total: 2 diff --git a/docs/root/start/sandboxes/zipkin_tracing.rst b/docs/root/start/sandboxes/zipkin_tracing.rst index 649e78bffacd5..150089fb45fe6 100644 --- a/docs/root/start/sandboxes/zipkin_tracing.rst +++ b/docs/root/start/sandboxes/zipkin_tracing.rst @@ -34,11 +34,29 @@ Running the Sandbox ~~~~~~~~~~~~~~~~~~~ The following documentation runs through the setup of an Envoy cluster organized -as is described in the image above. +as is described above. -**Step 1: Build the sandbox** +**Step 1: Install Docker** -To build this sandbox example, and start the example apps run the following commands:: +Ensure that you have recent versions of ``docker`` and ``docker-compose``.
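The ``filter`` query parameter used in the Redis example above (``/stats?usedonly&filter=redis.egress_redis.command``) restricts the admin output to stat names matching a regular expression. A rough sketch of that behaviour on an already-parsed stats dictionary; ``filter_stats`` is an illustrative helper, not Envoy's implementation:

```python
import re

def filter_stats(stats: dict, pattern: str) -> dict:
    """Keep only the stats whose name matches the given regex, mimicking
    the admin endpoint's ``filter`` query parameter."""
    regex = re.compile(pattern)
    return {name: value for name, value in stats.items() if regex.search(name)}

# Stats taken from the Redis sandbox output above, plus one unrelated entry.
stats = {
    "redis.egress_redis.command.get.total": 2,
    "redis.egress_redis.command.set.total": 2,
    "tcp.mysql_tcp.idle_timeout": 0,
}
assert filter_stats(stats, "redis.egress_redis.command") == {
    "redis.egress_redis.command.get.total": 2,
    "redis.egress_redis.command.set.total": 2,
}
```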
+ +A simple way to achieve this is via the `Docker Desktop `_. + +**Step 2: Clone the Envoy repo** + +If you have not cloned the Envoy repo, clone it with: + +``git clone git@github.com:envoyproxy/envoy`` + +or + +``git clone https://github.com/envoyproxy/envoy.git`` + +**Step 3: Build the sandbox** + +To build this sandbox example and start the example apps, run the following commands: + +.. code-block:: console $ pwd envoy/examples/zipkin-tracing @@ -53,9 +71,11 @@ To build this sandbox example, and start the example apps run the following comm zipkin-tracing_service2_1 /bin/sh -c /usr/local/bin/ ... Up 10000/tcp, 8000/tcp zipkin-tracing_zipkin_1 /busybox/sh run.sh Up 9410/tcp, 0.0.0.0:9411->9411/tcp -**Step 2: Generate some load** +**Step 4: Generate some load** + +You can now send a request to service1 via the front-envoy as follows: -You can now send a request to service1 via the front-envoy as follows:: +.. code-block:: console $ curl -v localhost:8000/trace/1 * Trying 192.168.99.100... @@ -75,7 +95,7 @@ You can now send a request to service1 via the front-envoy as follows:: Hello from behind Envoy (service 1)! hostname: f26027f1ce28 resolvedhostname: 172.19.0.6 * Connection #0 to host 192.168.99.100 left intact -**Step 3: View the traces in Zipkin UI** +**Step 5: View the traces in Zipkin UI** Point your browser to http://localhost:9411 . You should see the Zipkin dashboard. Set the service to "front-proxy" and set the start time to a few minutes before
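For the Jaeger and Zipkin sandboxes above, service1's call to service2 only joins the same trace because the service copies Envoy's tracing headers from the inbound request onto its outbound request. A minimal sketch of that propagation step, using the B3 header names Envoy forwards; the request dictionaries and the ``propagate_trace_headers`` helper are illustrative, not the sandbox's actual service code:

```python
# B3/Envoy tracing headers that a service should forward unchanged
# so that its upstream calls appear as child spans of the same trace.
TRACE_HEADERS = [
    "x-request-id",
    "x-b3-traceid",
    "x-b3-spanid",
    "x-b3-parentspanid",
    "x-b3-sampled",
    "x-b3-flags",
    "x-ot-span-context",
]

def propagate_trace_headers(inbound_headers: dict) -> dict:
    """Return the subset of inbound headers to attach to the outbound call."""
    return {
        name: value
        for name, value in inbound_headers.items()
        if name.lower() in TRACE_HEADERS
    }

# Hypothetical inbound request headers as seen by service1.
inbound = {
    "Host": "service1",
    "X-Request-Id": "example-request-id",
    "X-B3-TraceId": "abcd1234",
    "X-B3-SpanId": "ef567890",
}
out = propagate_trace_headers(inbound)
assert "X-B3-TraceId" in out and "Host" not in out
```

Without this step each hop would start a fresh trace, and the Jaeger/Zipkin UI would show disconnected spans instead of one request tree.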