THIS REPO IS CURRENTLY IN PREVIEW. THE APIS ARE NOT FINAL AND ARE SUBJECT TO CHANGE WITHOUT NOTICE.
Libnasp is an open-source, lightweight library to expand service mesh capabilities to non-cloud environments by getting rid of the complexity of operating dedicated network proxies. It is not meant to be a complete service mesh replacement, but rather an extension. It integrates well with an Istio control plane, so applications using Libnasp can be handled as standard Istio workloads.
Libnasp offers the most common functionality of a sidecar proxy, so it eases the developer burden for traffic management, observability, and security. Its range of capabilities includes:
- Identity and network traffic security using mutual TLS
- Automatic traffic management features, like HTTP, or gRPC load balancing
- Transparent observability of network traffic, especially standard Istio metrics
- Dynamic configuration through xDS support
To learn more about why we created Libnasp, and where it could help, read our introduction blog.
Libnasp is primarily a library that can be included in application code to provide service mesh functionality without operating proxies. But it follows a hybrid model: it still communicates with a central Istio control plane, and configuration is still declarative and centrally managed. Service discovery, certificate management, and configuration logic aren't delegated to the applications as they would be in a pure library model. These remain the same as with standard Istio - though somewhat limited in functionality.
Therefore Libnasp has a minimal component running in a Kubernetes cluster next to an existing Istio control plane. This component is called Heimdall, and it is the main entry point for a Libnasp-enabled application to an existing service mesh. Heimdall automatically injects Istio configuration - like workload metadata, network and mesh settings, or where to receive certificates from - in applications that use Libnasp.
Libnasp workloads can run in or outside the service mesh, or the Kubernetes cluster.
The quick start will guide you through a common Libnasp setup, where an external, Libnasp-enabled application is connected to an Istio service mesh running in a Kubernetes cluster. At the end of the example you'll see how a Golang application running on your local machine can send HTTP requests to internal Kubernetes workloads through standard Istio service discovery, using mTLS, but without running an Envoy proxy next to it.
First, we'll need to set up the Kubernetes environment with Istio and the necessary Libnasp components.
To make it easier, we wrote a script that creates a local kind cluster and installs all the requirements.
To get started, simply run the `deploy-kind.sh` script in the `test` directory:

```shell
./test/deploy-kind.sh
```
Once the script has finished, check that everything is up and running:
- A standard Istio installation with a control plane and a mesh expansion gateway. By default, Istio is managed by our istio-operator, but upstream Istio installations can also be used with Libnasp.
```shell
> k get pods -n istio-system
NAME                                             READY   STATUS    RESTARTS   AGE
istio-meshexpansion-icp-v115x-7c68bddf4c-zlphw   1/1     Running   0          69m
istio-operator-74c544fd8c-vr98s                  2/2     Running   0          73m
istiod-icp-v115x-7577bf56d-q4r4s                 1/1     Running   0          69m
```
- A Heimdall deployment and a Heimdall gateway. Heimdall is the main entry point for a Libnasp-enabled application to an existing service mesh. Heimdall eliminates the need for developers to manually configure Istio settings for applications that use Libnasp by authenticating clients and automatically injecting configuration. Read more about Heimdall in its documentation.
```shell
> k get pods -n heimdall
NAME                          READY   STATUS    RESTARTS      AGE
heimdall-f97745497-jvzws      3/3     Running   2 (69m ago)   69m
heimdall-gw-74c79d5c8-6r6qp   1/1     Running   0             69m
```
- An `echo` service that's running with an Istio sidecar proxy. It is a test deployment in our Kubernetes cluster to which we'll send HTTP requests from an external Libnasp-enabled client.

```shell
> k get pods -n testing
NAME                   READY   STATUS    RESTARTS   AGE
echo-5d44b7fbd5-ldrcj   3/3     Running   0          77m
```
Now that we have our Kubernetes environment ready, we'll start a Golang client on our local machine and send some HTTP requests to the `echo` service in Kubernetes.
To do that, simply go into the `examples/http` folder and run the following `make` command:

```shell
make CLIENT_REQUEST_URL="http://echo.testing" run-client
```
You should see logs containing the (empty) response and a few interesting headers. They show that the request URI was the same service address that you would use inside the cluster and that our client has sent a client certificate that was accepted by the Istio proxy inside the cluster:
```
Hostname: echo-5d44b7fbd5-ldrcj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.20.2 - lua: 10020

Request Information:
	client_address=127.0.0.6
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://echo.testing:8080/

Request Headers:
	accept-encoding=gzip
	host=echo.testing
	user-agent=Go-http-client/1.1
	x-b3-sampled=0
	x-b3-spanid=d13c98a43a1705fb
	x-b3-traceid=92f47b07527a04bed13c98a43a1705fb
	x-forwarded-client-cert=By=spiffe://cluster.local/ns/testing/sa/default;Hash=df95762fffe354c4ba1e4e70130c8d9e3209f0f9946ef0ef9b28e66f05c11cc0;Subject="";URI=spiffe://cluster.local/ns/external/sa/test-http
	x-forwarded-proto=https
	x-request-id=ae071df5-e2f4-4ba0-a200-9e72b9827db5

Request Body:
	-no body in request-
```
Let's see what's inside our Makefile and our Golang code to understand what happened above.
- Makefile
The Makefile is very simple: it calls our Golang application with `go run` and passes it the `CLIENT_REQUEST_URL` we've specified above.
The interesting part is that it gets a Libnasp authentication token from a Kubernetes secret and passes it to our application:

```makefile
NASP_AUTH_TOKEN ?= $(shell kubectl -n external get secret -l nasp.k8s.cisco.com/workloadgroup=test-http -o jsonpath='{@.items[0].data.token}' | base64 -d)
```
The secret is used by the Libnasp library to get the necessary Istio configuration (what network, cluster, or mesh to join, or where to get the required certificates from) for the application from Heimdall.
But where is this secret coming from? If you take a look at the end of our init script, you'll see that we've created a few `WorkloadGroup` resources.
These are standard Istio resources used to describe a set of non-Kubernetes workloads.
Heimdall watches these resources and creates corresponding access tokens in Kubernetes secrets for the related external workloads.
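For illustration, such a `WorkloadGroup` for the example's external HTTP client might look roughly like the sketch below. The name, namespace, and service account are inferred from the secret label and the SPIFFE URI (`spiffe://cluster.local/ns/external/sa/test-http`) seen in the response headers above; the exact resource created by the init script may differ:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  name: test-http
  namespace: external
spec:
  metadata:
    labels:
      app: test-http
  template:
    # The workload identity (SPIFFE URI) of the external client is derived
    # from this namespace and service account.
    serviceAccount: test-http
```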
For our example, Heimdall needs to be accessible from outside of the cluster. Usually, that is achieved through a Heimdall gateway, whose address is configurable through Libnasp. We haven't specified the address in the Makefile because we're using the default value to reach it.
- Golang code
Now let's see what we did in our Golang code to make the HTTP requests we're sending go through Libnasp.
First, we import the library:

```go
import (
	"github.com/cisco-open/libnasp/pkg/istio"
	"github.com/cisco-open/libnasp/pkg/network"
)
```
Then we create and start an `IstioIntegrationHandler`:

```go
istioHandlerConfig := &istio.IstioIntegrationHandlerConfig{
	UseTLS:              true,
	IstioCAConfigGetter: istio.IstioCAConfigGetterHeimdall(ctx, heimdallURL, authToken, "v1"),
}

iih, err := istio.NewIstioIntegrationHandler(istioHandlerConfig, logger)
if err != nil {
	panic(err)
}

if err := iih.Run(ctx); err != nil {
	panic(err)
}
```
And finally, we send the HTTP request through the Libnasp transport layer we get from the `IstioIntegrationHandler`:

```go
transport, err := iih.GetHTTPTransport(http.DefaultTransport)
if err != nil {
	panic(err)
}

httpClient := &http.Client{
	Transport: transport,
}

request, err := http.NewRequest("GET", url, nil)
if err != nil {
	return err
}

response, err := httpClient.Do(request)
if err != nil {
	return err
}
```
In the `examples` directory we have similar examples of how to use Libnasp with HTTP, gRPC, or TCP connections from Golang.
All examples can be run either as servers or clients. Refer to the Makefiles on how to run these examples, and check out the Golang code to learn how to use Libnasp for specific protocols.
The core code of Libnasp is written in Golang, therefore the main examples are also written in Go. However, Libnasp could also be imported from other languages with the help of C bindings generated from the core codebase. A thin shim layer still has to be written for specific platforms, but the core functionality is unchanged.
Currently, supported languages and frameworks are:
- Java frameworks: Spring and Nio
- Mobile platforms: iOS and Android
- Python
We use GitHub to track issues and accept contributions. If you'd like to raise an issue or open a pull request with changes, refer to our contribution guide.