

Monitoring Envoy and Edge Stack on Kubernetes with the Prometheus Operator

Jake Beck
February 7, 2017 | 7 min read

In the Kubernetes ecosystem, one of the emerging themes is how applications can best take advantage of the various capabilities of Kubernetes. The Kubernetes community has also introduced new concepts, such as Custom Resources, to make it easier to build Kubernetes-native software.

In late 2016, CoreOS introduced the Operator pattern and released the Prometheus Operator as a working pattern example. The Prometheus Operator automatically creates and manages Prometheus monitoring instances.

The operator model is especially powerful for cloud native organizations deploying multiple services. In this model, each team can deploy its own Prometheus instance as necessary, instead of relying on a central SRE team to implement monitoring.

Envoy, Ambassador, and Prometheus

In this tutorial, we'll show how the Prometheus Operator can be used to monitor an Envoy proxy deployed at the edge. Envoy is an open source L7 proxy. One of the many reasons for Envoy's growing popularity is its emphasis on observability: Envoy emits its metrics in StatsD format.
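
To make that concrete, StatsD is a simple text protocol of name:value|type lines. A hypothetical Envoy counter might look like the first line below on the wire, and the statsd exporter we'll deploy later would re-expose it to Prometheus roughly as the second line (the exact names depend on your Envoy configuration and the exporter's mapping rules):

# StatsD wire format (sent over UDP); a hypothetical Envoy counter:
envoy.cluster.upstream_rq_total:1|c

# Roughly how the exporter re-exposes it in Prometheus format:
envoy_cluster_upstream_rq_total 1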

Instead of using Envoy directly, we'll use Edge Stack. Edge Stack is a Kubernetes-native API Gateway built on Envoy. Similar to the Prometheus Operator, Ambassador configures and manages Envoy instances in Kubernetes so that the end user doesn't need to do that work directly.

Prerequisites

This tutorial assumes you're running Kubernetes 1.8 or later, with RBAC enabled.

Note: If you're running on Google Kubernetes Engine, you'll need to grant cluster-admin privileges to the account that will be installing Prometheus and Ambassador. You can do this with the commands below:

$ gcloud info | grep Account
Account: [username@example.org]
$ kubectl create clusterrolebinding my-cluster-admin-binding --clusterrole=cluster-admin --user=username@example.org

Deploy the Prometheus Operator

The Prometheus Operator is configured as a Kubernetes Deployment. We'll first deploy the Prometheus Operator.

apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus-operator
subjects:
- kind: ServiceAccount
  name: prometheus-operator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus-operator
rules:
- apiGroups:
  - extensions
  resources:
  - thirdpartyresources
  verbs: ["*"]
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs: ["*"]
- apiGroups:
  - monitoring.coreos.com
  resources:
  - alertmanagers
  - prometheuses
  - servicemonitors
  verbs: ["*"]
- apiGroups:
  - apps
  resources:
  - statefulsets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - configmaps
  - secrets
  verbs: ["*"]
- apiGroups: [""]
  resources:
  - pods
  verbs: ["list", "delete"]
- apiGroups: [""]
  resources:
  - services
  - endpoints
  verbs: ["get", "create", "update"]
- apiGroups: [""]
  resources:
  - nodes
  verbs: ["list", "watch"]
- apiGroups: [""]
  resources:
  - namespaces
  verbs: ["list"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    k8s-app: prometheus-operator
  name: prometheus-operator
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: prometheus-operator
    spec:
      containers:
      - args:
        - --kubelet-service=kube-system/kubelet
        - --config-reloader-image=quay.io/coreos/configmap-reload:v0.0.1
        image: quay.io/coreos/prometheus-operator:v0.15.0
        name: prometheus-operator
        ports:
        - containerPort: 8080
          name: http
        resources:
          limits:
            cpu: 200m
            memory: 100Mi
          requests:
            cpu: 100m
            memory: 50Mi
      serviceAccountName: prometheus-operator

kubectl apply -f prom-operator.yaml
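
To verify that the operator has come up, you can look for its pod using the k8s-app label from the Deployment above:

kubectl get pods -l k8s-app=prometheus-operator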

We'll also want to create an additional ServiceAccount, along with the necessary RBAC rules, for the actual Prometheus instances.

apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: default

kubectl apply -f prom-rbac.yaml

The Operator functions as your virtual SRE. At all times, the Prometheus Operator ensures that you have a set of Prometheus servers running with the appropriate configuration.

Deploy Ambassador

Ambassador also functions as your virtual SRE. At all times, Ambassador ensures that you have a set of Envoy proxies running with the appropriate configuration.

We're going to deploy Ambassador into Kubernetes. On each Ambassador pod, we'll also deploy an additional container that runs the Prometheus StatsD exporter. The exporter collects the StatsD metrics emitted by Envoy over UDP and re-exposes them to Prometheus over TCP in the Prometheus metrics format.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador-admin
  name: ambassador-admin
spec:
  type: NodePort
  ports:
  - name: ambassador-admin
    port: 8877
    targetPort: 8877
  selector:
    service: ambassador
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ambassador
rules:
- apiGroups: [""]
  resources:
  - services
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["create", "update", "patch", "get", "list", "watch"]
- apiGroups: [""]
  resources:
  - secrets
  verbs: ["get", "list", "watch"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ambassador
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ambassador
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ambassador
subjects:
- kind: ServiceAccount
  name: ambassador
  namespace: default
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ambassador
spec:
  replicas: 1
  template:
    metadata:
      labels:
        service: ambassador
    spec:
      serviceAccountName: ambassador
      containers:
      - name: ambassador
        image: datawire/ambassador:0.21.0
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 1
            memory: 400Mi
          requests:
            cpu: 200m
            memory: 100Mi
        env:
        - name: AMBASSADOR_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        livenessProbe:
          httpGet:
            path: /ambassador/v0/check_alive
            port: 8877
          initialDelaySeconds: 3
          periodSeconds: 3
        readinessProbe:
          httpGet:
            path: /ambassador/v0/check_ready
            port: 8877
          initialDelaySeconds: 3
          periodSeconds: 3
      - name: statsd-sink
        image: datawire/prom-statsd-exporter:0.6.0
      restartPolicy: Always

kubectl apply -f ambassador-rbac.yaml
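
Each Ambassador pod runs two containers, ambassador itself and the statsd-sink sidecar, so a quick sanity check is to confirm the pods report both containers ready (2/2):

kubectl get pods -l service=ambassador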

Ambassador is typically deployed as an API Gateway at the edge of your network. We'll deploy a service to map to the Ambassador Deployment. Note: if you're not on AWS or GKE, you'll need to update the service below to be a NodePort instead of a LoadBalancer.

---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador

kubectl apply -f ambassador.yaml

You should now have Ambassador and the StatsD/Prometheus exporter running, with Ambassador accessible from outside your cluster.

Configure Prometheus

We now have Ambassador/Envoy running, along with the Prometheus Operator. How do we hook this all together? Logically, all the metrics data flows from Envoy to Prometheus in the following way:

Envoy → StatsD exporter → Prometheus

So far, we've deployed Envoy and the StatsD exporter, so now it's time to deploy the other components of this flow.

We'll first create a Kubernetes service that points to the StatsD exporter. We'll then create a ServiceMonitor that tells Prometheus to add the service as a target.

---
apiVersion: v1
kind: Service
metadata:
  name: ambassador-monitor
  labels:
    service: ambassador-monitor
spec:
  selector:
    service: ambassador
  type: ClusterIP
  clusterIP: None
  ports:
  - name: prometheus-metrics
    port: 9102
    targetPort: 9102
    protocol: TCP
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: ambassador-monitor
  labels:
    ambassador: monitoring
spec:
  selector:
    matchLabels:
      service: ambassador-monitor
  endpoints:
  - port: prometheus-metrics

kubectl apply -f statsd-sink-svc.yaml
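
Since the ServiceMonitor is a Custom Resource, you can inspect it with kubectl just like the service:

kubectl get service ambassador-monitor
kubectl get servicemonitor ambassador-monitor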

Next, we need to tell the Prometheus Operator to create a Prometheus cluster for us. The Prometheus cluster is configured to collect data from any ServiceMonitor with the ambassador: monitoring label.

apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus
spec:
  serviceAccountName: prometheus
  serviceMonitorSelector:
    matchLabels:
      ambassador: monitoring
  resources:
    requests:
      memory: 400Mi

kubectl apply -f prometheus.yaml
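
Behind the scenes, the operator reacts to this resource by spinning up an actual Prometheus server. You can watch for both the custom resource and the server pod; the operator labels the pod with the name of the Prometheus resource, the same prometheus: prometheus label the service below selects on:

# The Prometheus custom resource we just created:
kubectl get prometheus
# The server pod the operator creates for it:
kubectl get pods -l prometheus=prometheus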

Finally, we can create a service to expose Prometheus to the rest of the world. Again, if you're not on AWS or GKE, you'll want to use a NodePort instead of a LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  name: prometheus
spec:
  type: LoadBalancer
  ports:
  - name: web
    port: 9090
    protocol: TCP
    targetPort: web
  selector:
    prometheus: prometheus

kubectl apply -f prom-svc.yaml

Testing

We've now configured Prometheus to monitor Envoy, so let's test it out. Get the external IP address for Prometheus:

$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   <nodes>          8877:30366/TCP   3h
ambassador-monitor    None            <none>           9102/TCP         3h
kubernetes            10.11.240.1     <none>           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            <none>           9090/TCP         3h

In the example above, this is 35.191.39.173. Now, go to http://$PROM_IP:9090 to see the Prometheus UI. You should see a number of metrics automatically populate in Prometheus.
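
If you're not sure what to look for first, try one of Envoy's counters in the expression browser. As an illustrative example (the exact metric names depend on the statsd exporter's mapping of StatsD names), a query along these lines should return a time series per cluster:

# Hypothetical PromQL query; actual names depend on the exporter mapping.
envoy_cluster_upstream_rq_total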

Troubleshooting

If the above doesn't work, there are a few things to investigate:

  • Make sure all your pods are running (kubectl get pods)
  • Check the logs on the Prometheus cluster (kubectl logs $PROM_POD prometheus)
  • Check Ambassador diagnostics to verify Ambassador is working correctly
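
You can also check that the StatsD exporter itself is serving metrics. A quick sketch, assuming $AMBASSADOR_POD holds the name of one of your Ambassador pods:

# Forward the exporter's metrics port from an Ambassador pod:
kubectl port-forward $AMBASSADOR_POD 9102
# In another terminal, fetch the metrics the exporter exposes:
curl http://localhost:9102/metrics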

Get a service running in Envoy

The metrics so far haven't been very interesting, since we haven't routed any traffic through Envoy. We'll use Ambassador to set up a route from Envoy to the httpbin service. Ambassador is configured through Kubernetes annotations, so we'll add the route as an annotation on the service itself.

apiVersion: v1
kind: Service
metadata:
  name: httpbin
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  ports:
  - port: 80

kubectl apply -f httpbin.yaml

Now, if we get the external IP address of Ambassador, we can route requests through Ambassador to the httpbin service:

$ kubectl get services
NAME                  CLUSTER-IP      EXTERNAL-IP      PORT(S)          AGE
ambassador            10.11.255.93    35.221.115.102   80:32079/TCP     3h
ambassador-admin      10.11.246.117   <nodes>          8877:30366/TCP   3h
ambassador-monitor    None            <none>           9102/TCP         3h
kubernetes            10.11.240.1     <none>           443/TCP          3h
prometheus            10.11.254.180   35.191.39.173    9090:32134/TCP   3h
prometheus-operated   None            <none>           9090/TCP         3h
$ curl http://35.221.115.102/httpbin/ip
{
  "origin": "35.214.10.110"
}

Run the curl command a few times, as shown above. Going back to the Prometheus dashboard, you'll see that a bevy of new metrics containing httpbin has appeared. Pick any of these metrics to explore further, as in the example below. For more information on Envoy stats, Matt Klein has written a detailed overview of Envoy's stats architecture. If you are interested in setting up a Grafana dashboard, Alex Gervais has published a sample Grafana/Ambassador dashboard.
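
As a worked example, you could graph the request rate through the new route. The exact metric name depends on the exporter's mapping and on how Envoy names the httpbin cluster, so treat this as a sketch and substitute one of the httpbin counters you actually see in the dashboard:

# Hypothetical PromQL: per-second request rate over the last minute.
rate(envoy_cluster_httpbin_upstream_rq_total[1m])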

Conclusion

Microservices, as you know, are distributed systems. The key to scaling a distributed system is loose coupling between its components, and in a microservices architecture the most painful source of coupling is usually organizational, not architectural. Design patterns such as the Prometheus Operator reduce this organizational coupling by making teams more self-sufficient, letting them ship code faster.

Next Steps