Kubernetes Ingress 101: NodePort, Load Balancers, and Ingress Controllers
This article was updated in December 2021.
This article will introduce the three general strategies in Kubernetes for ingress, and the tradeoffs with each approach. I’ll then explore some of the more sophisticated requirements of an ingress strategy. Finally, I’ll give some guidelines on how to pick your Kubernetes ingress strategy.
What is Kubernetes ingress?
Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster. However, in real-world Kubernetes deployments, there are frequently other considerations beyond routing for managing ingress.
We’ll discuss these requirements in more detail below.
In Kubernetes, there are three general approaches to exposing your application:
- Use a Kubernetes service of type `NodePort`, which exposes the application on a port across each of your nodes
- Use a Kubernetes service of type `LoadBalancer`, which creates an external load balancer that points to a Kubernetes service in your cluster
- Use a Kubernetes Ingress Resource
What is a NodePort?
A `NodePort` is an open port on every node of your cluster. When traffic arrives on that port, Kubernetes transparently routes it to your service, even if the pod serving the request is running on a different node. Every Kubernetes cluster supports `NodePort`, although in cloud environments you may need to open the port in your firewall rules. The `NodePort` itself is assigned from a range reserved by the cluster (30000-32767 by default), so you generally cannot choose a well-known port such as 80 or 443. The `NodePort` abstraction is best thought of as a building block for higher-level ingress models (such as load balancers); it is handy for development and testing, but not something you would normally expose directly to production traffic.
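As a concrete illustration, here is a minimal sketch of a `NodePort` Service manifest. The names and port numbers are placeholders, and it assumes a Deployment whose pods carry the label `app: my-app`.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app          # assumes pods labeled app: my-app
  ports:
    - port: 80           # port exposed inside the cluster
      targetPort: 8080   # port the application container listens on
      nodePort: 30080    # optional; must fall in the cluster's NodePort range (default 30000-32767)
```

With this in place, the application is reachable from outside the cluster at `<any-node-ip>:30080`.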
What is a Load Balancer?
Using a `LoadBalancer` service type automatically provisions an external load balancer that points at a Kubernetes service in your cluster, giving external clients a single, stable address. The exact implementation of a `LoadBalancer` depends on your cloud provider, and not every environment supports the `LoadBalancer` service type; on bare metal, for example, you have to supply your own implementation. Where it is available, however, a `LoadBalancer` service is the simplest way to expose a single service directly, with the caveat that each `LoadBalancer` service typically provisions its own external load balancer.
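For comparison, here is a minimal sketch of a `LoadBalancer` Service. The service name and ports are placeholders, and the external address that gets provisioned depends entirely on your cloud provider.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer
  selector:
    app: my-app          # assumes pods labeled app: my-app
  ports:
    - port: 80           # port exposed by the cloud load balancer
      targetPort: 8080   # port the application container listens on
```

Once the cloud provider provisions the load balancer, its address appears in the service's status and can be inspected with `kubectl get service my-app-lb`.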
Ingress Controllers and Ingress Resources
Kubernetes supports a high-level abstraction called Ingress, which allows simple host- or URL-based HTTP routing. An Ingress is a core concept of Kubernetes (stable as of Kubernetes 1.19), but it is always implemented by a third-party proxy. These implementations are known as ingress controllers. An ingress controller is responsible for reading the Ingress Resource information and processing that data accordingly. Different ingress controllers have extended the specification in different ways to support additional use cases.
Ingress is tightly integrated into Kubernetes, meaning that your existing workflows around `kubectl` will likely extend nicely to managing ingress.
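To make this concrete, here is a minimal sketch of an Ingress resource using the `networking.k8s.io/v1` API discussed later in this article. The hostname, class name, and backend service names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: nginx          # must match an IngressClass installed in the cluster
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix       # Prefix, Exact, or ImplementationSpecific
            backend:
              service:
                name: api-service  # assumed backend Service
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service  # assumed backend Service
                port:
                  number: 80
```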
Real-world ingress
We’ve just covered the three basic patterns for routing external traffic to your Kubernetes cluster. However, we’ve only discussed how to route traffic to your cluster. Typically, though, your Kubernetes services will impose additional requirements on your ingress. Examples of this include:
- content-based routing, e.g., routing based on HTTP method, request headers, or other properties of the specific request
- resilience, e.g., rate limiting, timeouts
- support for multiple protocols, e.g., WebSockets or gRPC
- authentication
Unless you’re running a very simple cloud application, you’ll likely need support for some or all of these capabilities. And, importantly, many of these requirements may need to be managed at the service level, which means you want to manage these concerns inside Kubernetes.
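As one illustration of service-level configuration, the sketch below shows roughly how header-based routing and a per-route timeout can be expressed with an Edge Stack `Mapping` resource. The resource and service names are placeholders, and the field names reflect the v2 CRD; check the CRD version you actually have installed.

```yaml
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: canary-backend
spec:
  prefix: /backend/          # route requests whose path starts with /backend/
  service: backend-canary    # assumed Kubernetes Service to send matching traffic to
  headers:
    x-canary: "true"         # only match requests carrying this header
  timeout_ms: 5000           # per-route request timeout
```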
Start with a load balancer
Regardless of your ingress strategy, you will probably need to start with an external load balancer. This load balancer will then route traffic to a Kubernetes service (or ingress) on your cluster that performs service-specific routing. In this setup, your load balancer provides a stable endpoint (IP address) for external traffic to reach.
Both ingress controllers and Kubernetes services require an external load balancer, and, as previously discussed, a `NodePort` is not designed to be used directly for production traffic.
Service-specific ingress management
So the question for your ingress strategy is really about choosing the right way to manage traffic from your external load balancer to your services. What are your options?
- You can choose an ingress controller such as ingress-nginx or NGINX kubernetes-ingress
- You can choose an API Gateway deployed as a Kubernetes service such as Edge Stack (built on Envoy) or Traefik.
- You can deploy your own using a custom configuration of NGINX, HAProxy, or Envoy.
Assuming you don't want to deploy your own, how do you choose between an ingress controller and an API gateway deployed as a Kubernetes service? Surprisingly, there are no fundamental architectural differences between the two; the choice comes down to the actual capabilities each implementation provides.
The original motivation behind ingress was to create a standard API to manage how external traffic is routed to cluster services. However, the reality is that ingress isn't actually a portable standard. The standard is imprecise (different ingress controllers have different semantics, e.g., for the behavior of trailing `/` characters in paths), and many common requirements can only be expressed through controller-specific annotations, so an Ingress resource written for one controller rarely works unchanged with another.
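For example, a rewrite rule in ingress-nginx is expressed through a controller-specific annotation; another controller would ignore it or require a different mechanism. The sketch below uses placeholder names and assumes the community ingress-nginx controller is installed.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-example
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /   # understood by ingress-nginx only
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /legacy
            pathType: Prefix
            backend:
              service:
                name: legacy-service   # assumed backend Service
                port:
                  number: 80
```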
The Evolution of the Ingress API, Ingress v1, and the Gateway API
Ever since the Ingress resource moved to its final location under the permanent `networking.k8s.io` API group, the community has been working to address the limitations of the original specification.
Kubernetes 1.18, therefore, introduced 3 noteworthy changes:
- The new pathType field can specify how HTTP request paths should be matched.
- The new IngressClass resource can specify which controller should handle a given Ingress. The IngressClass resource effectively replaces the `kubernetes.io/ingress.class` annotation and allows for extension points via its `parameters` field, as sketched below.
- Support for wildcard hostnames was added.
More details on the rationale for these changes can be found in the corresponding Kubernetes Enhancement Proposal, or KEP for short. The KEP also notes some of the challenges in making a consistent standard for ingress across multiple implementations. Kubernetes 1.19 then graduated Ingress and IngressClass to `networking.k8s.io/v1`, deprecating the `networking.k8s.io/v1beta1` versions.
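Here is a sketch of an IngressClass resource showing that extension point. The controller string and the referenced parameters object are placeholders; both depend on the ingress controller you deploy.

```yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: example-class
spec:
  controller: example.com/ingress-controller   # identifies which controller handles Ingresses of this class
  parameters:                                  # optional, controller-specific configuration object
    apiGroup: k8s.example.com
    kind: IngressParameters
    name: example-params
```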
In 2020, the SIG-Network community convened a working group to evolve the Ingress v1 specification. Originally called the Service APIs working group, the group was renamed the Gateway API working group in February 2021. The Gateway API (https://kubernetes-sigs.github.io/gateway-api/) is a much richer set of APIs that will be added to Kubernetes. One of the core principles of the design is decoupling routes from the configuration of the gateway resource itself. This is very similar to how other ingress controllers have evolved (e.g., Ambassador with its `Mapping` resource). At the time of this update, the Gateway API had reached release 0.4.0, and its resources were still published under pre-stable (alpha/beta) API versions.
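To illustrate that decoupling: a cluster operator manages the Gateway, while application teams attach routes to it. The sketch below follows the beta HTTPRoute schema; the resource names are placeholders, and the API has continued to evolve since this article's update.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: shared-gateway       # the Gateway resource this route attaches to
  hostnames:
    - "app.example.com"
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: app-service      # assumed backend Service
          port: 80
```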
So, at the end of the day, your choice for service-specific ingress management should depend on your specific requirements, and a specific implementation’s ability to meet your requirements. Different ingress controllers will have different functionality, just like API Gateways. Here are a few choices to consider:
- There are three different NGINX ingress controllers, with different feature sets and functionality.
- Traefik can also be deployed as an ingress controller, and exposes a subset of its functionality through Kubernetes annotations.
- Kong is a popular open source API gateway built on NGINX. However, because it supports many infrastructure platforms, it isn’t optimized for Kubernetes. For example, Kong requires a database, when Kubernetes provides an excellent persistent data store in etcd. Kong also is configured via REST, while Kubernetes embraces declarative configuration management.
- Edge Stack is built on the Envoy Proxy and exposes a rich set of configuration options for your services, as well as support for external authentication services. Its open source core has been accepted as a CNCF Incubation project under the name Emissary-ingress.
Summary
Kubernetes ingress is a work-in-progress. Organizations appear to be converging on an external load balancer that sends external traffic to a service router (API Gateway, ingress controller). This service router is declaratively configured via Kubernetes annotations. If you’re interested in following the evolution of Kubernetes ingress, check out the Kubernetes Network SIG and the current plan in this KEP. To learn more about Kubernetes ingress and the options available for ingress controllers and API gateways, check out our resources on Kubernetes Ingress.
Kubernetes ingress with Ambassador Edge Stack
Edge Stack is a Kubernetes-native API Gateway built on the Envoy Proxy. It is easily configured via Kubernetes Custom Resource Definitions and supports all the use cases mentioned in this article. Deploy Edge Stack to Kubernetes in just a few simple steps and use it as your Envoy-powered Kubernetes ingress controller.