Engineered for Availability
- A single-container architecture ensures operational simplicity.
- Extensive built-in metrics give real-time visibility into ingress controller performance.
- Rigorous automated performance and functional testing is integrated into the release process, so problems are found before release, not after.
High Performance and Scale
- Optimized for Kubernetes: Ensures low request latency even as you scale your workloads up and down.
- Scale to thousands of microservices: Battle-tested with thousands of individual microservices and independent configurations.
- Designed for cloud-native applications: Route and load balance any type of traffic (including HTTP/1.1, HTTP/2, gRPC, WebSockets). Gain insight into your traffic with best-in-class observability.
- Web scale: Powering the Internet's largest sites, including Airbnb, Lyft, and Google.
What is Kubernetes Ingress?
Kubernetes ingress is a collection of routing rules that govern how external users access services running in a Kubernetes cluster.
In a typical Kubernetes application, pods run inside a cluster, and a load balancer runs outside. The load balancer takes connections from the internet and routes the traffic to an edge proxy inside your cluster. This edge proxy is then responsible for routing traffic into your pods.
The edge proxy is commonly called an ingress controller because it is typically configured using Ingress resources in Kubernetes. However, the edge proxy can also be configured with custom resource definitions (CRDs) or annotations.
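As a concrete illustration of the routing rules described above, here is a minimal Kubernetes Ingress resource. The hostname, service name, and path are placeholders, not values from any particular deployment:

```yaml
# Minimal Ingress sketch: route external requests for example.com/api
# to a Service named api-service on port 80. All names are illustrative.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```

An ingress controller running in the cluster watches for resources like this and translates the rules into its edge proxy's routing configuration.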
"Ambassador makes it very easy for us to manage endpoints across all our regions worldwide and is able to seamlessly adapt and work with every region's 80 different endpoints, each with varying configuration requirements."
Staff Infrastructure Development Engineer