Docs / Telepresence 2.14
Configuring intercept using specifications
This page describes the options available in the Telepresence intercept specification.
With Telepresence, you can provide a file that defines how an intercept should work.
Root
Your intercept specification is where you create a standard, easy-to-use configuration to run pre and post tasks, start an intercept, and start your local application to handle the intercepted traffic.
There are many ways to configure your specification to suit your needs. The table below shows the possible options within your specification, and you can see the spec's schema, with all available options and formats, here.
Options | Description |
---|---|
name | Name of the specification. |
connection | Connection properties to use when Telepresence connects to the cluster. |
handlers | Local processes that handle traffic and/or perform setup. |
prerequisites | Things to set up prior to starting any intercepts, and tear things down once the intercept is complete. |
workloads | Remote workloads that are intercepted, keyed by workload name. |
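Putting the top-level fields above together, a specification skeleton might look like the following sketch (all names and values here are illustrative placeholders, not part of the original page):

```yaml
name: my-spec                 # optional; defaults to the file name
connection:
  context: dev-cluster        # placeholder kube context
handlers:
  - name: echo-server
    docker:
      image: my-registry/echo-server:latest   # placeholder image
prerequisites:
  - create: build-binary
    delete: rm-binary
workloads:
  - name: echo-server
    intercepts:
      - port: 8080
```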
Name
The name is optional. If you don't specify a name, the filename of the specification file is used.
Connection
The connection option is used to define how Telepresence connects to your cluster.
You can pass the most common parameters of the telepresence connect command (see telepresence connect --help) in camel case format.
Some of the most commonly used options include:
Options | Type | Format | Description |
---|---|---|---|
context | string | N/A | The kubernetes context to use |
mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with |
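For example, a connection block using the two options above could be sketched like this (the context and namespace names are placeholders):

```yaml
connection:
  context: dev-cluster        # kube context to connect with
  mappedNamespaces:           # limit Telepresence to these namespaces
    - default
    - staging
```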
Handlers
A handler is code running locally.
It can receive traffic for an intercepted service, or can set up prerequisites to run before/after the intercept itself.
When it is intended as an intercept handler (i.e. to handle traffic), it's usually the service you're working on, or another dependency (database, another third party service, ...) running on your machine. A handler can be a Docker container, or an application running natively.
The sample below creates an intercept handler, gives it the name echo-server, and uses a docker container. The container will automatically have access to the ports, environment, and mounted directories of the intercepted container.
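The original sample is not reproduced here; a minimal sketch consistent with that description, based on the docker fields documented below (the image name is an illustrative placeholder), could look like:

```yaml
handlers:
  - name: echo-server
    docker:
      image: my-registry/echo-server:latest   # placeholder image
      ports:
        - 8080                                # exposed to the host
```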
If you don't want to use Docker containers, you can still configure your handlers to start via a regular script.
The snippet below shows how to create a handler called echo-server that sets an environment variable of PORT=8080 and starts the application.
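The original snippet is not reproduced here; a sketch matching that description, using the script fields documented below (the run command is an illustrative placeholder for your application), might be:

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      shell: bash
      run: |
        ./echo-server   # placeholder: start your local application
```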
Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example, simulate an intercepted service going down:
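Such an empty handler is just a name with neither a script nor a docker element:

```yaml
handlers:
  - name: echo-server   # no script or docker: nothing runs locally
```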
The table below defines the parameters that can be used within the handlers section.
Options | Type | Format | Description |
---|---|---|---|
name | string | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler that the intercepts use to reference it |
environment | map list | N/A | Defines environment variables within your handler |
environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable |
environment[*].value | string | N/A | The value for the environment variable |
script | map | N/A | Tells the handler to run as a script, mutually exclusive to docker |
docker | map | N/A | Tells the handler to run as a docker container, mutually exclusive to script |
Script
The handler's script element defines the parameters:
Options | Type | Format | Description |
---|---|---|---|
run | string | N/A | The script to run. Can be multi-line |
shell | string | bash|zsh|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the SHELL environment variable |
Docker
The handler's docker element defines the parameters. The build and image parameters are mutually exclusive:
Options | Type | Format | Description |
---|---|---|---|
build | map | N/A | Defines how to build the image from source using docker build command |
compose | map | N/A | Defines how to integrate with an existing Docker Compose file |
image | string | image | Defines which image to be used |
ports | int list | N/A | The ports which should be exposed to the host |
options | string list | N/A | Additional options for the docker run command |
command | string | N/A | Optional command to run |
args | string list | N/A | Optional command arguments |
Build
The docker build element defines the parameters:
Options | Type | Format | Description |
---|---|---|---|
context | string | N/A | Defines either a path to a directory containing a Dockerfile, or a url to a git repository |
args | string list | N/A | Additional arguments for the docker build command. |
For additional information on these parameters, please check the docker documentation.
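A handler that builds its image from source instead of pulling one could be sketched as follows (the context path and build argument are illustrative placeholders):

```yaml
handlers:
  - name: echo-server
    docker:
      build:
        context: .              # directory containing a Dockerfile
        args:
          - --target=dev        # placeholder extra docker build argument
      ports:
        - 8080
```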
Compose
The Docker Compose element defines the way to integrate with the tool of the same name.
Options | Type | Format | Description |
---|---|---|---|
context | string | N/A | An optional Docker context, i.e. the path to or the directory containing your docker compose file |
services | map list | N/A | The services to use with the Telepresence integration |
spec | map | compose spec | Optional embedded docker compose specification |
Service
The service describes how to integrate with each service from your Docker Compose file, and can be seen as an override functionality. A service is normally not provided when you want to keep the original behavior, but can be provided for documentation purposes using the local behavior.
A service can be declared either as a property of compose in the Intercept Specification, or as an x-telepresence extension in the Docker Compose specification. The syntax is the same in both cases, but the name property must not be used together with x-telepresence because it is implicit.
Options | Type | Format | Description |
---|---|---|---|
name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file |
behavior | string | interceptHandler|remote|local | Behavior of the service in context of the intercept. |
mapping | map | N/A | Optional mapping to a cluster service. Only applicable for behavior: remote |
Behavior
Value | Description |
---|---|
interceptHandler | The service runs locally and will receive traffic from the intercepted pod. |
remote | The service will not run as part of docker compose. Instead, traffic is redirected to a service in the cluster. |
local | The service runs locally without modifications. This is the default. |
Mapping
Options | Type | Description |
---|---|---|
name | string | The name of the cluster service to link the compose service with |
namespace | string | The cluster namespace for service. This is optional and defaults to the namespace of the intercept |
Examples
Considering the following Docker Compose file:
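The original compose file is not reproduced here; an illustrative one containing a myapp service and a postgres service (the names used in the examples that follow; everything else is a placeholder) might be:

```yaml
services:
  myapp:
    build: .              # the application you are working on
    ports:
      - "8080:8080"
  postgres:
    image: "postgres:14"  # placeholder database image
    ports:
      - "5432:5432"
```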
This will use the myapp service as the interceptor.
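A sketch of the corresponding services entry in the Intercept Specification (the myapp name comes from the text; the surrounding structure follows the fields documented above):

```yaml
compose:
  services:
    - name: myapp
      behavior: interceptHandler   # myapp receives the intercepted traffic
```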
This will prevent the service from running locally. DNS will instead point to the service in the cluster with the same name.
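A sketch of that configuration, using the postgres service from the example compose file:

```yaml
compose:
  services:
    - name: postgres
      behavior: remote   # don't run locally; use the cluster service instead
```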
Adding a mapping allows you to select the cluster service more precisely, here by indicating to Telepresence that the postgres service should be mapped to the psql service in the big-data namespace.
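A sketch of that mapping, using the service and namespace names from the text:

```yaml
compose:
  services:
    - name: postgres
      behavior: remote
      mapping:
        name: psql           # cluster service to link with
        namespace: big-data  # defaults to the intercept's namespace if omitted
```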
As an alternative, the services can instead be added as x-telepresence extensions in the docker compose file:
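A sketch of the same overrides expressed as x-telepresence extensions; note that the name property is omitted because it is implicit (the compose entries themselves are placeholders matching the earlier example):

```yaml
services:
  myapp:
    x-telepresence:
      behavior: interceptHandler
    build: .
  postgres:
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
    image: "postgres:14"
```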
Prerequisites
When creating an intercept specification there is an option to include prerequisites.
Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.
Prerequisites is an array, so it can run many steps prior to starting your intercept and your intercept handlers.
The elements of the prerequisites array correspond to handlers.
The sample below declares that build-binary and rm-binary are two handlers; the first will be run before any intercepts, and the second will be run after cleaning up the intercepts.
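The original sample is not reproduced here; a sketch consistent with that description (the script commands are illustrative placeholders) could be:

```yaml
handlers:
  - name: build-binary
    script:
      run: go build -o ./echo-server .   # placeholder build step
  - name: rm-binary
    script:
      run: rm -f ./echo-server           # placeholder cleanup step
prerequisites:
  - create: build-binary   # runs before any intercepts
    delete: rm-binary      # runs after the intercepts are cleaned up
```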
If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.
The table below defines the parameters available within the prerequisites section.
Options | Description |
---|---|
create | The name of a handler to run before the intercept |
delete | The name of a handler to run after the intercept |
Workloads
Workloads define the services in your cluster that will be intercepted.
The example below creates an intercept on a service called echo-server on port 8080. It creates a personal intercept with the header x-intercept-id: foo, and routes its traffic to a handler called echo-server.
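The original example is not reproduced here; a sketch matching that description, built from the workload and intercept fields documented below (the handler field, tying the intercept to the handler of the same name, is assumed from the description), might be:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-intercept-id
            value: foo          # personal intercept filter
        port: 8080              # port the cluster service runs on
        handler: echo-server    # assumed: routes traffic to this handler
```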
This table defines the parameters available within a workload.
Options | Type | Format | Description | Default |
---|---|---|---|---|
name | string | [a-z][a-z0-9-]* | Name of the workload to intercept | N/A |
namespace | string | [a-z0-9][a-z0-9-]{1,62} | Namespace of workload to intercept | N/A |
intercepts | intercept list | N/A | The list of intercepts associated with the workload | N/A |
Intercepts
This table defines the parameters available for each intercept.
Options | Type | Format | Description | Default |
---|---|---|---|---|
enabled | boolean | N/A | If set to false, disables this intercept. | true |
headers | header list | N/A | Headers that will filter the intercept. | Auto generated |
service | name | [a-z][a-z0-9-]{1,62} | Name of service to intercept | N/A |
localPort | integer|string | 0-65535 | The port for the service which is intercepted | N/A |
port | integer | 0-65535 | The port the service in the cluster is running on | N/A |
pathPrefix | string | N/A | Path prefix filter for the intercept. Defaults to "/" | / |
previewURL | boolean | N/A | Determine if a preview url should be created | true |
banner | boolean | N/A | Used in the preview url option, displays a banner on the preview page | true |
Header
You can define headers to filter the requests which should end up on your machine when intercepting.
Options | Type | Format | Description | Default |
---|---|---|---|---|
name | string | N/A | Name of the header | N/A |
value | string | N/A | Value of the header | N/A |
Templating
Telepresence specs also support templating of scripts, commands, arguments, environments, and intercept headers. All Go Builtin and Sprig template functions can be used. In addition, Telepresence also adds variables:
Telepresence template variables
Options | Type | Description |
---|---|---|
Telepresence.Username | string | The name of the user running the spec |
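For example, a template expression could be used inside a handler's environment (the environment variable name here is an illustrative placeholder):

```yaml
handlers:
  - name: echo-server
    environment:
      - name: INTERCEPT_USER                     # placeholder variable
        value: "{{ .Telepresence.Username }}"    # expands to the user running the spec
```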
Usage
Running your specification from the CLI
After you've written your intercept specification, you will want to run it.
To start your intercept, use this command:
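Assuming the specification is saved as my-spec.yaml, the invocation would look like the following (the subcommand is recalled from the Telepresence 2.14 CLI; verify with telepresence intercept --help):

```shell
telepresence intercept run my-spec.yaml
```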
This will validate and run your spec. In case you just want to validate it, you can do so by using this command:
Using and sharing your specification as a CRD
If you want to share specifications across your team or your organization, you can save them as CRDs inside your cluster.
Install the CRD object in your cluster (one-time installation):
Then you need to deploy the specification in your cluster as a CRD:
For the echo-server example, it looks like this:
Then every person connected to the cluster can start your intercept by using this command:
You can also list available specifications:
Docker integration
Intercept specifications can be used within the Docker extension if you are using a YAML file and a Docker runtime as handlers.
IDE Integration
You can integrate our JSON schemas into your IDE to get autocompletion and hints while writing your intercept specification. There are two schemas available:
To add the schema to your IDE, follow the instructions for your given IDE; a few popular ones are listed below: VSCode, GoLand.