Configuring intercept using specifications

This page describes the options available in a Telepresence intercept specification.

With Telepresence, you can provide a file that defines how an intercept should work.

Root

An intercept specification gives you a standard, easy-to-use configuration for running pre- and post-intercept tasks, starting an intercept, and starting your local application to handle the intercepted traffic.

There are many ways to configure a specification to suit your needs. The table below shows the options available at the root of a specification, and you can see the spec's schema, with all available options and formats, here.

| Options | Description |
|---|---|
| name | Name of the specification. |
| connection | Connection properties to use when Telepresence connects to the cluster. |
| handlers | Local processes that handle traffic and/or perform setup and teardown. |
| prerequisites | Things to set up prior to starting any intercepts, and to tear down once the intercept is complete. |
| workloads | Remote workloads that are intercepted, keyed by workload name. |
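
For orientation, here is a minimal sketch of a complete specification. The context, image, and port values are hypothetical, and the handler field on the intercept is assumed from the Workloads section below:

```yaml
name: echo-server-spec
connection:
  context: my-cluster              # hypothetical kubeconfig context
handlers:
  - name: echo-server
    docker:
      image: jmalloc/echo-server   # hypothetical image
workloads:
  - name: echo-server
    intercepts:
      - port: 8080
        handler: echo-server       # assumed field linking the intercept to the handler
```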

Name

The name is optional. If you don't specify one, the filename of the specification file is used.

Connection

The connection option is used to define how Telepresence connects to your cluster.

You can pass the most common parameters of the telepresence connect command (see telepresence connect --help) using camel-case names.

Some of the most commonly used options include:

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | The Kubernetes context to use |
| mappedNamespaces | string list | [a-z0-9][a-z0-9-]{1,62} | The namespaces that Telepresence will be concerned with |
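
A sketch of a connection block, assuming a kubeconfig context named my-cluster and two namespaces of interest:

```yaml
connection:
  context: my-cluster
  mappedNamespaces:
    - dev
    - staging
```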

Handlers

A handler is code running locally.

It can receive traffic for an intercepted service, or can set up prerequisites to run before/after the intercept itself.

When it is intended as an intercept handler (i.e., to handle traffic), it's usually the service you're working on, or another dependency (a database, a third-party service, etc.) running on your machine. A handler can be a Docker container or an application running natively.

The sample below creates an intercept handler named echo-server that runs as a Docker container. The container automatically has access to the ports, environment, and mounted directories of the intercepted container.
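
A minimal sketch, assuming a publicly available echo server image:

```yaml
handlers:
  - name: echo-server
    docker:
      image: jmalloc/echo-server   # hypothetical image name
```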

If you don't want to use Docker containers, you can still configure your handlers to start via a regular script. The snippet below shows how to create a handler called echo-server that sets an environment variable of PORT=8080 and starts the application.
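
A sketch; the start command is hypothetical:

```yaml
handlers:
  - name: echo-server
    environment:
      - name: PORT
        value: "8080"
    script:
      run: ./echo-server   # hypothetical start command
```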

Keep in mind that an empty handler is still a valid handler. This is sometimes useful when you want to, for example, simulate an intercepted service going down.
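
A sketch of such a handler:

```yaml
handlers:
  - name: echo-server   # no script or docker: the handler does nothing
```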

The table below defines the parameters that can be used within the handlers section.

| Options | Type | Format | Description |
|---|---|---|---|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | Defines the name of your handler, which the intercepts use to reference it |
| environment | map list | N/A | Defines environment variables within your handler |
| environment[*].name | string | [a-zA-Z_][a-zA-Z0-9_]* | The name of the environment variable |
| environment[*].value | string | N/A | The value for the environment variable |
| script | map | N/A | Tells the handler to run as a script; mutually exclusive with docker |
| docker | map | N/A | Tells the handler to run as a Docker container; mutually exclusive with script |

Script

The handler's script element defines the following parameters:

| Options | Type | Format | Description |
|---|---|---|---|
| run | string | N/A | The script to run. Can be multi-line |
| shell | string | bash\|zsh\|sh | Shell that will parse and run the script. Can be bash, zsh, or sh. Defaults to the value of the SHELL environment variable |
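
A sketch of a multi-line script with an explicit shell; the handler name and commands are hypothetical:

```yaml
handlers:
  - name: build-and-run
    script:
      shell: bash
      run: |
        go build -o ./server .   # build the intercept handler binary
        ./server                 # then start it
```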

Docker

The handler's docker element defines the following parameters. The build and image parameters are mutually exclusive:

| Options | Type | Format | Description |
|---|---|---|---|
| build | map | N/A | Defines how to build the image from source using the docker build command |
| compose | map | N/A | Defines how to integrate with an existing Docker Compose file |
| image | string | image | Defines which image to use |
| ports | int list | N/A | The ports which should be exposed to the host |
| options | string list | N/A | Options for the docker run command |
| command | string | N/A | Optional command to run |
| args | string list | N/A | Optional command arguments |
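
A sketch combining several of these parameters; the image, option, and argument values are hypothetical:

```yaml
handlers:
  - name: echo-server
    docker:
      image: jmalloc/echo-server   # hypothetical image
      ports:
        - 8080                     # exposed to the host
      options:
        - --rm                     # passed to docker run; exact option syntax may vary
      args:
        - --port=8080              # hypothetical argument for the container command
```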

Build

The docker build element defines the following parameters:

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | Defines either a path to a directory containing a Dockerfile, or a URL to a Git repository |
| args | string list | N/A | Additional arguments for the docker build command |
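
A sketch of a handler that builds its image from a local directory; the context path and build argument are hypothetical:

```yaml
handlers:
  - name: echo-server
    docker:
      build:
        context: ./echo-server          # directory containing a Dockerfile
        args:
          - "--build-arg=VERSION=dev"   # hypothetical extra docker build argument
```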

For additional information on these parameters, please check the Docker documentation.

Compose

The Docker Compose element defines the way to integrate with the tool of the same name.

| Options | Type | Format | Description |
|---|---|---|---|
| context | string | N/A | An optional Docker Compose context: the path to, or the directory containing, your Docker Compose file |
| services | map list | N/A | The services to use with the Telepresence integration |
| spec | map | compose spec | Optional embedded Docker Compose specification |

Service

A service describes how to integrate with each service from your Docker Compose file, and can be seen as an override mechanism. A service is normally not provided when you want to keep the original behavior, but it can be provided for documentation purposes using the local behavior.

A service can be declared either as a property of compose in the intercept specification, or as an x-telepresence extension in the Docker Compose specification. The syntax is the same in both cases, but the name property must not be used together with x-telepresence because it is implicit.

| Options | Type | Format | Description |
|---|---|---|---|
| name | string | [a-zA-Z][a-zA-Z0-9_-]* | The name of your service in the compose file |
| behavior | string | interceptHandler\|remote\|local | Behavior of the service in the context of the intercept |
| mapping | map | N/A | Optional mapping to a cluster service. Only applicable for behavior: remote |

Behavior

| Value | Description |
|---|---|
| interceptHandler | The service runs locally and will receive traffic from the intercepted pod. |
| remote | The service will not run as part of docker compose. Instead, traffic is redirected to a service in the cluster. |
| local | The service runs locally without modifications. This is the default. |

Mapping

| Options | Type | Description |
|---|---|---|
| name | string | The name of the cluster service to link the compose service with |
| namespace | string | The cluster namespace for the service. This is optional and defaults to the namespace of the intercept |

Examples

The examples in this section assume a Docker Compose file with two services, myapp and postgres.
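
A hypothetical file along those lines:

```yaml
services:
  myapp:
    build: .
    ports:
      - "8080:8080"
  postgres:
    image: postgres:14
```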

This will use the myapp service as the intercept handler.
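
A sketch of the corresponding compose services entry (the handler name is hypothetical):

```yaml
handlers:
  - name: dev-environment
    docker:
      compose:
        services:
          - name: myapp
            behavior: interceptHandler
```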

This will prevent the service from running locally. DNS will point to the cluster service with the same name.
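
A sketch, using the postgres service from the hypothetical compose file above:

```yaml
handlers:
  - name: dev-environment
    docker:
      compose:
        services:
          - name: postgres
            behavior: remote
```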

Adding a mapping lets you select the cluster service more precisely; here, it tells Telepresence that the postgres service should be mapped to the psql service in the big-data namespace.
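
A sketch of the same service with a mapping added:

```yaml
handlers:
  - name: dev-environment
    docker:
      compose:
        services:
          - name: postgres
            behavior: remote
            mapping:
              name: psql           # the cluster service to link with
              namespace: big-data  # its namespace
```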

As an alternative, the services can instead be added as x-telepresence extensions in the Docker Compose file.
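
A sketch of the same configuration expressed in the compose file itself; note that name is omitted because it is implicit:

```yaml
services:
  myapp:
    build: .
    x-telepresence:
      behavior: interceptHandler
  postgres:
    image: postgres:14
    x-telepresence:
      behavior: remote
      mapping:
        name: psql
        namespace: big-data
```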

Prerequisites

When creating an intercept specification, there is an option to include prerequisites.

Prerequisites give you the ability to run scripts for setup, build binaries to run as your intercept handler, or many other use cases.

prerequisites is an array, so you can run multiple steps prior to starting your intercept and running your intercept handlers. The elements of the prerequisites array refer to handlers by name.

The sample below declares two handlers, build-binary and rm-binary; the first runs before any intercepts, and the second runs after the intercepts are cleaned up.
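
A sketch, with hypothetical build and cleanup commands:

```yaml
handlers:
  - name: build-binary
    script:
      run: go build -o ./echo-server .   # hypothetical build step
  - name: rm-binary
    script:
      run: rm -f ./echo-server           # hypothetical cleanup step
prerequisites:
  - create: build-binary
    delete: rm-binary
```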

If a prerequisite create succeeds, the corresponding delete is guaranteed to run even if the other steps in the spec fail.

The table below defines the parameters available within the prerequisites section.

| Options | Description |
|---|---|
| create | The name of a handler to run before the intercept |
| delete | The name of a handler to run after the intercept |

Workloads

Workloads define the services in your cluster that will be intercepted.

The example below creates an intercept on a service called echo-server on port 8080. It creates a personal intercept with the header x-intercept-id: foo, and routes its traffic to a handler called echo-server.
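
A sketch of that workload; the handler field linking the intercept to the echo-server handler is assumed from the prose above:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-intercept-id
            value: foo
        port: 8080
        handler: echo-server   # assumed field name
```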

This table defines the parameters available within a workload.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| name | string | [a-z][a-z0-9-]* | Name of the workload to intercept | N/A |
| namespace | string | [a-z0-9][a-z0-9-]{1,62} | Namespace of the workload to intercept | N/A |
| intercepts | intercept list | N/A | The list of intercepts associated with the workload | N/A |

Intercepts

This table defines the parameters available for each intercept.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| enabled | boolean | N/A | If set to false, disables this intercept | true |
| headers | header list | N/A | Headers that will filter the intercept | Auto-generated |
| service | name | [a-z][a-z0-9-]{1,62} | Name of the service to intercept | N/A |
| localPort | integer\|string | 0-65535 | The local port for the intercepted service | N/A |
| port | integer | 0-65535 | The port the service in the cluster is running on | N/A |
| pathPrefix | string | N/A | Path prefix filter for the intercept | / |
| previewURL | boolean | N/A | Determines whether a preview URL should be created | true |
| banner | boolean | N/A | Used with the preview URL option; displays a banner on the preview page | true |

You can define headers to filter the requests which should end up on your machine when intercepting.

| Options | Type | Format | Description | Default |
|---|---|---|---|---|
| name | string | N/A | Name of the header | N/A |
| value | string | N/A | Value of the header | N/A |

Templating

Telepresence specs also support templating of scripts, commands, arguments, environments, and intercept headers. All built-in Go template functions and all Sprig functions can be used. In addition, Telepresence adds the following variables:

Telepresence template variables

| Options | Type | Description |
|---|---|---|
| Telepresence.Username | string | The name of the user running the spec |
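
A sketch that uses this variable to generate a per-user intercept header:

```yaml
workloads:
  - name: echo-server
    intercepts:
      - headers:
          - name: x-intercept-id
            value: "{{ .Telepresence.Username }}"   # resolves to the user running the spec
```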

Usage

Running your specification from the CLI

After you've written your intercept specification, you will want to run it.

To start your intercept, run the specification file with the Telepresence CLI.
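
A sketch, assuming the spec is saved as my-spec.yaml (the intercept run subcommand name is an assumption):

```
telepresence intercept run my-spec.yaml
```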

This will validate and run your spec. If you just want to validate it, you can do so with a separate command.
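
A sketch; the validate subcommand name is likewise an assumption:

```
telepresence intercept validate my-spec.yaml
```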

Using and sharing your specification as a CRD

If you want to share specifications across your team or your organization, you can save them as CRDs inside your cluster.

  1. Install the CRD object in your cluster (one-time installation).
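
     A sketch, assuming the CRDs ship with the Telepresence Helm chart (the flag is an assumption):

     ```
     telepresence helm install --crds
     ```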

  2. Then you need to deploy the specification in your cluster as a CRD, wrapping the spec in a custom resource. The echo-server example might look like this:
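
     A sketch; the apiVersion and kind are assumptions, so check the CRD installed in your cluster for the exact values:

     ```yaml
     apiVersion: getambassador.io/v1alpha2   # assumed group/version
     kind: InterceptSpecification            # assumed kind
     metadata:
       name: echo-server
     spec:
       workloads:
         - name: echo-server
           intercepts:
             - headers:
                 - name: x-intercept-id
                   value: foo
               port: 8080
               handler: echo-server
     ```

     You can then apply it with kubectl apply -f, like any other Kubernetes resource.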

     Then everyone who is connected to the cluster can start your intercept by referencing the specification by name.
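
     A sketch, assuming run accepts a specification name as well as a file path:

     ```
     telepresence intercept run echo-server
     ```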

     You can also list the available specifications.
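
     A sketch; the exact subcommand is an assumption:

     ```
     telepresence intercept list
     ```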

Docker integration

Intercept specifications can be used within the Docker extension if you are using a YAML file and a Docker runtime for your handlers.

IDE Integration

You can integrate our JSON schemas into your IDE to provide autocompletion and hints while writing your intercept specification. There are two schemas available:

To add the schema to your IDE, follow the instructions for your given IDE; a few popular ones are listed below: VSCode, GoLand.