

Telepresence Quickstart

Telepresence is an open source tool that enables you to set up remote development environments for Kubernetes where you can still use all of your favorite local tools like IDEs, debuggers, and profilers.


Prerequisites

  • kubectl, the Kubernetes command-line tool, or the OpenShift Container Platform command-line interface, oc.

  • A Kubernetes Deployment and Service.

Install Telepresence on Your Machine

Install Telepresence by running the relevant commands below for your OS. Installing and using the Telepresence traffic-manager in your cluster requires administrative RBAC permissions; if you are not the administrator of your cluster, you will need to be granted those permissions.


To install Telepresence on Windows, download the Telepresence binary.

Once you have downloaded and unzipped the binary, you will need to do a few things:

  1. Rename the binary from telepresence-windows-amd64.exe to telepresence.exe
  2. Move the binary to C:\Program Files (x86)\$USER\Telepresence\

Install Telepresence in Your Cluster

  1. Install the traffic manager into your cluster with telepresence helm install. More information can be found in the Telepresence installation documentation. This will require root access on your machine.
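A minimal sketch of this step, assuming kubectl's current context already points at the target cluster:

```shell
# Install the Telepresence traffic manager into the cluster
# pointed at by the current kubeconfig context.
telepresence helm install

# Later, to upgrade or remove it:
#   telepresence helm upgrade
#   telepresence helm uninstall
```

The traffic manager only needs to be installed once per cluster; every developer connecting to that cluster then shares it.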

Intercept Your Service

With Telepresence, you can create global intercepts that intercept all traffic going to a service in your remote cluster and route it to your local environment instead.

  1. Connect to your cluster with telepresence connect to establish access to the Kubernetes API server:

    You now have access to your remote Kubernetes API server as if you were on the same network. You can now use any local tools to connect to any service in the cluster.
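For example (example-service, the default namespace, and port 8080 are placeholders; once connected, any in-cluster DNS name resolves):

```shell
# Connect the local machine to the cluster; this starts a
# local daemon and may prompt for sudo to configure routing.
telepresence connect

# Cluster DNS now resolves locally, so services can be
# reached by their in-cluster names.
curl http://example-service.default:8080/
```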

  2. Enter telepresence list and make sure the service you want to intercept is listed. For example:
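The output lists each workload and whether it can be intercepted; an illustrative sketch for a cluster running a hypothetical example-service:

```shell
telepresence list

# Illustrative output (exact wording varies by version):
#   example-service: ready to intercept (traffic-agent not yet installed)
```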

  3. Get the name of the port you want to intercept on your service: kubectl get service <service-name> --output yaml.

    For example:
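A trimmed sketch for a hypothetical example-service; the port name (http here) is what you will reference when intercepting:

```shell
kubectl get service example-service --output yaml

# Relevant excerpt of the YAML (illustrative):
#   spec:
#     ports:
#     - name: http
#       port: 80
#       targetPort: http
```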

  4. Intercept all traffic going to the service in your cluster: telepresence intercept <service-name> --port <local-port>[:<remote-port>] --env-file <path-to-env-file>.

    • For --port: specify the port the local instance of your service is running on. If the intercepted service exposes multiple ports, specify the port you want to intercept after a colon.
    • For --env-file: specify a file path for Telepresence to write the environment variables that are set in the pod. For example, when Telepresence intercepts traffic going to service example-service, requests that reach the service on port http in the cluster are routed to port 8080 on the workstation, and the service's environment variables are written to ~/example-service-intercept.env.
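Putting that together, the example-service intercept described above can be created with:

```shell
# Route traffic that hits example-service's "http" port in the
# cluster to port 8080 on this workstation, and write the pod's
# environment variables to a local file.
telepresence intercept example-service --port 8080:http --env-file ~/example-service-intercept.env
```

When you are done, remove the intercept with telepresence leave example-service.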
  5. Start your local environment using the environment variables retrieved in the previous step.

The following are some examples of how to pass the environment variables to your local process:

  • Docker: enter docker run and provide the path to the file using the --env-file argument. For more information about Docker run commands, see the Docker command-line reference documentation.
  • Visual Studio Code: specify the path to the environment variables file in the envFile field of your configuration.
  • JetBrains IDE (IntelliJ, WebStorm, PyCharm, GoLand, etc.): use the EnvFile plugin.
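For instance, a Docker-based local run might look like the following (the image name is a placeholder for your local build):

```shell
# Start the local build of the service with the intercepted
# pod's environment variables injected by Docker.
docker run --env-file ~/example-service-intercept.env -p 8080:8080 example-service:local
```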
  6. Query the environment in which you intercepted the service and verify that your local instance is being invoked. All of the traffic previously routed to your Kubernetes Service is now routed to your local environment.
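For example, while connected you can curl the service by its in-cluster name and watch the request hit your local process (names are placeholders):

```shell
# This request is routed to the local instance started above.
curl http://example-service.default/
```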

🎉 You've Unlocked a Faster Development Workflow for Kubernetes with Telepresence

Now, with Telepresence, you can:

  • Make changes on the fly and see them reflected when interacting with your remote Kubernetes environment. This is just like hot reloading, but it works across both local and remote environments.
  • Query services and microservice APIs that are only accessible in your remote cluster's network.
  • Set breakpoints in your IDE and re-route remote traffic to your local machine to investigate bugs with realistic user traffic and API calls.

What’s Next?