
telepresenceio/telepresence

Local development against a remote Kubernetes or OpenShift cluster


Top Related Projects

  • Emissary: open source Kubernetes-native API gateway for microservices built on the Envoy Proxy
  • Istio: Connect, secure, control, and observe services.
  • Linkerd2: Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.
  • Traefik: The Cloud Native Application Proxy
  • Envoy: Cloud-native high-performance edge/middle/service proxy
  • Minikube: Run Kubernetes locally

Quick Overview

Telepresence is an open-source tool for local development of Kubernetes microservices. It allows developers to run a single service locally while connecting to a remote Kubernetes cluster, enabling faster development cycles and easier debugging of distributed applications.

Pros

  • Seamless local development experience with remote Kubernetes clusters
  • Faster development cycles by avoiding constant container rebuilds and deploys
  • Easy debugging of microservices in complex distributed environments
  • Supports various programming languages and frameworks

Cons

  • Initial setup can be complex for some environments
  • May introduce additional network latency in certain scenarios
  • Requires careful configuration to ensure security in production environments
  • Limited support for some advanced Kubernetes features

Getting Started

  1. Install Telepresence:
# macOS
brew install datawire/blackbird/telepresence

# Linux
curl -s https://packagecloud.io/install/repositories/datawire/telepresence/script.deb.sh | sudo bash
sudo apt install telepresence
  2. Connect to your Kubernetes cluster:
telepresence connect
  3. Intercept a service:
telepresence intercept <service-name> --port <local-port>:<remote-port>
  4. Run your local service and start developing! (See the sketch just below for checking on and removing an intercept.)
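When you want to check on or tear down an intercept later, the CLI also provides list, leave, and quit commands. A minimal sketch (the service name is a placeholder; by default the intercept name is the same as the workload name):

# Show the workloads in the connected namespace and which of them are intercepted
telepresence list

# Stop intercepting the service, but stay connected to the cluster
telepresence leave <service-name>

# Disconnect from the cluster entirely
telepresence quit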

Competitor Comparisons

Emissary: open source Kubernetes-native API gateway for microservices built on the Envoy Proxy

Pros of Emissary

  • Designed as a full-featured API Gateway and Ingress Controller
  • Offers advanced traffic management and routing capabilities
  • Provides built-in support for authentication and rate limiting

Cons of Emissary

  • Steeper learning curve due to more complex configuration options
  • May be overkill for simple development environments
  • Requires more resources to run compared to Telepresence

Code Comparison

Emissary configuration example:

---
apiVersion: getambassador.io/v3alpha1
kind: Mapping
metadata:
  name: example-mapping
spec:
  prefix: /example/
  service: example-service

Telepresence configuration example:

intercepts:
- name: example
  service: example-service
  port: 8080

Emissary focuses on defining API routes and services, while Telepresence emphasizes local development and service interception. Emissary's configuration is more detailed, reflecting its broader feature set as an API Gateway. Telepresence's configuration is simpler, tailored for development workflows and service proxying.

While both tools serve Kubernetes environments, they address different needs. Emissary is better suited for production-grade API management, whereas Telepresence excels in streamlining local development and testing against remote clusters.


Istio: Connect, secure, control, and observe services.

Pros of Istio

  • Comprehensive service mesh solution with advanced traffic management, security, and observability features
  • Supports multi-cluster and multi-cloud deployments
  • Provides robust load balancing, circuit breaking, and fault injection capabilities

Cons of Istio

  • Steeper learning curve and more complex setup compared to Telepresence
  • Higher resource overhead due to its extensive feature set
  • May introduce additional latency in certain scenarios

Code Comparison

Istio (configuring a virtual service):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-service
spec:
  hosts:
  - my-service
  http:
  - route:
    - destination:
        host: my-service
        subset: v1

Telepresence (intercepting traffic):

telepresence intercept my-service --port 8080:80

Istio provides a more declarative approach to traffic management, while Telepresence offers a simpler command-line interface for local development and debugging. Istio's configuration is more verbose but offers greater flexibility, whereas Telepresence focuses on ease of use for developers working on microservices locally.

Both tools serve different purposes: Istio is a full-fledged service mesh for production environments, while Telepresence is primarily designed for local development and testing of microservices in Kubernetes clusters.


Linkerd2: Ultralight, security-first service mesh for Kubernetes. Main repo for Linkerd 2.x.

Pros of Linkerd2

  • Provides a full-featured service mesh with advanced traffic management and observability
  • Offers automatic mTLS encryption and strong security features out-of-the-box
  • Lightweight and has minimal performance overhead compared to other service meshes

Cons of Linkerd2

  • Requires cluster-wide installation and modifications, which can be complex
  • May introduce additional latency for inter-service communication
  • Limited to Kubernetes environments, not suitable for local development

Code Comparison

Linkerd2 (installing the service mesh):

linkerd install | kubectl apply -f -
linkerd inject deployment.yaml | kubectl apply -f -

Telepresence (local development setup):

telepresence connect
telepresence intercept my-service --port 8080:80

While both projects aim to improve Kubernetes development and operations, they serve different purposes. Linkerd2 is a comprehensive service mesh for production environments, while Telepresence focuses on local development and debugging of Kubernetes applications.


Traefik: The Cloud Native Application Proxy

Pros of Traefik

  • Designed as a full-featured reverse proxy and load balancer
  • Supports automatic HTTPS with Let's Encrypt integration
  • Offers dynamic configuration and service discovery

Cons of Traefik

  • Steeper learning curve for complex configurations
  • May be overkill for simple development environments
  • Requires more resources to run compared to Telepresence

Code Comparison

Traefik configuration (YAML):

http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
  services:
    my-service:
      loadBalancer:
        servers:
          - url: "http://localhost:8080"

Telepresence configuration (CLI):

telepresence connect
telepresence intercept my-service --port 8080:80

While Traefik is a powerful reverse proxy and load balancer, Telepresence focuses on local development and testing of Kubernetes services. Traefik excels in production environments, offering advanced routing and load balancing features. Telepresence, on the other hand, simplifies the development workflow by allowing developers to run services locally while connected to a remote Kubernetes cluster.


Envoy: Cloud-native high-performance edge/middle/service proxy

Pros of Envoy

  • More versatile and can be used as a general-purpose proxy for various protocols
  • Highly performant and scalable, suitable for large-scale deployments
  • Extensive feature set for traffic management, observability, and security

Cons of Envoy

  • Steeper learning curve due to its complexity and wide range of features
  • Requires more configuration and setup compared to Telepresence
  • May be overkill for simple development scenarios

Code Comparison

Telepresence configuration example:

intercepts:
- name: example
  service: example-svc
  port: 8080

Envoy configuration example:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 8080

While Telepresence focuses on local development and testing, Envoy is a full-featured proxy designed for production environments. Telepresence offers a simpler setup for developers working on microservices, whereas Envoy provides more advanced networking capabilities but requires more extensive configuration.


Minikube: Run Kubernetes locally

Pros of Minikube

  • Provides a full-fledged local Kubernetes environment
  • Supports multiple hypervisors and container runtimes
  • Offers a more realistic Kubernetes experience for testing and development

Cons of Minikube

  • Requires more system resources compared to Telepresence
  • Setup and configuration can be more complex
  • May not be as suitable for rapid development cycles

Code Comparison

Minikube startup:

minikube start --driver=docker
kubectl create deployment hello-minikube --image=k8s.gcr.io/echoserver:1.10
kubectl expose deployment hello-minikube --type=NodePort --port=8080

Telepresence usage:

telepresence connect
telepresence intercept hello-world --port 8080:http

Key Differences

  • Minikube creates a full local Kubernetes cluster, while Telepresence connects to an existing cluster
  • Telepresence focuses on local development and debugging, whereas Minikube is more suited for testing and simulating production environments
  • Minikube requires more setup and resources, but provides a more comprehensive Kubernetes experience
  • Telepresence offers faster development cycles and easier integration with local development environments

Both tools serve different purposes in the Kubernetes ecosystem, with Minikube being more suitable for full cluster simulation and Telepresence excelling in rapid development and debugging scenarios.


README

Telepresence: fast, efficient local development for Kubernetes microservices


Telepresence gives developers infinite scale development environments for Kubernetes.

Key benefits

With Telepresence:

  • You run your services locally, using your favorite IDE and other tools
  • Your workstation is connected to the cluster and can access its services

This gives developers:

  • A fast local dev loop, with no waiting for a container build / push / deploy
  • Ability to use their favorite local tools (IDE, debugger, etc.)
  • Ability to run large-scale applications that can't run locally
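In practice, a minimal session looks roughly like the sketch below; the service name and port are placeholders, and the Walkthrough later in this README goes through each step in detail:

$ telepresence connect                                 # attach your workstation to the cluster network
$ curl my-service                                      # cluster services now resolve by name from your laptop
$ telepresence intercept my-service --port 8080:http   # route the service's inbound traffic to localhost:8080
$ telepresence quit                                    # disconnect when you are done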

Quick Start

A few quick ways to start using Telepresence:

  • Telepresence Quick Start
  • Install Telepresence
  • Contributor's Guide
  • Meetings: Check out our community meeting schedule for opportunities to interact with Telepresence developers

Walkthrough

Install something in the cluster that Telepresence can engage with:

Start with an empty cluster:

$ kubectl create deploy hello --image=registry.k8s.io/echoserver:1.9
deployment.apps/hello created
$ kubectl expose deploy hello --port 80 --target-port 8080
service/hello exposed
$ kubectl get ns,svc,deploy,po
NAME                        STATUS   AGE
namespace/default           Active   4d19h
namespace/kube-node-lease   Active   4d19h
namespace/kube-public       Active   4d19h
namespace/kube-system       Active   4d19h

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hello        ClusterIP   10.98.148.129   <none>        80/TCP    112s
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   4d19h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           2m47s

NAME                        READY   STATUS    RESTARTS   AGE
pod/hello-87f7f548f-djc8v   1/1     Running   0          1m47s

Check telepresence version

$ telepresence version
OSS Client : v2.23.0
Root Daemon: not running
User Daemon: not running

Setup Traffic Manager in the cluster

Install Traffic Manager in your cluster. By default, it will reside in the ambassador namespace:

$ telepresence helm install

Traffic Manager installed successfully
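If you want to confirm that the Traffic Manager came up before connecting, a plain kubectl query against the ambassador namespace (its default home, as noted above) is enough; the same resources are shown in full later in this walkthrough:

$ kubectl -n ambassador get deploy traffic-manager   # should report READY 1/1 once the manager is running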

Establish a connection to the cluster (outbound traffic)

Let telepresence connect:

$ telepresence connect
 ✔ Connected to context rancher-desktop, namespace default (https://127.0.0.1:6443)       2.4s 

A session is now active and outbound connections are routed to the cluster; in other words, your laptop is logically "inside" a namespace in the cluster.

Since telepresence connected to the default namespace, all services in that namespace can now be reached directly by their name. You can of course also use namespaced names, e.g. curl hello.default.

$ curl hello

Hostname: hello-87f7f548f-djc8v

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.1.5.190
	method=GET
	real path=/
	query=
	request_version=1.1
	request_scheme=http
	request_uri=http://hello:8080/

Request Headers:
	accept=*/*
	host=hello
	user-agent=curl/8.9.1

Request Body:
	-no body in request-
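Besides reaching services with curl, you can ask telepresence about the session itself. The status subcommand reports the daemon state and which context and namespace the session is attached to (the exact fields vary between versions):

$ telepresence status   # daemon state, connected context, and namespace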

Intercept the service, i.e. redirect traffic destined for it to our laptop (inbound traffic)

Add an intercept for the hello deployment on port 9000. Here, we also start a service listening on that port:

$ telepresence intercept hello --port 9000 -- python3 -m http.server 9000
 ✔ Intercepted                                                                              2.1s 
Using Deployment hello
   Intercept name    : hello
   State             : ACTIVE
   Workload kind     : Deployment
   Intercepting      : 10.1.5.196 -> 127.0.0.1
       8080 -> 9000 TCP
   Volume Mount Point: /tmp/telfs-629530207
Serving HTTP on 0.0.0.0 port 9000 (http://0.0.0.0:9000/) ...

The python3 -m http.server process is now listening on port 9000 and will run until terminated with <ctrl>-C. Access it from a browser using http://hello/ or with curl from another terminal. With curl, it returns an HTML listing of the directory where the server was started, something like:

$ curl hello
<!DOCTYPE HTML>
<html lang="en">
<head>
<meta charset="utf-8">
<title>Directory listing for /</title>
</head>
<body>
<h1>Directory listing for /</h1>
<hr>
<ul>
<li><a href="file1.txt">file1.txt</a></li>
<li><a href="file2.txt">file2.txt</a></li>
</ul>
<hr>
</body>
</html>

Observe that the python service reports that it's being accessed:

127.0.0.1 - - [03/Apr/2025 09:44:57] "GET / HTTP/1.1" 200 -
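To see which workloads currently have traffic redirected to your machine, use telepresence list; while the intercept above is active, the hello workload should be reported as intercepted (output details vary between versions):

$ telepresence list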

Clean-up and close daemon processes

End the service with <ctrl>-C and then try curl hello or http://hello again. The intercept is gone, and the echo service responds as normal.

Now end the session too. Your desktop no longer has access to the cluster internals.

$ telepresence quit
 ✔ Disconnected                                                                           0.1s 
$ curl hello
curl: (6) Could not resolve host: hello

The telepresence daemons are still running in the background, which is harmless. You'll need to stop them before you upgrade telepresence. That's done by passing the option -s (stop all local telepresence daemons) to the quit command.

$ telepresence quit -s
 ✔ Quit                                                                                   0.3s 

What got installed in the cluster?

Telepresence installs the Traffic Manager in your cluster if it is not already present. This deployment remains unless you uninstall it.

Telepresence injects the Traffic Agent as an additional container into the pods of the workload you intercept, and will optionally install an init-container to route traffic through the agent (the init-container is only injected when the service is headless or uses a numerical targetPort). The modifications persist unless you uninstall them.

At first glance, we can see that the deployment is installed ...

$ kubectl get svc,deploy,pod
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hello        ClusterIP   10.102.244.61   <none>        80/TCP    10m
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   4d20h

NAME                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hello   1/1     1            1           11m

NAME                        READY   STATUS    RESTARTS   AGE
pod/hello-87f7f548f-mdg8d   2/2     Running   0          6m36s

... and that the traffic-manager is installed in the "ambassador" namespace.

$ kubectl -n ambassador get svc,deploy,pod
NAME                      TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
service/agent-injector    ClusterIP   10.107.17.143   <none>        443/TCP    31m
service/traffic-manager   ClusterIP   None            <none>        8081/TCP   31m

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/traffic-manager   1/1     1            1           31m

NAME                                   READY   STATUS    RESTARTS   AGE
pod/traffic-manager-7cc6668576-hmlzz   1/1     Running   0          31m

The traffic-agent is installed too, in the hello pod, here together with an init-container because the service uses a numerical targetPort (a compact way to list just the container names is shown right after the pod description below).

$ kubectl describe pod hello-87f7f548f-mdg8d
Name:             hello-87f7f548f-mdg8d
Namespace:        default
Priority:         0
Service Account:  default
Node:             lima-rancher-desktop/192.168.65.3
Start Time:       Thu, 03 Apr 2025 09:43:37 +0200
Labels:           app=hello
                  pod-template-hash=87f7f548f
                  telepresence.io/workloadEnabled=true
                  telepresence.io/workloadKind=Deployment
                  telepresence.io/workloadName=hello
Annotations:      telepresence.io/agent-config:
                    {"agentName":"hello","namespace":"default","logLevel":"debug","workloadName":"hello","workloadKind":"Deployment","managerHost":"traffic-ma...
                  telepresence.io/inject-traffic-agent: enabled
Status:           Running
IP:               10.1.5.196
IPs:
  IP:           10.1.5.196
Controlled By:  ReplicaSet/hello-87f7f548f
Init Containers:
  tel-agent-init:
    Container ID:  docker://f3203943fb97414bee8c3ad4b11237895a8165df7aa39a8f88741b4093e491be
    Image:         local/tel2:2.23.0-alpha.0
    Image ID:      docker-pullable://tel2@sha256:0f81a553bb223f4cfe97973d585586439451e120eb2ed8e35d0fe9266b22fd6d
    Port:          <none>
    Host Port:     <none>
    Args:
      agent-init
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Thu, 03 Apr 2025 09:43:38 +0200
      Finished:     Thu, 03 Apr 2025 09:43:38 +0200
    Ready:          True
    Restart Count:  0
    Environment:
      LOG_LEVEL:     debug
      AGENT_CONFIG:   (v1:metadata.annotations['telepresence.io/agent-config'])
      POD_IP:         (v1:status.podIP)
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgfs5 (ro)
Containers:
  echoserver:
    Container ID:   docker://2ccc7a81bfe7d1f666af7b17c6415631af2f1bfdb6cb147a0ef7a345f528ac49
    Image:          registry.k8s.io/echoserver:1.9
    Image ID:       docker-pullable://registry.k8s.io/echoserver@sha256:10f4dbc8eeeb8806d9b3a261b2473b77ca357b290a15d91ce5a0ca5e6164b535
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Thu, 03 Apr 2025 09:43:39 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgfs5 (ro)
  traffic-agent:
    Container ID:  docker://b4f2279e58aacdf3426c80381c29f7cc214729a7d44a40acd1a566d778d84cfa
    Image:         local/tel2:2.23.0-alpha.0
    Image ID:      docker-pullable://tel2@sha256:0f81a553bb223f4cfe97973d585586439451e120eb2ed8e35d0fe9266b22fd6d
    Port:          9900/TCP
    Host Port:     0/TCP
    Args:
      agent
    State:          Running
      Started:      Thu, 03 Apr 2025 09:43:39 +0200
    Ready:          True
    Restart Count:  0
    Readiness:      exec [/bin/stat /tmp/agent/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      AGENT_CONFIG:         (v1:metadata.annotations['telepresence.io/agent-config'])
      _TEL_AGENT_POD_IP:    (v1:status.podIP)
      _TEL_AGENT_POD_UID:   (v1:metadata.uid)
      _TEL_AGENT_NAME:     hello-87f7f548f-mdg8d (v1:metadata.name)
    Mounts:
      /tel_app_exports from export-volume (rw)
      /tel_app_mounts/echoserver/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgfs5 (ro)
      /tmp from tel-agent-tmp (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zgfs5 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       True 
  ContainersReady             True 
  PodScheduled                True 
Volumes:
  kube-api-access-zgfs5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
  export-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  tel-agent-tmp:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      
    SizeLimit:   <unset>
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age    From               Message
  ----    ------     ----   ----               -------
  Normal  Scheduled  8m     default-scheduler  Successfully assigned default/hello-87f7f548f-mdg8d to lima-rancher-desktop
  Normal  Pulled     7m59s  kubelet            Container image "local/tel2:2.23.0-alpha.0" already present on machine
  Normal  Created    7m59s  kubelet            Created container: tel-agent-init
  Normal  Started    7m59s  kubelet            Started container tel-agent-init
  Normal  Pulled     7m58s  kubelet            Container image "registry.k8s.io/echoserver:1.9" already present on machine
  Normal  Created    7m58s  kubelet            Created container: echoserver
  Normal  Started    7m58s  kubelet            Started container echoserver
  Normal  Pulled     7m58s  kubelet            Container image "local/tel2:2.23.0-alpha.0" already present on machine
  Normal  Created    7m58s  kubelet            Created container: traffic-agent
  Normal  Started    7m58s  kubelet            Started container traffic-agent
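If you only need the container names rather than the full pod description, a jsonpath query gives a compact view; the names below match the description above (your pod name will differ):

$ kubectl get pod hello-87f7f548f-mdg8d -o jsonpath='{.spec.initContainers[*].name} {.spec.containers[*].name}'
tel-agent-init echoserver traffic-agent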

Uninstalling

You can uninstall the traffic-agent from specific deployments or from all deployments, or you can uninstall everything, in which case the traffic-manager and all traffic-agents are removed.

$ telepresence helm uninstall

will remove everything that was automatically installed by telepresence from the cluster.

$ telepresence uninstall hello

will remove the traffic-agent and the configmap entry.
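Either way, a plain kubectl check confirms the result: once the agent is removed the hello pod rolls back to a single container, and after a helm uninstall the traffic-manager resources disappear from the ambassador namespace:

$ kubectl get pod                        # the hello pod shows READY 1/1 again once the agent is gone
$ kubectl -n ambassador get svc,deploy   # empty after telepresence helm uninstall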

Troubleshooting

The telepresence background processes, the daemon and the connector, both produce log files that can be very helpful when problems are encountered. The files are named daemon.log and connector.log. Their location differs depending on the platform:

  • macOS ~/Library/Logs/telepresence
  • Linux ~/.cache/telepresence/logs
  • Windows "%USERPROFILE%\AppData\Local\logs"
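For sharing logs, for example when filing an issue, recent telepresence versions also provide a gather-logs subcommand that bundles the client-side logs (and, depending on flags and version, traffic-manager and traffic-agent logs) into a single zip archive; see telepresence gather-logs --help for the options available in your version:

$ telepresence gather-logs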

How it works

When Telepresence 2 connects to a Kubernetes cluster, it:

  1. Ensures the Traffic Manager is installed in the cluster.
  2. Looks for the relevant subnets in the Kubernetes cluster.
  3. Creates a Virtual Network Interface (VIF).
  4. Assigns the cluster's subnets to the VIF.
  5. Binds itself to the VIF and starts routing traffic to the traffic-manager, or to a traffic-agent if one is present.
  6. Starts listening for and serving DNS requests, passing a selected portion on to the traffic-manager or traffic-agent.

When a locally running application makes a network request to a service in the cluster, Telepresence resolves the name to an address within the cluster. The operating system then sees that the TUN device has an address in the same subnet as the outgoing packets and sends them to tel0. Telepresence sits on the other side of tel0, picks up the packets, and injects them into the cluster through a gRPC connection to the Traffic Manager.
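You can observe this from the workstation side: while a session is active, telepresence status reports the connected context and the routed subnets, and on Linux the VIF shows up as an ordinary network interface named tel0 (on macOS it is a utun device; names vary by platform and version):

$ telepresence status        # daemon state, connected context, routed subnets
$ ip addr show dev tel0      # Linux: the virtual interface created in step 3 above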

Troubleshooting

Visit the troubleshooting section in the Telepresence documentation for more advice.

Or discuss with the community in the CNCF Slack in the #telepresence-oss channel.