archgw
The AI-native proxy server for agents. Arch handles the pesky heavy lifting in building agentic apps - routing prompts to agents or specific tools, clarifying user inputs, unifying access and observability to any LLM - so you can build smarter and ship faster.
Top Related Projects
Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway
Connect, secure, control, and observe services.
The Cloud Native Application Proxy
🦍 The Cloud-Native API Gateway and AI Gateway.
Contour is a Kubernetes ingress controller using Envoy proxy.
Quick Overview
ArchGW is an open-source API gateway designed for microservices architectures. It provides a scalable and secure solution for managing API traffic, authentication, and authorization in distributed systems. ArchGW aims to simplify the process of building and maintaining microservices-based applications.
Pros
- Lightweight and efficient, optimized for microservices architectures
- Built-in support for authentication and authorization
- Easily extensible through plugins and custom modules
- Designed with scalability in mind, suitable for high-traffic applications
Cons
- Relatively new project, may lack extensive community support
- Documentation could be more comprehensive
- Limited out-of-the-box integrations compared to some established API gateways
- May require additional configuration for complex deployment scenarios
Code Examples
// Initialize ArchGW
gw := archgw.New()
// Configure a route
gw.AddRoute("/api/users", "http://user-service:8080")
// Add authentication middleware
gw.Use(archgw.JWTAuth(secretKey))
// Start the gateway
gw.Start(":8000")
This example demonstrates how to initialize ArchGW, configure a route, add JWT authentication middleware, and start the gateway.
// Custom rate limiting middleware
func customRateLimit(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        // Implement custom rate limiting logic here
        if exceedsLimit(r) {
            http.Error(w, "Rate limit exceeded", http.StatusTooManyRequests)
            return
        }
        next.ServeHTTP(w, r)
    })
}
// Add custom middleware to ArchGW
gw.Use(customRateLimit)
This example shows how to create and add a custom rate limiting middleware to ArchGW.
// Configure CORS
corsOptions := archgw.CORSOptions{
    AllowedOrigins: []string{"https://example.com"},
    AllowedMethods: []string{"GET", "POST", "PUT", "DELETE"},
    AllowedHeaders: []string{"Content-Type", "Authorization"},
}
gw.UseCORS(corsOptions)
This example demonstrates how to configure CORS (Cross-Origin Resource Sharing) settings for ArchGW.
Getting Started
To get started with ArchGW, follow these steps:
1. Install ArchGW:
go get github.com/katanemo/archgw
2. Create a new Go file (e.g., main.go) and add the following code:
package main

import "github.com/katanemo/archgw"

func main() {
    gw := archgw.New()
    gw.AddRoute("/api", "http://backend-service:8080")
    gw.Start(":8000")
}
3. Run the application:
go run main.go
This will start ArchGW on port 8000, routing requests from /api to your backend service.
Competitor Comparisons
Manages Envoy Proxy as a Standalone or Kubernetes-based Application Gateway
Pros of Gateway
- More mature and widely adopted project with a larger community
- Extensive documentation and examples for various use cases
- Built on top of the battle-tested Envoy proxy, providing robust performance and features
Cons of Gateway
- Steeper learning curve due to its complexity and extensive feature set
- Heavier resource footprint compared to lighter alternatives
- Configuration can be verbose and require more setup time
Code Comparison
Gateway configuration example:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: envoy
  listeners:
    - name: http
      port: 80
      protocol: HTTP
Archgw configuration example:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: archgw
  listeners:
    - name: http
      port: 80
      protocol: HTTP
Both projects implement the Kubernetes Gateway API, but Gateway offers more advanced features and customization options, while Archgw aims for simplicity and ease of use. Gateway is better suited for complex, high-traffic environments, whereas Archgw may be more appropriate for smaller-scale deployments or teams looking for a lightweight solution.
Connect, secure, control, and observe services.
Pros of Istio
- Mature and widely adopted service mesh solution with extensive features
- Strong community support and regular updates
- Comprehensive traffic management and security capabilities
Cons of Istio
- Complex setup and configuration process
- Higher resource overhead compared to lighter alternatives
- Steep learning curve for new users
Code Comparison
Istio configuration example:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews-route
spec:
  hosts:
    - reviews.prod.svc.cluster.local
  http:
    - route:
        - destination:
            host: reviews.prod.svc.cluster.local
            subset: v2
ArchGW configuration example:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: http-route
spec:
  parentRefs:
    - name: example-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
While both projects aim to improve service networking, Istio offers a more comprehensive service mesh solution with advanced features, whereas ArchGW focuses on providing a simpler, Kubernetes-native API Gateway. Istio's configuration tends to be more complex, while ArchGW aims for a more straightforward approach aligned with Kubernetes Gateway API standards.
The Cloud Native Application Proxy
Pros of Traefik
- More mature and widely adopted project with a larger community
- Extensive feature set including automatic HTTPS, service discovery, and load balancing
- Better documentation and examples for various use cases
Cons of Traefik
- Can be complex to configure for advanced scenarios
- Higher resource usage compared to simpler reverse proxies
- Steeper learning curve for newcomers
Code Comparison
Traefik configuration (YAML):
http:
  routers:
    my-router:
      rule: "Host(`example.com`)"
      service: my-service
  services:
    my-service:
      loadBalancer:
        servers:
          - url: "http://backend1:8080"
          - url: "http://backend2:8080"
ArchGW configuration (JSON):
{
  "routes": [
    {
      "path": "/",
      "upstream": "http://backend:8080"
    }
  ]
}
Summary
Traefik is a more feature-rich and mature reverse proxy solution, offering advanced capabilities like automatic HTTPS and service discovery. However, it can be more complex to set up and may consume more resources. ArchGW, on the other hand, appears to be a simpler solution with a focus on API gateway functionality. The choice between the two depends on specific project requirements and the desired level of complexity.
🦍 The Cloud-Native API Gateway and AI Gateway.
Pros of Kong
- Mature and widely adopted API gateway with extensive documentation
- Large ecosystem of plugins and integrations
- Supports multiple deployment options (Kubernetes, cloud, on-premises)
Cons of Kong
- Can be complex to set up and configure for smaller projects
- Resource-intensive, may require significant infrastructure
- Steeper learning curve for newcomers
Code Comparison
Kong (Lua):
local plugin = {
  name = "my-custom-plugin",
  priority = 1000,
  version = "1.0",
}

function plugin:access(conf)
  kong.service.request.set_header("X-Custom-Header", "Hello World")
end

return plugin
Archgw (Go):
func (p *Plugin) ProcessRequest(ctx context.Context, req *http.Request) (*http.Request, error) {
    req.Header.Set("X-Custom-Header", "Hello World")
    return req, nil
}
Key Differences
- Kong is written in Lua and uses OpenResty, while Archgw is written in Go
- Kong offers a more extensive feature set, but Archgw may be simpler for basic use cases
- Archgw focuses on cloud-native environments, while Kong supports various deployment options
Use Cases
- Kong: Large-scale enterprise applications with complex API management needs
- Archgw: Cloud-native applications requiring a lightweight, easy-to-deploy gateway
Contour is a Kubernetes ingress controller using Envoy proxy.
Pros of Contour
- More mature and widely adopted project with a larger community
- Offers advanced traffic routing and load balancing features
- Supports multiple protocols including HTTP, HTTPS, and gRPC
Cons of Contour
- More complex setup and configuration compared to ArchGW
- Requires more resources to run and maintain
- May be overkill for simpler use cases or smaller deployments
Code Comparison
ArchGW configuration example:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: archgw
  listeners:
    - name: http
      port: 80
      protocol: HTTP
Contour configuration example:
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: example-proxy
spec:
  virtualhost:
    fqdn: example.com
  routes:
    - conditions:
        - prefix: /
      services:
        - name: example-service
          port: 80
The code examples show that ArchGW uses the standard Gateway API, while Contour uses its custom HTTPProxy resource for configuration. This difference reflects Contour's more advanced features and flexibility, but also its increased complexity compared to ArchGW's simpler approach.
README
The intelligent (edge and LLM) proxy server for agentic applications.
Move faster by letting Arch handle the pesky heavy lifting in building agents: fast input clarification, agent routing, seamless integration of prompts with tools for common tasks, and unified access and observability of LLMs.
Quickstart • Demos • Build agentic apps with Arch • Use Arch as an LLM router • Documentation • Contact
Overview
Past the thrill of an AI demo, have you found yourself hitting these walls? You know, the all too familiar ones:
- You go from one BIG prompt to specialized prompts, but get stuck building routing and handoff code?
- You want to use new LLMs, but struggle to quickly and safely add them without writing integration code?
- You're bogged down with prompt engineering just to clarify user intent and validate inputs effectively?
- You're wasting cycles choosing and integrating code for observability instead of it happening transparently?
And you think to yourself, can't I move faster by focusing on higher-level objectives in a language/framework agnostic way? Well, you can! Arch Gateway was built by the contributors of Envoy Proxy with the belief that:
Prompts are nuanced and opaque user requests, which require the same capabilities as traditional HTTP requests including secure handling, intelligent routing, robust observability, and integration with backend (API) systems to improve speed and accuracy for common agentic scenarios - all outside core application logic.
Core Features:
- Routing: engineered with purpose-built LLMs for fast (<100ms) agent routing and hand-off scenarios
- Tools Use: for common agentic scenarios, let Arch instantly clarify and convert prompts to tools/API calls
- Guardrails: centrally configure and prevent harmful outcomes and ensure safe user interactions
- Access to LLMs: centralize access and traffic to LLMs with smart retries for continuous availability
- Observability: W3C-compatible request tracing and LLM metrics that instantly plug in with popular tools
- Built on Envoy: Arch runs alongside app servers as a containerized process, building on Envoy's proven HTTP management and scalability features to handle ingress and egress traffic related to prompts and LLMs.
High-Level Sequence Diagram:
Jump to our docs to learn how you can use Arch to improve the speed, security and personalization of your GenAI apps.
[!IMPORTANT] Today, the function calling LLM (Arch-Function) designed for agentic and RAG scenarios is hosted free of charge in the US-central region. To offer consistent latencies and throughput, and to manage our expenses, we will soon enable access to the hosted version via developer keys, and give you the option to run that LLM locally. For more details, see issue #258.
Contact
To get in touch with us, please join our discord server. We will be monitoring that actively and offering support there.
Demos
- Sample App: Weather Forecast Agent - A sample agentic weather forecasting app that highlights core function calling capabilities of Arch.
- Sample App: Network Operator Agent - A simple network device switch operator agent that can retrieve device statistics and reboot devices.
- Use Case: Connecting to SaaS APIs - Connect 3rd party SaaS APIs to your agentic chat experience.
Quickstart
Follow this quickstart guide to use Arch gateway to build a simple AI agent. Later in the section, we will see how you can use Arch Gateway to manage access keys, provide unified access to upstream LLMs, and provide end-to-end observability.
Prerequisites
Before you begin, ensure you have the following:
- Docker (v24)
- Docker Compose (v2.29)
- Python (v3.12)
Arch's CLI allows you to manage and interact with the Arch gateway efficiently. To install the CLI, simply run the following command:
[!TIP] We recommend that developers create a new Python virtual environment to isolate dependencies before installing Arch. This ensures that archgw and its dependencies do not interfere with other packages on your system.
$ python -m venv venv
$ source venv/bin/activate # On Windows, use: venv\Scripts\activate
$ pip install archgw==0.2.4
Build AI Agent with Arch Gateway
In the following quickstart, we will show you how easy it is to build an AI agent with Arch gateway. We will build a currency exchange agent in a few simple steps. For this demo, we will use https://api.frankfurter.dev/ to fetch the latest exchange rates, assuming USD as the base currency.
Step 1. Create arch config file
Create an arch_config.yaml file with the following content:
version: v0.1

listener:
  address: 0.0.0.0
  port: 10000
  message_format: huggingface
  connect_timeout: 0.005s

llm_providers:
  - name: gpt-4o
    access_key: $OPENAI_API_KEY
    provider: openai
    model: gpt-4o

system_prompt: |
  You are a helpful assistant.

prompt_guards:
  input_guards:
    jailbreak:
      on_exception:
        message: Looks like you're curious about my abilities, but I can only provide assistance for currency exchange.

prompt_targets:
  - name: currency_exchange
    description: Get currency exchange rate from USD to other currencies
    parameters:
      - name: currency_symbol
        description: the currency that needs conversion
        required: true
        type: str
        in_path: true
    endpoint:
      name: frankfurther_api
      path: /v1/latest?base=USD&symbols={currency_symbol}
    system_prompt: |
      You are a helpful assistant. Show me the currency symbol you want to convert from USD.

  - name: get_supported_currencies
    description: Get list of supported currencies for conversion
    endpoint:
      name: frankfurther_api
      path: /v1/currencies

endpoints:
  frankfurther_api:
    endpoint: api.frankfurter.dev:443
    protocol: https
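A note on the in_path parameter above: the {currency_symbol} placeholder in the endpoint path is filled in with the value extracted from the user's prompt. The sketch below illustrates that substitution in Python; the function name and logic are illustrative, not Arch internals.

```python
# The template string comes from the prompt target's endpoint above.
PATH_TEMPLATE = "/v1/latest?base=USD&symbols={currency_symbol}"

def render_path(template: str, params: dict) -> str:
    """Substitute extracted prompt parameters into the endpoint path."""
    return template.format(**params)

# A prompt like "what is the currency rate for gbp?" yields currency_symbol=GBP
print(render_path(PATH_TEMPLATE, {"currency_symbol": "GBP"}))
# /v1/latest?base=USD&symbols=GBP
```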
Step 2. Start arch gateway with currency conversion config
$ archgw up arch_config.yaml
2024-12-05 16:56:27,979 - cli.main - INFO - Starting archgw cli version: 0.1.5
...
2024-12-05 16:56:28,485 - cli.utils - INFO - Schema validation successful!
2024-12-05 16:56:28,485 - cli.main - INFO - Starting arch model server and arch gateway
...
2024-12-05 16:56:51,647 - cli.core - INFO - Container is healthy!
Once the gateway is up, you can start interacting with it at port 10000 using the OpenAI chat completions API. Sample queries include "what is currency rate for gbp?" or "show me list of currencies for conversion".
Step 3. Interacting with gateway using curl command
Here is a sample curl command you can use to interact with the gateway:
$ curl --header 'Content-Type: application/json' \
--data '{"messages": [{"role": "user","content": "what is exchange rate for gbp"}]}' \
http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"As of the date provided in your context, December 5, 2024, the exchange rate for GBP (British Pound) from USD (United States Dollar) is 0.78558. This means that 1 USD is equivalent to 0.78558 GBP."
And to get the list of supported currencies:
$ curl --header 'Content-Type: application/json' \
--data '{"messages": [{"role": "user","content": "show me list of currencies that are supported for conversion"}]}' \
http://localhost:10000/v1/chat/completions | jq ".choices[0].message.content"
"Here is a list of the currencies that are supported for conversion from USD, along with their symbols:\n\n1. AUD - Australian Dollar\n2. BGN - Bulgarian Lev\n3. BRL - Brazilian Real\n4. CAD - Canadian Dollar\n5. CHF - Swiss Franc\n6. CNY - Chinese Renminbi Yuan\n7. CZK - Czech Koruna\n8. DKK - Danish Krone\n9. EUR - Euro\n10. GBP - British Pound\n11. HKD - Hong Kong Dollar\n12. HUF - Hungarian Forint\n13. IDR - Indonesian Rupiah\n14. ILS - Israeli New Sheqel\n15. INR - Indian Rupee\n16. ISK - Icelandic Króna\n17. JPY - Japanese Yen\n18. KRW - South Korean Won\n19. MXN - Mexican Peso\n20. MYR - Malaysian Ringgit\n21. NOK - Norwegian Krone\n22. NZD - New Zealand Dollar\n23. PHP - Philippine Peso\n24. PLN - Polish ZÅoty\n25. RON - Romanian Leu\n26. SEK - Swedish Krona\n27. SGD - Singapore Dollar\n28. THB - Thai Baht\n29. TRY - Turkish Lira\n30. USD - United States Dollar\n31. ZAR - South African Rand\n\nIf you want to convert USD to any of these currencies, you can select the one you are interested in."
Use Arch Gateway as LLM Router
Step 1. Create arch config file
Arch operates based on a configuration file where you can define LLM providers, prompt targets, guardrails, etc. Below is an example configuration that defines OpenAI and Mistral LLM providers.
Create an arch_config.yaml file with the following content:
version: v0.1

listener:
  address: 0.0.0.0
  port: 10000
  message_format: huggingface
  connect_timeout: 0.005s

llm_providers:
  - name: gpt-4o
    access_key: $OPENAI_API_KEY
    provider: openai
    model: gpt-4o
    default: true

  - name: ministral-3b
    access_key: $MISTRAL_API_KEY
    provider: openai
    model: ministral-3b-latest
Step 2. Start arch gateway
Once the config file is created, ensure that you have environment variables set for MISTRAL_API_KEY and OPENAI_API_KEY (or that they are defined in a .env file).
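If you go the .env route, the file can sit next to arch_config.yaml and might look like this (placeholder values shown, not real keys):

```
OPENAI_API_KEY=sk-your-openai-key
MISTRAL_API_KEY=your-mistral-key
```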
Start the arch gateway:
$ archgw up arch_config.yaml
2024-12-05 11:24:51,288 - cli.main - INFO - Starting archgw cli version: 0.1.5
2024-12-05 11:24:51,825 - cli.utils - INFO - Schema validation successful!
2024-12-05 11:24:51,825 - cli.main - INFO - Starting arch model server and arch gateway
...
2024-12-05 11:25:16,131 - cli.core - INFO - Container is healthy!
Step 3: Interact with LLM
Step 3.1: Using OpenAI python client
Make outbound calls via Arch gateway
from openai import OpenAI

# Use the OpenAI client as usual
client = OpenAI(
    # No need to set a specific openai.api_key since it's configured in Arch's gateway
    api_key="--",
    # Set the OpenAI API base URL to the Arch gateway endpoint
    base_url="http://127.0.0.1:12000/v1",
)

response = client.chat.completions.create(
    # we select the model from the arch_config file
    model="None",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)

print("OpenAI Response:", response.choices[0].message.content)
Step 3.2: Using curl command
$ curl --header 'Content-Type: application/json' \
--data '{"messages": [{"role": "user","content": "What is the capital of France?"}]}' \
http://localhost:12000/v1/chat/completions
{
  ...
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      ...
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      }
    }
  ],
  ...
}
You can override model selection using the x-arch-llm-provider-hint header. For example, to use Mistral, issue the following curl command:
$ curl --header 'Content-Type: application/json' \
--header 'x-arch-llm-provider-hint: ministral-3b' \
--data '{"messages": [{"role": "user","content": "What is the capital of France?"}]}' \
http://localhost:12000/v1/chat/completions
{
  ...
  "model": "ministral-3b-latest",
  "choices": [
    {
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris. It is the most populous city in France and is known for its iconic landmarks such as the Eiffel Tower, the Louvre Museum, and Notre-Dame Cathedral. Paris is also a major global center for art, fashion, gastronomy, and culture."
      },
      ...
    }
  ],
  ...
}
Observability
Arch is designed to support best-in-class observability by supporting open standards. Please read our docs on observability for more details on tracing, metrics, and logs. The screenshot below is from our integration with Signoz (among others).
Debugging
When debugging issues or errors, application logs and access logs provide key information to give you more context on what's going on with the system. The Arch gateway runs at the info log level; the following is typical output you might see during an interaction between a developer and the gateway:
$ archgw up --service archgw --foreground
...
[2025-03-26 18:32:01.350][26][info] prompt_gateway: on_http_request_body: sending request to model server
[2025-03-26 18:32:01.851][26][info] prompt_gateway: on_http_call_response: model server response received
[2025-03-26 18:32:01.852][26][info] prompt_gateway: on_http_call_response: dispatching api call to developer endpoint: weather_forecast_service, path: /weather, method: POST
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: developer api call response received: status code: 200
[2025-03-26 18:32:01.882][26][info] prompt_gateway: on_http_call_response: sending request to upstream llm
[2025-03-26 18:32:01.883][26][info] llm_gateway: on_http_request_body: provider: gpt-4o-mini, model requested: None, model selected: gpt-4o-mini
[2025-03-26 18:32:02.818][26][info] llm_gateway: on_http_response_body: time to first token: 1468ms
[2025-03-26 18:32:04.532][26][info] llm_gateway: on_http_response_body: request latency: 3183ms
...
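If you want to pull the latency figures out of these logs programmatically, here is a small Python sketch (assuming the llm_gateway log line format shown above; the regex and helper function are illustrative):

```python
import re

# Matches the latency metrics emitted by the llm_gateway log lines above
LATENCY_RE = re.compile(r"(time to first token|request latency): (\d+)ms")

def extract_latencies(log_lines):
    """Return a dict mapping metric name to latency in milliseconds."""
    metrics = {}
    for line in log_lines:
        match = LATENCY_RE.search(line)
        if match:
            metrics[match.group(1)] = int(match.group(2))
    return metrics

logs = [
    "[2025-03-26 18:32:02.818][26][info] llm_gateway: on_http_response_body: time to first token: 1468ms",
    "[2025-03-26 18:32:04.532][26][info] llm_gateway: on_http_response_body: request latency: 3183ms",
]
print(extract_latencies(logs))  # {'time to first token': 1468, 'request latency': 3183}
```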
The log level can be changed to debug for more details. To enable debug logs, edit arch/Dockerfile and change the log level from --component-log-level wasm:info to --component-log-level wasm:debug. Then rebuild the Docker image and restart the Arch gateway using the following commands:
# make sure you are at the root of the repo
$ archgw build
# go to your service that has arch_config.yaml file and issue following command,
$ archgw up --service archgw --foreground
Contribution
We would love feedback on our Roadmap, and we welcome contributions to Arch! Whether you're fixing bugs, adding new features, improving documentation, or creating tutorials, your help is much appreciated. Please visit our Contribution Guide for more details.