
twitter/twemproxy

A fast, light-weight proxy for memcached and redis


Top Related Projects

  • twitter/twemoji: Emoji for everyone. https://twemoji.twitter.com/
  • twitter/finagle: A fault tolerant, protocol-agnostic RPC system
  • twitter/hogan.js: A compiler for the Mustache templating language
  • twitter/typeahead.js: A fast and fully-featured autocomplete library
  • twitter/scala_school: Lessons in the fundamentals of Scala
  • twitter/snowflake: A network service for generating unique ID numbers at high scale with some simple guarantees

Quick Overview

Twemproxy (pronounced "two-em-proxy"), also known as nutcracker, is a fast and lightweight proxy for memcached and redis protocols. It was built primarily to reduce the number of connections to the caching servers in large-scale production environments. Twemproxy is designed to handle high-throughput, low-latency scenarios and is used by many large companies to improve their caching infrastructure.

Pros

  • Reduces the number of connections to backend caching servers
  • Supports automatic sharding across multiple servers
  • Provides connection pooling and request pipelining
  • Offers consistent hashing and distribution

Cons

  • Limited support for some Redis commands
  • May introduce a small latency overhead
  • Requires additional configuration and maintenance
  • No longer actively maintained (development has largely stalled since the 0.5.0 release)

Getting Started

To get started with Twemproxy, follow these steps:

  1. Clone the repository:

    git clone https://github.com/twitter/twemproxy.git
    
  2. Build Twemproxy:

    cd twemproxy
    autoreconf -fvi
    ./configure
    make
    
  3. Create a configuration file (e.g., nutcracker.yml):

    alpha:
      listen: 127.0.0.1:22121
      hash: fnv1a_64
      distribution: ketama
      auto_eject_hosts: true
      redis: true
      server_retry_timeout: 2000
      server_failure_limit: 1
      servers:
       - 127.0.0.1:6379:1
    
  4. Run Twemproxy:

    ./src/nutcracker -c nutcracker.yml
    

Now Twemproxy is running and ready to proxy connections to your Redis server.
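To sanity-check the setup, you can point a standard Redis client at the proxy port (this assumes redis-cli is installed and a Redis server is listening on 127.0.0.1:6379, as configured above):

$ redis-cli -p 22121 set greeting hello
OK
$ redis-cli -p 22121 get greeting
"hello"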

Competitor Comparisons

Twemoji: Emoji for everyone (https://twemoji.twitter.com/)

Pros of Twemoji

  • Focuses on emoji rendering and standardization across platforms
  • Widely adopted and used by many applications and websites
  • Regularly updated with new emoji releases

Cons of Twemoji

  • Limited to emoji-related functionality
  • Requires additional implementation for full text processing

Code Comparison

Twemproxy (configuration example):

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1

Twemoji (usage example):

import twemoji from 'twemoji';

const text = 'I 🧡 emoji!';
const parsedText = twemoji.parse(text);

console.log(parsedText);
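Here twemoji.parse returns the input string with each emoji replaced by an <img> tag referencing the corresponding Twemoji image asset, so the emoji renders identically across platforms.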

Key Differences

Twemproxy is a fast and lightweight proxy for memcached and redis protocols, designed to reduce the connection count on the backend caching servers. It's primarily used for improving the scalability and performance of distributed caching systems.

Twemoji, on the other hand, is a library for parsing and rendering emoji characters consistently across different platforms and devices. It's focused on standardizing emoji appearance and ensuring compatibility across various systems and applications.

While both projects are maintained by Twitter, they serve entirely different purposes and are not directly comparable in terms of functionality. Twemproxy is a backend infrastructure tool, while Twemoji is a frontend library for handling emoji rendering.

Finagle: A fault tolerant, protocol-agnostic RPC system

Pros of Finagle

  • More versatile, supporting multiple protocols (HTTP, Thrift, Mux) vs. Twemproxy's focus on memcached and Redis
  • Offers advanced features like load balancing, circuit breaking, and service discovery
  • Actively maintained with regular updates and a larger community

Cons of Finagle

  • Steeper learning curve due to its complexity and Scala-based architecture
  • Higher resource consumption compared to the lightweight Twemproxy
  • May be overkill for simple caching scenarios where Twemproxy excels

Code Comparison

Twemproxy configuration example:

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  servers:
   - 127.0.0.1:6379:1

Finagle client example (Scala):

import com.twitter.finagle.{Http, Service}
import com.twitter.finagle.http.{Request, Response, Method}
import com.twitter.util.Future

val client: Service[Request, Response] = Http.client
  .withTls("api.example.com")
  .newService("api.example.com:443")

val request = Request(Method.Get, "/")
val response: Future[Response] = client(request)

Hogan.js: A compiler for the Mustache templating language

Pros of Hogan.js

  • Lightweight and fast JavaScript templating engine
  • Supports pre-compilation for improved performance
  • Easy integration with both client-side and server-side JavaScript

Cons of Hogan.js

  • Limited functionality compared to more feature-rich templating engines
  • Requires additional setup for complex rendering scenarios
  • Less active development and community support

Code Comparison

Hogan.js example:

var Hogan = require('hogan.js'); // hogan.js npm package

var template = Hogan.compile('Hello {{name}}!');
var output = template.render({ name: 'World' });
console.log(output); // Hello World!

Twemproxy example (configuration):

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  servers:
   - 127.0.0.1:6379:1

Key Differences

  • Purpose: Hogan.js is a templating engine, while Twemproxy is a proxy for memcached and redis protocols
  • Language: Hogan.js is written in JavaScript, Twemproxy in C
  • Use case: Hogan.js is used for rendering templates in web applications, Twemproxy for scaling connections to caching servers
  • Performance focus: Hogan.js optimizes template rendering, Twemproxy improves cache connection management
  • Deployment: Hogan.js is typically included in web projects, Twemproxy runs as a separate service

Both projects are open-source contributions from Twitter, but they serve entirely different purposes in the development ecosystem.

typeahead.js: A fast and fully-featured autocomplete library

Pros of typeahead.js

  • Enhances user experience with autocomplete functionality for web applications
  • Lightweight and easy to integrate into existing projects
  • Highly customizable with various options for styling and behavior

Cons of typeahead.js

  • Limited to frontend functionality, unlike twemproxy's backend focus
  • Requires JavaScript to function, which may not be ideal for all use cases
  • May have performance issues with large datasets if not optimized properly

Code Comparison

typeahead.js:

// substringMatcher(states) is assumed to return a matching function,
// as in the typeahead.js documentation examples
$('#search-input').typeahead({
  hint: true,
  highlight: true,
  minLength: 1
},
{
  name: 'states',
  source: substringMatcher(states)
});

twemproxy:

static void
core_core(void *arg)
{
    struct context *ctx = arg;
    rstatus_t status;

    /* simplified sketch of the proxy's main loop: process events
     * until an error terminates it (not verbatim twemproxy source) */
    for (;;) {
        status = core_process(ctx);
        if (status != NC_OK) {
            break;
        }
    }
}

While typeahead.js focuses on frontend autocomplete functionality, twemproxy is a backend proxy for memcached and redis. The code snippets demonstrate their different purposes, with typeahead.js using JavaScript for UI interactions and twemproxy using C for server-side operations.

scala_school: Lessons in the Fundamentals of Scala

Pros of scala_school

  • Educational resource for learning Scala programming language
  • Comprehensive curriculum covering Scala basics to advanced topics
  • Regularly updated with community contributions

Cons of scala_school

  • Not a functional software tool like twemproxy
  • Limited practical application beyond learning purposes
  • Requires more time investment to gain practical benefits

Code comparison

scala_school (Scala example):

def factorial(n: Int): Int = {
  if (n == 0) 1
  else n * factorial(n - 1)
}

twemproxy (C example):

/* simplified sketch of proxy initialization (not verbatim twemproxy source) */
rstatus_t
proxy_init(struct instance *nci)
{
    rstatus_t status;

    status = core_init();
    if (status != NC_OK) {
        return status;
    }
    /* ... (additional initialization code) */
    return NC_OK;
}

Key differences

  • Purpose: scala_school is an educational resource, while twemproxy is a functional proxy server
  • Language: scala_school focuses on Scala, twemproxy is written in C
  • Audience: scala_school targets learners, twemproxy is for system administrators and developers
  • Functionality: scala_school provides learning materials, twemproxy offers performance improvements for database connections
  • Maintenance: scala_school is community-driven, twemproxy was developed and maintained by Twitter's engineering team

Snowflake: A network service for generating unique ID numbers at high scale with some simple guarantees

Pros of Snowflake

  • Designed for generating unique IDs, useful for distributed systems
  • Provides a scalable solution for ID generation across multiple data centers
  • Implements a time-based component, allowing for rough sorting of IDs

Cons of Snowflake

  • More specialized in functionality compared to Twemproxy's general-purpose proxy
  • Requires careful configuration and management of worker nodes
  • May have higher implementation complexity for simple use cases

Code Comparison

Snowflake (ID generation):

// Simplified sketch of Snowflake's ID generation (not verbatim source)
def nextId(): Long = this.synchronized {
  val currentMillis = System.currentTimeMillis()
  if (currentMillis > lastTimestamp) {
    sequence = 0L  // reset the per-millisecond sequence
  }
  // ... (combine timestamp, worker id and sequence into the 64-bit ID)
}

Twemproxy (proxy configuration):

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1

Summary

Snowflake focuses on distributed ID generation, while Twemproxy is a fast proxy for memcached and redis protocols. Snowflake offers scalable ID generation across data centers, whereas Twemproxy provides efficient request routing and load balancing for caching systems. The choice between them depends on the specific needs of the project: unique ID generation (Snowflake) or optimized caching proxy (Twemproxy).


README

twemproxy (nutcracker)

twemproxy (pronounced "two-em-proxy"), aka nutcracker, is a fast and lightweight proxy for the memcached and redis protocols. It was built primarily to reduce the number of connections to the caching servers on the backend. This, together with protocol pipelining and sharding, enables you to horizontally scale your distributed caching architecture.

Build

To build twemproxy 0.5.0+ from distribution tarball:

$ ./configure
$ make
$ sudo make install

To build twemproxy 0.5.0+ from distribution tarball in debug mode:

$ CFLAGS="-ggdb3 -O0" ./configure --enable-debug=full
$ make
$ sudo make install

To build twemproxy from source with debug logs enabled and assertions enabled:

$ git clone git@github.com:twitter/twemproxy.git
$ cd twemproxy
$ autoreconf -fvi
$ ./configure --enable-debug=full
$ make
$ src/nutcracker -h

A quick checklist:

  • Use a newer version of gcc (older versions have known problems)
  • Use CFLAGS="-O1" ./configure && make
  • Use CFLAGS="-O3 -fno-strict-aliasing" ./configure && make
  • autoreconf -fvi && ./configure requires automake and libtool to be installed

make check will run unit tests.

Older Releases

Distribution tarballs for older twemproxy releases (<= 0.4.1) can be found on Google Drive. The build steps are the same (./configure; make; sudo make install).

Features

  • Fast.
  • Lightweight.
  • Maintains persistent server connections.
  • Keeps connection count on the backend caching servers low.
  • Enables pipelining of requests and responses.
  • Supports proxying to multiple servers.
  • Supports multiple server pools simultaneously.
  • Shards data automatically across multiple servers.
  • Implements the complete memcached ASCII and redis protocols.
  • Easy configuration of server pools through a YAML file.
  • Supports multiple hashing modes including consistent hashing and distribution.
  • Can be configured to disable nodes on failures.
  • Observability via stats exposed on the stats monitoring port.
  • Works with Linux, *BSD, OS X and SmartOS (Solaris).

Help

Usage: nutcracker [-?hVdDt] [-v verbosity level] [-o output file]
                  [-c conf file] [-s stats port] [-a stats addr]
                  [-i stats interval] [-p pid file] [-m mbuf size]

Options:
  -h, --help             : this help
  -V, --version          : show version and exit
  -t, --test-conf        : test configuration for syntax errors and exit
  -d, --daemonize        : run as a daemon
  -D, --describe-stats   : print stats description and exit
  -v, --verbose=N        : set logging level (default: 5, min: 0, max: 11)
  -o, --output=S         : set logging file (default: stderr)
  -c, --conf-file=S      : set configuration file (default: conf/nutcracker.yml)
  -s, --stats-port=N     : set stats monitoring port (default: 22222)
  -a, --stats-addr=S     : set stats monitoring ip (default: 0.0.0.0)
  -i, --stats-interval=N : set stats aggregation interval in msec (default: 30000 msec)
  -p, --pid-file=S       : set pid file (default: off)
  -m, --mbuf-size=N      : set size of mbuf chunk in bytes (default: 16384 bytes)
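For example, a production-style invocation might daemonize the proxy and write its log and pid files to conventional locations (the paths below are illustrative, not defaults):

$ nutcracker -d -c conf/nutcracker.yml -o /var/log/nutcracker.log -p /var/run/nutcracker.pid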

Zero Copy

In twemproxy, all the memory for incoming requests and outgoing responses is allocated in mbufs. Mbufs enable zero-copy because the same buffer on which a request was received from the client is used for forwarding it to the server. Similarly, the same mbuf on which a response was received from the server is used for forwarding it to the client.

Furthermore, memory for mbufs is managed using a reuse pool. This means that once an mbuf is allocated, it is not deallocated, but just put back into the reuse pool. By default each mbuf chunk is 16K bytes in size. There is a trade-off between the mbuf size and the number of concurrent connections twemproxy can support. A large mbuf size reduces the number of read syscalls twemproxy makes when reading requests or responses. However, with a large mbuf size, every active connection uses up 16K bytes of buffer, which can be an issue when twemproxy is handling a large number of concurrent client connections. In that case, you should set the chunk size to a small value like 512 bytes using the -m or --mbuf-size=N argument.
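For example, a deployment expecting many thousands of concurrent client connections might start twemproxy with a 512-byte mbuf chunk:

$ nutcracker -c conf/nutcracker.yml -m 512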

Configuration

Twemproxy can be configured through a YAML file specified by the -c or --conf-file command-line argument on process start. The configuration file specifies the server pools and the servers within each pool that twemproxy manages. Twemproxy parses and understands the following keys:

  • listen: The listening address and port (name:port or ip:port) or an absolute path to sock file (e.g. /var/run/nutcracker.sock) for this server pool.
  • client_connections: The maximum number of connections allowed from redis clients. Unlimited by default, though OS-imposed limitations will still apply.
  • hash: The name of the hash function. Possible values are:
    • one_at_a_time
    • md5
    • crc16
    • crc32 (crc32 implementation compatible with libmemcached)
    • crc32a (correct crc32 implementation as per the spec)
    • fnv1_64
    • fnv1a_64 (default)
    • fnv1_32
    • fnv1a_32
    • hsieh
    • murmur
    • jenkins
  • hash_tag: A two character string that specifies the part of the key used for hashing. E.g. "{}" or "$$". Hash tags enable mapping different keys to the same server as long as the part of the key within the tag is the same. For example, with hash_tag "{}", the keys user:{123}:name and user:{123}:email both hash on 123 and therefore map to the same server.
  • distribution: The key distribution mode for choosing backend servers based on the computed hash value. Possible values are:
    • ketama (consistent hashing)
    • modula (hash modulo the number of servers)
    • random
  • timeout: The timeout value in msec to wait to establish a connection to the server or to receive a response from a server. By default, we wait indefinitely.
  • backlog: The TCP backlog argument. Defaults to 512.
  • tcpkeepalive: A boolean value that controls if tcp keepalive is enabled for connections to servers. Defaults to false.
  • preconnect: A boolean value that controls if twemproxy should preconnect to all the servers in this pool on process start. Defaults to false.
  • redis: A boolean value that controls if a server pool speaks redis or memcached protocol. Defaults to false.
  • redis_auth: Authenticate to the Redis server on connect.
  • redis_db: The DB number to use on the pool servers. Defaults to 0. Note: Twemproxy will always present itself to clients as DB 0.
  • server_connections: The maximum number of connections that can be opened to each server. By default, we open at most 1 server connection.
  • auto_eject_hosts: A boolean value that controls if a server should be ejected temporarily when it fails consecutively server_failure_limit times. See liveness recommendations for more information. Defaults to false.
  • server_retry_timeout: The timeout value in msec to wait for before retrying on a temporarily ejected server, when auto_eject_hosts is set to true. Defaults to 30000 msec.
  • server_failure_limit: The number of consecutive failures on a server that would lead to it being temporarily ejected when auto_eject_hosts is set to true. Defaults to 2.
  • servers: A list of server address, port and weight (name:port:weight or ip:port:weight) for this server pool.

For example, the configuration file in conf/nutcracker.yml, also shown below, configures 5 server pools named alpha, beta, gamma, delta and omega. Clients that intend to send requests to one of the 10 servers in pool delta connect to port 22124 on 127.0.0.1. Clients that intend to send requests to one of the 2 servers in pool omega connect to the unix path /tmp/gamma. Requests sent to pools alpha and omega have no timeout and might require timeout functionality to be implemented on the client side. On the other hand, requests sent to pools beta, gamma and delta time out after 400 msec, 400 msec and 100 msec respectively when no response is received from the server. Of the 5 server pools, only pools alpha, gamma and delta are configured to use server ejection and hence are resilient to server failures. All 5 server pools use ketama consistent hashing for key distribution, with the key hasher for pools alpha, beta, gamma and delta set to fnv1a_64, while that for pool omega is set to hsieh. Also, only pool beta uses node names for consistent hashing, while pools alpha, gamma, delta and omega use 'host:port:weight' for consistent hashing. Finally, only pools alpha and beta speak the redis protocol, while pools gamma, delta and omega speak the memcached protocol.

alpha:
  listen: 127.0.0.1:22121
  hash: fnv1a_64
  distribution: ketama
  auto_eject_hosts: true
  redis: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:6379:1

beta:
  listen: 127.0.0.1:22122
  hash: fnv1a_64
  hash_tag: "{}"
  distribution: ketama
  auto_eject_hosts: false
  timeout: 400
  redis: true
  servers:
   - 127.0.0.1:6380:1 server1
   - 127.0.0.1:6381:1 server2
   - 127.0.0.1:6382:1 server3
   - 127.0.0.1:6383:1 server4

gamma:
  listen: 127.0.0.1:22123
  hash: fnv1a_64
  distribution: ketama
  timeout: 400
  backlog: 1024
  preconnect: true
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 3
  servers:
   - 127.0.0.1:11212:1
   - 127.0.0.1:11213:1

delta:
  listen: 127.0.0.1:22124
  hash: fnv1a_64
  distribution: ketama
  timeout: 100
  auto_eject_hosts: true
  server_retry_timeout: 2000
  server_failure_limit: 1
  servers:
   - 127.0.0.1:11214:1
   - 127.0.0.1:11215:1
   - 127.0.0.1:11216:1
   - 127.0.0.1:11217:1
   - 127.0.0.1:11218:1
   - 127.0.0.1:11219:1
   - 127.0.0.1:11220:1
   - 127.0.0.1:11221:1
   - 127.0.0.1:11222:1
   - 127.0.0.1:11223:1

omega:
  listen: /tmp/gamma 0666
  hash: hsieh
  distribution: ketama
  auto_eject_hosts: false
  servers:
   - 127.0.0.1:11214:100000
   - 127.0.0.1:11215:1

Finally, to make writing a syntactically correct configuration file easier, twemproxy provides a command-line argument -t or --test-conf that can be used to test the YAML configuration file for any syntax error.
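For example, assuming the pools above are saved in conf/nutcracker.yml:

$ nutcracker -t -c conf/nutcracker.yml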

Observability

Observability in twemproxy is through logs and stats.

Twemproxy exposes stats at the granularity of server pools and the servers within each pool through the stats monitoring port, by responding with the raw data over TCP. The stats are essentially JSON-formatted key-value pairs, with the keys corresponding to counter names. By default stats are exposed on port 22222 and aggregated every 30 seconds. Both these values can be configured on program start using the -s or --stats-port and -i or --stats-interval command-line arguments respectively. You can print a description of all exported stats using the -D or --describe-stats command-line argument.

$ nutcracker --describe-stats

pool stats:
  client_eof          "# eof on client connections"
  client_err          "# errors on client connections"
  client_connections  "# active client connections"
  server_ejects       "# times backend server was ejected"
  forward_error       "# times we encountered a forwarding error"
  fragments           "# fragments created from a multi-vector request"

server stats:
  server_eof          "# eof on server connections"
  server_err          "# errors on server connections"
  server_timedout     "# timeouts on server connections"
  server_connections  "# active server connections"
  requests            "# requests"
  request_bytes       "total request bytes"
  responses           "# responses"
  response_bytes      "total response bytes"
  in_queue            "# requests in incoming queue"
  in_queue_bytes      "current request bytes in incoming queue"
  out_queue           "# requests in outgoing queue"
  out_queue_bytes     "current request bytes in outgoing queue"

See notes/debug.txt for examples of how to read the stats from the stats port.
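For example, on a host where twemproxy runs with the default stats settings, you can dump the raw JSON by connecting to the stats port with a tool such as nc (python is assumed here only for pretty-printing):

$ nc 127.0.0.1 22222 | python -m json.tool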

Logging in twemproxy is only available when twemproxy is built with logging enabled. By default, logs are written to stderr. Twemproxy can also be configured to write logs to a specific file through the -o or --output command-line argument. On a running twemproxy, you can turn the log level up and down by sending it the SIGTTIN and SIGTTOU signals respectively, and reopen log files by sending it a SIGHUP signal.
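For example, to adjust logging on a running instance whose pid is recorded in /var/run/nutcracker.pid (an illustrative path, matching the deployment example above):

$ kill -s TTIN $(cat /var/run/nutcracker.pid)   # raise the log level
$ kill -s TTOU $(cat /var/run/nutcracker.pid)   # lower the log level
$ kill -s HUP $(cat /var/run/nutcracker.pid)    # reopen the log file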

Pipelining

Twemproxy enables proxying multiple client connections onto one or a few server connections. This architectural setup makes it ideal for pipelining requests and responses, saving on round trip time.

For example, if twemproxy is proxying three client connections onto a single server and we get requests - get key\r\n, set key 0 0 3\r\nval\r\n and delete key\r\n on these three connections respectively, twemproxy would try to batch these requests and send them as a single message onto the server connection as get key\r\nset key 0 0 3\r\nval\r\ndelete key\r\n.

Pipelining is the reason why twemproxy ends up doing better in terms of throughput even though it introduces an extra hop between the client and server.

Deployment

If you are deploying twemproxy in production, you might consider reading through the recommendation document to understand the parameters you could tune in twemproxy to run it efficiently in the production environment.

Utils

Companies using Twemproxy in Production

Issues and Support

Have a bug or a question? Please create an issue here on GitHub!

https://github.com/twitter/twemproxy/issues

Committers

Thank you to all of our contributors!

License

Copyright 2012 Twitter, Inc.

Licensed under the Apache License, Version 2.0: http://www.apache.org/licenses/LICENSE-2.0