
plasma-umass / Mesh

A memory allocator that automatically reduces the memory footprint of C/C++ applications.


Top Related Projects

  • mimalloc: a compact general purpose allocator with excellent performance
  • gperftools: main gperftools repository (includes TCMalloc)
  • rpmalloc: public domain cross platform lock free thread caching 16-byte aligned memory allocator implemented in C
  • snmalloc: message passing based allocator

Quick Overview

Mesh is a memory allocator that reduces memory fragmentation and, with it, the memory footprint of running applications. It's a drop-in replacement for malloc that can significantly improve the memory efficiency of C and C++ programs, particularly those with complex allocation patterns or long-running processes.
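As a concrete illustration of the drop-in model, here is a minimal sketch: ordinary allocation code with no Mesh-specific changes, because the allocator is swapped in at load time (the LD_PRELOAD invocation assumes libmesh.so has been built and installed as described under Getting Started).

#include <stdio.h>
#include <stdlib.h>

// Ordinary allocation code: nothing here refers to Mesh.
int main(void) {
    char* buf = (char*)malloc(4096);
    if (buf == NULL)
        return 1;
    snprintf(buf, 4096, "allocated through whichever malloc is loaded");
    puts(buf);
    free(buf);
    return 0;
}

// Run unchanged, with Mesh supplying malloc/free:
//   LD_PRELOAD=libmesh.so ./your_app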

Pros

  • Reduces memory footprint by recovering from memory fragmentation
  • Can improve cache behavior and performance for allocation-heavy workloads
  • Easy to integrate as a drop-in replacement for malloc
  • Works transparently with existing C and C++ codebases

Cons

  • May not provide significant benefits for applications with simple memory allocation patterns
  • Potential overhead for small, short-lived programs
  • Limited documentation and examples available
  • Requires relinking or an LD_PRELOAD override to integrate

Code Examples

  1. Basic usage of Mesh allocator:
#include "mesh.h"

int main() {
    void* ptr = mesh_malloc(1024);
    // Use the allocated memory
    mesh_free(ptr);
    return 0;
}
  2. Replacing standard malloc with Mesh:
#define malloc(size) mesh_malloc(size)
#define free(ptr) mesh_free(ptr)
#define realloc(ptr, size) mesh_realloc(ptr, size)

// Your existing code using malloc, free, and realloc will now use Mesh
  3. Using Mesh with C++:
#include "mesh.h"
#include <new>

void* operator new(size_t size) {
    return mesh_malloc(size);
}

void operator delete(void* ptr) noexcept {
    mesh_free(ptr);
}

Getting Started

To use Mesh in your project:

  1. Clone the repository:

    git clone https://github.com/plasma-umass/Mesh.git
    
  2. Build and install Mesh (the project wraps its bazel build in a Makefile):

    cd Mesh
    make
    sudo make install
    
  3. Link your application with Mesh:

    gcc -o your_app your_app.c -L/path/to/mesh/lib -lmesh
    
  4. Alternatively, skip the linking step and preload Mesh at run time via LD_PRELOAD:

    LD_PRELOAD=/path/to/mesh/lib/libmesh.so ./your_app
    

Competitor Comparisons

mimalloc is a compact general purpose allocator with excellent performance.

Pros of mimalloc

  • Highly optimized for performance, often outperforming other allocators
  • Designed for scalability in multi-threaded applications
  • Extensive documentation and benchmarks available

Cons of mimalloc

  • Less focus on memory fragmentation reduction compared to Mesh
  • May not be as effective in reducing overall memory usage in some scenarios

Code comparison

mimalloc:

#include <mimalloc.h>

void* ptr = mi_malloc(sizeof(int));
mi_free(ptr);

Mesh:

#include <mesh.h>

void* ptr = mesh_malloc(sizeof(int));
mesh_free(ptr);

Key differences

  • Mesh focuses on reducing memory fragmentation through meshing, while mimalloc prioritizes performance and scalability
  • mimalloc is more widely adopted and has more extensive documentation
  • Mesh may provide better memory savings in certain scenarios, especially for long-running applications with high fragmentation

Use cases

  • mimalloc: High-performance applications, multi-threaded systems
  • Mesh: Memory-constrained environments, long-running applications with fragmentation issues

Both allocators aim to improve memory management, but they approach the problem from different angles. The choice between them depends on specific application requirements and priorities.

jemalloc

Pros of jemalloc

  • Mature and widely adopted in production environments
  • Excellent performance for multi-threaded applications
  • Extensive configuration options for fine-tuning

Cons of jemalloc

  • Higher memory overhead in some scenarios
  • Less effective at reducing fragmentation compared to Mesh
  • More complex to integrate into existing projects

Code Comparison

Mesh:

#include "mesh.h"

void* operator new(size_t size) {
  return mesh::mesh_malloc(size);
}

jemalloc:

#include <jemalloc/jemalloc.h>

void* operator new(size_t size) {
  return je_malloc(size);
}

Key Differences

  • Mesh focuses on reducing memory fragmentation through meshing, while jemalloc primarily optimizes allocation speed and multi-threaded performance
  • Mesh is designed to be a drop-in replacement for existing allocators, whereas jemalloc often requires more integration effort
  • jemalloc offers more extensive profiling and debugging tools compared to Mesh

Use Cases

  • Mesh: Applications with high memory fragmentation or those seeking easy integration
  • jemalloc: High-performance systems, especially those with multi-threaded workloads or requiring fine-grained control over memory allocation

Main gperftools repository

Pros of gperftools

  • Mature and widely-used performance analysis toolkit
  • Supports multiple programming languages and platforms
  • Includes a highly efficient memory allocator (TCMalloc)

Cons of gperftools

  • May require more manual intervention for optimization
  • Can be complex to set up and configure for specific use cases
  • Primarily focused on performance profiling rather than memory management

Code Comparison

gperftools (TCMalloc usage):

#include <gperftools/tcmalloc.h>

void* ptr = tc_malloc(size);
tc_free(ptr);

Mesh (automatic memory management):

#include "mesh.h"

// No explicit memory management code needed
// Mesh works transparently in the background

Key Differences

  • Mesh focuses on automatic memory management and compaction, while gperftools provides a broader set of performance tools
  • gperftools requires more explicit usage in code, whereas Mesh operates more transparently
  • Mesh aims to reduce memory fragmentation, while gperftools primarily offers profiling and a fast allocator

Both projects have their strengths, with gperftools being a comprehensive performance toolkit and Mesh specializing in efficient memory management through compaction.

Public domain cross platform lock free thread caching 16-byte aligned memory allocator implemented in C

Pros of rpmalloc

  • Lightweight and focused on performance, with minimal overhead
  • Cross-platform support for various operating systems and architectures
  • Designed for multi-threaded applications with thread-local caches

Cons of rpmalloc

  • Lacks advanced memory management features like compaction or defragmentation
  • May not be as effective in reducing memory fragmentation for long-running applications
  • Limited built-in debugging and profiling capabilities

Code Comparison

Mesh:

void* MeshingHeap::malloc(size_t sz) {
  const auto sizeClass = SizeMap::SizeClass(sz);
  const auto sizeMap = SizeMap::GetSizeMap();
  const auto pageCount = sizeMap.class_to_pages(sizeClass);
  return allocPages(pageCount);
}

rpmalloc:

void* rpmalloc(size_t size) {
  if (EXPECTED(size <= SMALL_SIZE_LIMIT))
    return _rpmalloc_small(size);
  else if (size <= MEDIUM_SIZE_LIMIT)
    return _rpmalloc_medium(size);
  return _rpmalloc_large(size);
}

Both allocators use size-based allocation strategies, but Mesh focuses on page-level allocation, while rpmalloc employs a tiered approach for small, medium, and large allocations. Mesh's implementation suggests a more complex memory management system, potentially offering better fragmentation handling, while rpmalloc's approach prioritizes speed and simplicity.
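As a generic illustration of what "size-based allocation" means (this is not the actual size-class table of either allocator), a request is rounded up to one of a small set of size classes so that objects of similar size share the same spans or pages:

#include <cstddef>

// Hypothetical size-class rounding, for illustration only: real allocators
// such as Mesh and rpmalloc use carefully tuned class tables, not simple
// powers of two.
static size_t roundToSizeClass(size_t sz) {
    const size_t kLargeThreshold = 1 << 20;   // large requests bypass classes in this sketch
    if (sz == 0 || sz > kLargeThreshold)
        return sz;
    size_t cls = 16;                          // smallest class in this sketch
    while (cls < sz)
        cls *= 2;                             // next larger class
    return cls;
}

// Example: roundToSizeClass(24) == 32, so a 24-byte request is placed with
// other 32-byte objects.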

Message passing based allocator

Pros of snmalloc

  • Designed for high scalability and performance in multi-threaded environments
  • Implements a novel "message passing" allocation strategy for improved efficiency
  • Provides better security features, including randomization and guard pages

Cons of snmalloc

  • May have higher memory overhead for small allocations compared to Mesh
  • Less focus on reducing fragmentation, which is a key feature of Mesh
  • Potentially more complex implementation, making it harder to understand and modify

Code Comparison

Mesh:

void* MiniHeap::malloc(size_t sz) {
  if (isFull() || sz > _objectSize)
    return nullptr;
  void* ptr = _freelist.malloc();
  if (ptr != nullptr)
    _allocCount++;
  return ptr;
}

snmalloc:

void* LocalAllocator::alloc(size_t size)
{
  if (size > LOCAL_CACHE_SIZE)
    return alloc_large(size);
  return small_alloc(size);
}

Both allocators use different approaches for memory allocation. Mesh focuses on reducing fragmentation through its MiniHeap structure, while snmalloc emphasizes scalability and performance in multi-threaded scenarios.


README

Mesh: Compacting Memory Management for C/C++

Mesh is a drop-in replacement for malloc(3) that can transparently recover from memory fragmentation without any changes to application code.
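To make that claim concrete, here is a small illustrative sketch of the kind of workload Mesh targets. The program contains no Mesh-specific code; whether physical memory is actually recovered depends on the workload, and the MALLOCSTATS output described below shows how much was meshed.

#include <stdlib.h>

// A classic fragmentation pattern: allocate many same-sized objects, then
// free every other one. The freed holes are scattered across pages, so a
// conventional allocator cannot return those pages to the OS. Run under
// Mesh (e.g. LD_PRELOAD=libmesh.so), pages whose live objects do not overlap
// can be meshed onto a single physical page, recovering memory without any
// cooperation from this code.
int main(void) {
    enum { N = 1 << 16, SZ = 256 };
    static void* objs[N];
    for (int i = 0; i < N; i++)
        objs[i] = malloc(SZ);
    for (int i = 0; i < N; i += 2) {   // free alternating objects
        free(objs[i]);
        objs[i] = NULL;
    }
    // ... long-running work continues with a fragmented heap ...
    for (int i = 1; i < N; i += 2)
        free(objs[i]);
    return 0;
}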

Mesh is described in detail in a paper (PDF) that appeared at PLDI 2019.

Or watch this talk by Bobby Powers at Strange Loop:

Compacting the Uncompactable

Mesh runs on Linux and macOS. Windows is a work in progress.

Mesh uses bazel as a build system, but wraps it in a Makefile, and has no runtime dependencies other than libc:

$ git clone https://github.com/plasma-umass/mesh
$ cd mesh
$ make; sudo make install
# example: run git with mesh as its allocator:
$ LD_PRELOAD=libmesh.so git status

Please open an issue if you have questions (or issues)!

But will it blend?

If you run a program linked against Mesh (or with Mesh LD_PRELOADed), setting the environment variable MALLOCSTATS=1 will instruct Mesh to print a summary at exit:

$ MALLOCSTATS=1 ./bin/redis-server-mesh ./redis.conf
25216:C 11 Mar 20:27:12.050 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
25216:C 11 Mar 20:27:12.050 # Redis version=4.0.2, bits=64, commit=dfe0d212, modified=0, pid=25216, just started
25216:C 11 Mar 20:27:12.050 # Configuration loaded
[...]
^C25216:signal-handler (1583983641) Received SIGINT scheduling shutdown...
25216:M 11 Mar 20:27:21.945 # User requested shutdown...
25216:M 11 Mar 20:27:21.945 * Removing the pid file.
25216:M 11 Mar 20:27:21.945 * Removing the unix socket file.
25216:M 11 Mar 20:27:21.945 # Redis is now ready to exit, bye bye...
MESH COUNT:         25918
Meshed MB (total):  101.2
Meshed pages HWM:   25918
Meshed MB HWM:      101.2
MH Alloc Count:     56775
MH Free  Count:     17
MH High Water Mark: 82687

Not all workloads experience fragmentation, so it's possible that Mesh will have a small 'Meshed MB (total)' number!

Implementation Overview

Mesh is built on Heap Layers, an infrastructure for building high performance memory allocators in C++ (see the paper for details).

The entry point of the library is libmesh.cc. This file is where malloc, free, and the instantiations of the Heap used for allocating program memory live.
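The following is a rough, hypothetical sketch of the shape of those entry points, not the real contents of libmesh.cc: the exported C symbols forward to the heap structures described in the definitions below (the names and the delegation to the system allocator are stand-ins so the sketch compiles on its own).

#include <cstddef>
#include <cstdlib>

// Hypothetical forwarding layer, for orientation only: the real
// ThreadLocalHeap allocates from MiniHeaps, not from the system allocator.
struct ThreadLocalHeapSketch {
  void* alloc(size_t sz) { return std::malloc(sz); }   // fast-path stand-in
  void release(void* ptr) { std::free(ptr); }
};

ThreadLocalHeapSketch& localHeap() {
  static thread_local ThreadLocalHeapSketch heap;      // one heap per thread
  return heap;
}

// In libmesh.cc the exported malloc/free play this forwarding role; these
// wrappers are named mesh_style_* to avoid clashing with the system symbols
// in a standalone build.
void* mesh_style_malloc(size_t sz) { return localHeap().alloc(sz); }
void mesh_style_free(void* ptr) { localHeap().release(ptr); }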

DEFINITIONS

  • Page: The smallest block of memory managed by the operating system, 4 KB on most architectures. Memory given to the allocator by the operating system is always in multiples of the page size, and aligned to the page size.
  • Span: A contiguous run of 1 or more pages. It is often larger than the page size to account for large allocations and amortize the cost of heap metadata.
  • Arena: A contiguous range of virtual address space we allocate out of. All allocations returned by malloc(3) reside within the arena.
  • GlobalHeap: The global heap carves out the Arena into Spans and performs meshing (a sketch of the meshing test follows this list).
  • MiniHeap: Metadata for a Span -- at any time a live Span has a single MiniHeap owner. For small objects, MiniHeaps have a bitmap to track whether an allocation is live or freed.
  • ThreadLocalHeap: A collection of MiniHeaps and a ShuffleVector so that most allocations and free(3)s can be fast and lock-free.
  • ShuffleVector: A novel data structure that enables randomized allocation with bump-pointer-like speed.
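To make the meshing step concrete, here is a small sketch of the core test, using assumed names rather than the actual MiniHeap interface: two Spans of the same size class are candidates for meshing when their occupancy bitmaps are disjoint, i.e. no object slot is live in both. Mesh can then move the live objects of one span into the free slots of the other and map both virtual spans onto a single physical page.

#include <bitset>
#include <cstddef>

// Sketch only: kMaxObjectsPerSpan and Occupancy are illustrative stand-ins
// for the per-MiniHeap allocation bitmap described above.
constexpr std::size_t kMaxObjectsPerSpan = 256;
using Occupancy = std::bitset<kMaxObjectsPerSpan>;

// Two spans can be meshed when no slot is live in both bitmaps.
bool meshable(const Occupancy& a, const Occupancy& b) {
  return (a & b).none();
}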