
antimatter15 / alpaca.cpp

Locally run an Instruction-Tuned Chat-Style LLM


Top Related Projects

  • llama.cpp - LLM inference in C/C++ (64,646 stars)
  • llama2.c - Inference Llama 2 in one file of pure C (17,121 stars)
  • llama-cpp-python - Python bindings for llama.cpp
  • whisper.cpp - Port of OpenAI's Whisper model in C/C++
  • gpt4all - GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. (69,016 stars)
  • llama - Inference code for Llama models (55,392 stars)

Quick Overview

Alpaca.cpp is a C++ implementation of the Alpaca language model, a LLaMA-based model fine-tuned to follow instructions. Built as a modified version of llama.cpp, it aims to provide a lightweight and efficient implementation that runs on consumer hardware, making large language models more accessible for personal use and experimentation.

Pros

  • Runs on consumer-grade hardware, making it accessible to a wider audience
  • Efficient C++ implementation for improved performance
  • Supports various quantization levels for reduced memory usage (a simplified sketch of the idea follows this list)
  • Open-source and actively maintained
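
To illustrate what those quantization levels buy you, here is a minimal, hypothetical sketch of symmetric 4-bit block quantization: weights are split into blocks of 32, and each block stores one float scale plus packed 4-bit integers, cutting memory to roughly a sixth of float32 (about 20 bytes per 32 weights instead of 128). This is a simplified illustration of the idea only, not the exact ggml on-disk format used by alpaca.cpp.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>

// Simplified 4-bit block quantization (illustrative only, not the ggml format).
// Each block of 32 weights keeps one float scale and 16 bytes of packed
// 4-bit values: 4 + 16 = 20 bytes instead of 128 bytes of float32.
struct BlockQ4 {
    float   scale;
    uint8_t packed[16];
};

BlockQ4 quantize_block(const float *x) {
    BlockQ4 out{};
    float max_abs = 0.0f;
    for (int i = 0; i < 32; ++i) max_abs = std::max(max_abs, std::fabs(x[i]));
    out.scale = max_abs > 0.0f ? max_abs / 7.0f : 1.0f;   // map values onto [-7, 7]
    for (int i = 0; i < 32; i += 2) {
        auto q = [&](float v) {
            int qi = (int)std::lround(v / out.scale);
            return (uint8_t)(std::clamp(qi, -8, 7) + 8);   // store as unsigned nibble
        };
        out.packed[i / 2] = (uint8_t)(q(x[i]) | (q(x[i + 1]) << 4));
    }
    return out;
}

float dequantize(const BlockQ4 &b, int i) {
    uint8_t nib = (b.packed[i / 2] >> ((i % 2) * 4)) & 0x0F;
    return ((int)nib - 8) * b.scale;
}

int main() {
    float w[32];
    for (int i = 0; i < 32; ++i) w[i] = 0.01f * (i - 16);  // toy weights
    BlockQ4 b = quantize_block(w);
    printf("w[5]=%f  dequantized=%f\n", w[5], dequantize(b, 5));
    return 0;
}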

Cons

  • Quantized weights may not match the output quality of the original full-precision LLaMA/Alpaca models
  • Limited documentation and examples compared to more established language models
  • Requires some technical knowledge to set up and use effectively
  • Potential legal concerns, since the weights are derivative of LLaMA and of OpenAI-generated instruction data (see the Disclaimer below)

Code Examples

Note: alpaca.cpp ships as a command-line chat program rather than a library. The snippets below sketch a hypothetical C++ wrapper API (alpaca::Model, alpaca::GenerationConfig) to illustrate the kind of workflow involved; they do not correspond to headers in the repository.

  1. Loading the model:

#include "alpaca.h"
#include <iostream>

int main() {
    alpaca::Model model("path/to/model/weights.bin");
    if (!model.load()) {
        std::cerr << "Failed to load model" << std::endl;
        return 1;
    }
    // Model loaded successfully
    return 0;
}

  2. Generating text:

std::string prompt = "Once upon a time";
std::string generated_text = model.generate(prompt, 100); // Generate 100 tokens
std::cout << "Generated text: " << generated_text << std::endl;

  3. Setting generation parameters:

alpaca::GenerationConfig config;
config.temperature = 0.7;
config.top_p = 0.9;
config.max_length = 200;

std::string result = model.generate(prompt, config);

Getting Started

  1. Clone the repository:

    git clone https://github.com/antimatter15/alpaca.cpp.git
    cd alpaca.cpp

  2. Build the chat binary (macOS/Linux; see the Windows/CMake instructions in the README below):

    make chat

  3. Download the ggml-alpaca-7b-q4.bin model weights and place them in the same directory as the chat executable.

  4. Run the chat interface:

    ./chat

  5. You can then type prompts interactively in the terminal and the model will respond.

Competitor Comparisons

llama.cpp - LLM inference in C/C++ (64,646 stars)

Pros of llama.cpp

  • More actively maintained with frequent updates and improvements
  • Supports a wider range of LLaMA model sizes and variants
  • Offers more advanced features like quantization and GPU acceleration

Cons of llama.cpp

  • Slightly more complex setup and usage
  • May require more system resources for larger models

Code Comparison

llama.cpp:

int main(int argc, char ** argv) {
    gpt_params params;
    if (gpt_params_parse(argc, argv, params) == false) {
        return 1;
    }
    llama_init_backend();
    ...
}

alpaca.cpp:

int main(int argc, char ** argv) {
    if (argc != 3) {
        fprintf(stderr, "Usage: %s <model> <prompt>\n", argv[0]);
        exit(1);
    }
    ...
}

The llama.cpp project offers more flexibility in parameter parsing and initialization, while alpaca.cpp has a simpler command-line interface. llama.cpp also includes backend initialization, indicating support for different hardware configurations.
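
As a rough illustration of what that parameter-parsing flexibility looks like in practice, the following hypothetical sketch mirrors the gpt_params style of flag handling; the flag names and defaults here are illustrative rather than copied from either repository.

#include <cstdio>
#include <cstdlib>
#include <string>

// Hypothetical gpt_params-style parser: each recognised flag consumes the
// following argv entry; unknown or incomplete flags abort with an error.
struct Params {
    std::string model     = "ggml-alpaca-7b-q4.bin";
    std::string prompt;
    int         n_threads = 4;
    int         n_predict = 128;
};

static bool parse_params(int argc, char **argv, Params &p) {
    for (int i = 1; i < argc; ++i) {
        std::string arg = argv[i];
        if      (arg == "-m" && i + 1 < argc) p.model     = argv[++i];
        else if (arg == "-p" && i + 1 < argc) p.prompt    = argv[++i];
        else if (arg == "-t" && i + 1 < argc) p.n_threads = std::atoi(argv[++i]);
        else if (arg == "-n" && i + 1 < argc) p.n_predict = std::atoi(argv[++i]);
        else {
            fprintf(stderr, "unknown or incomplete argument: %s\n", arg.c_str());
            return false;
        }
    }
    return true;
}

int main(int argc, char **argv) {
    Params params;
    if (!parse_params(argc, argv, params)) return 1;
    printf("model=%s threads=%d n_predict=%d\n",
           params.model.c_str(), params.n_threads, params.n_predict);
    return 0;
}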

llama2.c - Inference Llama 2 in one file of pure C (17,121 stars)

Pros of llama2.c

  • Simpler and more lightweight implementation
  • Focuses on educational value and readability
  • Includes a training script for fine-tuning models

Cons of llama2.c

  • Limited to smaller models due to its simplicity
  • Fewer optimizations and features compared to alpaca.cpp

Code Comparison

llama2.c:

float* malloc_run_memory(Config* p) {
    uint64_t size = p->n_layers * p->dim * p->hidden_dim * 2;
    size += p->seq_len * p->dim;
    size += p->seq_len * p->vocab_size;
    size += p->dim * p->hidden_dim;
    size += p->vocab_size * p->dim;
    return malloc(size * sizeof(float));
}

alpaca.cpp:

static void init_model(const std::string & model_path) {
    printf("Loading model: %s\n", model_path.c_str());
    std::ifstream file(model_path, std::ios::binary);
    file.read((char *)&hparams, sizeof(hparams));
    model.resize(hparams.n_layers);
    for (int i = 0; i < hparams.n_layers; ++i) {
        auto & layer = model[i];
        file.read((char *)&layer, sizeof(layer));
    }
}

Both repositories implement language models, but llama2.c focuses on simplicity and educational value, while alpaca.cpp offers more features and optimizations for larger models. The code snippets show different approaches to memory allocation and model initialization, reflecting their distinct design philosophies.

llama-cpp-python - Python bindings for llama.cpp

Pros of llama-cpp-python

  • Python bindings for easier integration with Python projects
  • Supports GPU acceleration (CUDA) for faster inference
  • Provides a high-level API for easier usage

Cons of llama-cpp-python

  • May have slightly higher memory overhead due to Python wrapper
  • Potentially slower execution compared to pure C++ implementation
  • Requires additional setup for Python environment

Code Comparison

llama-cpp-python:

from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
print(output)

alpaca.cpp:

#include "ggml.h"
#include "llama.h"

gpt_params params;
params.model = "./models/7B/ggml-model.bin";
llama_context * ctx = llama_init_from_file(params.model.c_str(), params.n_ctx);
// Additional code for inference

The llama-cpp-python project provides a more user-friendly Python interface, while alpaca.cpp offers a lower-level C++ implementation. The Python version simplifies usage but may have slightly higher overhead, while the C++ version provides more direct control and potentially better performance.

whisper.cpp - Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Focuses on speech recognition, providing a specialized tool for audio transcription
  • Implements the Whisper model, which is known for its high accuracy in speech-to-text tasks
  • Offers multi-language support, making it versatile for various applications

Cons of whisper.cpp

  • Limited to speech recognition tasks, unlike alpaca.cpp which is a more general-purpose language model
  • May require more preprocessing of input data (audio files) compared to text-based input in alpaca.cpp

Code Comparison

whisper.cpp:

// Load model
struct whisper_context * ctx = whisper_init_from_file(model_path);

// Process audio
whisper_full_params params = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, params, pcm, n_samples);

// Print result
const int n_segments = whisper_full_n_segments(ctx);
for (int i = 0; i < n_segments; ++i) {
    const char * text = whisper_full_get_segment_text(ctx, i);
    printf("%s", text);
}

alpaca.cpp:

// Initialize model
llama_context * ctx = llama_init_from_file(model_path, params);

// Generate text
llama_eval(ctx, tokens.data(), tokens.size(), n_past, n_threads);

// Get output
float * logits = llama_get_logits(ctx);
int token = llama_sample_top_p_top_k(ctx, logits, top_k, top_p, temp, repeat_penalty);

Both repositories focus on implementing efficient C++ versions of popular AI models, but they serve different purposes. whisper.cpp is specialized for speech recognition, while alpaca.cpp is a more general language model implementation.

gpt4all - GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use. (69,016 stars)

Pros of gpt4all

  • More comprehensive and feature-rich, offering a wider range of language models and capabilities
  • Better documentation and community support, making it easier for users to get started and troubleshoot issues
  • Actively maintained with frequent updates and improvements

Cons of gpt4all

  • Higher system requirements and more complex setup process
  • Larger file size and potentially slower performance on resource-constrained devices
  • Steeper learning curve for beginners due to its more extensive feature set

Code Comparison

alpaca.cpp:

int main(int argc, char ** argv) {
    gpt_params params;
    params.model = "ggml-alpaca-7b-q4.bin";
    if (gpt_params_parse(argc, argv, params) == false) {
        return 1;
    }
    ...
}

gpt4all:

int main(int argc, char ** argv) {
    gpt4all::GPT4All model("ggml-gpt4all-j-v1.3-groovy.bin");
    std::string prompt = "Once upon a time";
    std::string response = model.generate(prompt, 128);
    std::cout << response << std::endl;
}

Both repositories aim to provide local, efficient language model implementations, but gpt4all offers a more comprehensive solution with broader model support and features, while alpaca.cpp focuses on a simpler, lightweight approach for running the Alpaca model.

llama - Inference code for Llama models (55,392 stars)

Pros of llama

  • Official implementation from Meta, ensuring high-quality and well-maintained codebase
  • Comprehensive documentation and extensive features for fine-tuning and deployment
  • Supports multiple model sizes and configurations

Cons of llama

  • Requires more computational resources and memory
  • May have stricter licensing and usage restrictions
  • Potentially more complex setup and configuration process

Code Comparison

llama:

from llama import Llama

model = Llama(model_path="path/to/model.pth")
output = model.generate("Hello, how are you?", max_length=50)
print(output)

alpaca.cpp:

#include "alpaca.h"

alpaca::Model model("path/to/model.bin");
std::string output = model.generate("Hello, how are you?", 50);
std::cout << output << std::endl;

Summary

While llama offers a more comprehensive and officially supported implementation, alpaca.cpp provides a lightweight C++ alternative that may be more suitable for resource-constrained environments or projects requiring lower-level integration. The choice between the two depends on specific project requirements, available resources, and development preferences.


README

Alpaca.cpp

Run a fast ChatGPT-like model locally on your device. The screencast below is not sped up and is running on an M2 MacBook Air with 4GB of weights.

(asciicast screencast)

This combines the LLaMA foundation model with an open reproduction of Stanford Alpaca, a fine-tuning of the base model to obey instructions (akin to the RLHF used to train ChatGPT), and a set of modifications to llama.cpp to add a chat interface.
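
Concretely, "obeying instructions" comes from the Alpaca fine-tune being trained on a fixed instruction template, and the chat interface wraps each user turn in that template before handing it to the model. A minimal sketch follows; the helper name build_alpaca_prompt is illustrative rather than a symbol from alpaca.cpp, while the template text is the standard Stanford Alpaca no-input instruction format.

#include <iostream>
#include <string>

// Stanford Alpaca's instruction template (no-input variant); the chat loop
// wraps user text like this before tokenizing and running inference.
std::string build_alpaca_prompt(const std::string &instruction) {
    return
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n" + instruction + "\n\n### Response:\n";
}

int main() {
    std::cout << build_alpaca_prompt("Explain what a quine is in one sentence.");
    return 0;
}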

Consider using LLaMA.cpp instead

The changes from alpaca.cpp have since been upstreamed in llama.cpp.

Get Started (7B)

Download the zip file corresponding to your operating system from the latest release: on Windows, download alpaca-win.zip; on Mac (both Intel and ARM), download alpaca-mac.zip; and on Linux (x64), download alpaca-linux.zip.

Download ggml-alpaca-7b-q4.bin and place it in the same folder as the chat executable in the zip file. There are several options:

Once you've downloaded the model weights and placed them into the same directory as the chat or chat.exe executable, run:

./chat

The weights are based on the published fine-tunes from alpaca-lora, converted back into a pytorch checkpoint with a modified script and then quantized with llama.cpp the regular way.

Building from Source (MacOS/Linux)

git clone https://github.com/antimatter15/alpaca.cpp
cd alpaca.cpp

make chat
./chat

Building from Source (Windows)

  • Download and install CMake: https://cmake.org/download/
  • Download and install git. If you've never used git before, consider a GUI client like https://desktop.github.com/
  • Clone this repo using your git client of choice (for GitHub Desktop, go to File -> Clone repository -> From URL and paste https://github.com/antimatter15/alpaca.cpp in as the URL)
  • Open a Windows Terminal inside the folder you cloned the repository to
  • Run the following commands one by one:
cmake .
cmake --build . --config Release
  • Download the weights via any of the links in "Get started" above, and save the file as ggml-alpaca-7b-q4.bin in the main Alpaca directory.
  • In the terminal window, run this command:
.\Release\chat.exe
  • (You can add other launch options like --n 8 as preferred onto the same line)
  • You can now type to the AI in the terminal and it will reply. Enjoy!

Credit

This combines Facebook's LLaMA, Stanford Alpaca, alpaca-lora and the corresponding weights by Eric Wang (which uses Jason Phang's implementation of LLaMA on top of Hugging Face Transformers), and llama.cpp by Georgi Gerganov. The chat implementation is based on Matvey Soloviev's Interactive Mode for llama.cpp. Inspired by Simon Willison's getting started guide for LLaMA and Andy Matuschak's thread on adapting this to 13B, using fine-tuning weights by Sam Witteveen.

Disclaimer

Note that the model weights are only to be used for research purposes: they are derivative of LLaMA, and they use the published instruction data from the Stanford Alpaca project, which was generated with OpenAI's API; OpenAI's terms disallow using its outputs to train competing models.