
marcpaq/b1fipl

A Bestiary of Single-File Implementations of Programming Languages


Top Related Projects

  • ggml: Tensor library for machine learning
  • llama2.c: Inference Llama 2 in one file of pure C
  • llama: Inference code for Llama models
  • whisper: Robust Speech Recognition via Large-Scale Weak Supervision
  • whisper.cpp: Port of OpenAI's Whisper model in C/C++
  • llama-cpp-python: Python bindings for llama.cpp

Quick Overview

B1FIPL, short for "Bestiary of 1-File Implementations of Programming Languages", is a minimalist, BASIC-inspired project by Marc Paquette. It is designed to be simple, easy to learn, and suitable for creating text-based games and interactive fiction, offering a nostalgic programming experience reminiscent of early home computers.

Pros

  • Simple syntax, making it easy for beginners to learn and use
  • Designed specifically for creating text-based games and interactive fiction
  • Lightweight and portable, with minimal dependencies
  • Includes a built-in interpreter for running B1FIPL programs

Cons

  • Limited functionality compared to more comprehensive programming languages
  • Lack of advanced features and libraries found in modern languages
  • May not be suitable for complex or large-scale projects
  • Limited community support and resources due to its niche nature

Code Examples

  1. Hello World program:
10 PRINT "Hello, World!"
20 END
  2. Simple input and output:
10 INPUT "What is your name? "; N$
20 PRINT "Hello, "; N$; "!"
30 END
  3. Basic loop and conditional:
10 FOR I = 1 TO 5
20   IF I = 3 THEN PRINT "Three!"
30   IF I <> 3 THEN PRINT I
40 NEXT I
50 END

Getting Started

To get started with B1FIPL:

  1. Clone the repository:

    git clone https://github.com/marcpaq/b1fipl.git
    
  2. Navigate to the project directory:

    cd b1fipl
    
  3. Compile the interpreter:

    make
    
  4. Run a B1FIPL program:

    ./b1fipl examples/hello.b1
    

You can now create your own B1FIPL programs using a text editor and run them with the interpreter.

Competitor Comparisons

ggml

Tensor library for machine learning

Pros of ggml

  • More comprehensive and feature-rich machine learning library
  • Actively maintained with frequent updates and contributions
  • Supports a wider range of models and applications

Cons of ggml

  • More complex and potentially harder to learn for beginners
  • Requires more computational resources due to its extensive features
  • Larger codebase and dependencies

Code Comparison

b1fipl:

void b1_init(void) {
    b1_rnd_seed = 1;
}

int b1_rnd(int n) {
    return (((b1_rnd_seed = b1_rnd_seed * 214013L + 2531011L) >> 16) & 0x7fff) % n;
}

ggml:

void ggml_init_params(struct ggml_init_params * params) {
    memset(params, 0, sizeof(*params));
    params->mem_size   = 16*1024*1024;
    params->mem_buffer = NULL;
    params->no_alloc   = false;
}

The code snippets show that b1fipl focuses on simple random number generation, while ggml provides more complex initialization for machine learning tasks. ggml's code demonstrates its broader scope and more advanced features compared to b1fipl's straightforward implementation.

llama2.c

Inference Llama 2 in one file of pure C

Pros of llama2.c

  • More advanced and focused on large language models
  • Implements a state-of-the-art AI model (LLaMA 2)
  • Actively maintained with recent updates

Cons of llama2.c

  • More complex and requires deeper understanding of AI/ML concepts
  • Larger codebase and potentially higher resource requirements
  • Specific to LLaMA 2 model, less flexible for other applications

Code Comparison

llama2.c:

float* malloc_float32(size_t n) {
    void* ptr = malloc(n * sizeof(float));
    if (ptr == NULL) {
        printf("malloc failed!\n");
        exit(1);
    }
    return (float*)ptr;
}

b1fipl:

void *b1_malloc(size_t size)
{
    void *p = malloc(size);
    if (!p) b1_error("Out of memory");
    return p;
}

Both repositories implement custom memory allocation functions, but llama2.c focuses on float arrays for AI computations, while b1fipl provides a more general-purpose allocation wrapper.

Llama

Inference code for Llama models

Pros of Llama

  • Larger and more active community with over 39k stars and 5.5k forks
  • Comprehensive documentation and examples for using the Llama language model
  • Backed by Meta, providing substantial resources and ongoing development

Cons of Llama

  • More complex codebase and setup process
  • Requires significant computational resources to run effectively
  • Limited to specific use cases in natural language processing

Code Comparison

b1fipl:

int main(void)
{
    putchar('H');
    putchar('i');
    putchar('\n');
    return 0;
}

Llama:

from llama import Llama

llm = Llama.build(
    ckpt_dir="llama-2-7b/",
    tokenizer_path="tokenizer.model",
    max_seq_len=512,
    max_batch_size=8,
)

results = llm.text_completion(["Hi"], max_gen_len=32, temperature=0.7)
print(results[0]["generation"])

The b1fipl example is a minimal C program that prints a short greeting, while the Llama snippet loads a large language model for text generation. The Llama example is far more capable but requires substantial setup and resources.

Whisper

Robust Speech Recognition via Large-Scale Weak Supervision

Pros of Whisper

  • Advanced speech recognition capabilities using state-of-the-art AI models
  • Supports multiple languages and can transcribe audio in various formats
  • Actively maintained with frequent updates and improvements

Cons of Whisper

  • Requires significant computational resources and may be slower for simple tasks
  • More complex setup and usage compared to simpler alternatives
  • Larger codebase and dependencies, potentially harder to customize

Code Comparison

B1FIPL (BASIC interpreter):

10 PRINT "HELLO WORLD"
20 GOTO 10

Whisper (Python audio transcription):

import whisper

model = whisper.load_model("base")
result = model.transcribe("audio.mp3")
print(result["text"])

Summary

Whisper is a powerful speech recognition tool with advanced features, while B1FIPL is a simple BASIC interpreter. Whisper offers multi-language support and uses AI models, but requires more resources. B1FIPL is lightweight and easy to use for basic programming tasks. The code examples highlight the difference in complexity and purpose between the two projects.

whisper.cpp

Port of OpenAI's Whisper model in C/C++

Pros of whisper.cpp

  • Focuses on speech recognition and transcription, offering a specific and advanced functionality
  • Implements OpenAI's Whisper model in C++, providing efficient performance
  • Supports multiple languages and can run on various platforms, including mobile devices

Cons of whisper.cpp

  • More complex codebase and dependencies, requiring more setup and configuration
  • Larger project scope, which may be overkill for simpler audio processing tasks
  • Requires more computational resources due to its advanced AI-based functionality

Code Comparison

whisper.cpp:

// Load model
struct whisper_context * ctx = whisper_init_from_file("ggml-base.en.bin");

// Process audio with default (greedy) sampling parameters
struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
whisper_full(ctx, wparams, pcmf32.data(), pcmf32.size());

// Print result
const char * text = whisper_full_get_segment_text(ctx, 0);
printf("%s\n", text);

b1fipl:

10 INPUT "TYPE SOMETHING: "; A$
20 PRINT "YOU TYPED: "; A$
30 END

Summary

whisper.cpp is an advanced project focused on speech recognition using AI, while b1fipl is a simple BASIC interpreter. whisper.cpp offers far more sophisticated functionality but requires more resources and setup; b1fipl is straightforward and suited to small interactive text programs.

llama-cpp-python

Python bindings for llama.cpp

Pros of llama-cpp-python

  • Focuses on providing Python bindings for the llama.cpp library, enabling easy integration of LLaMA models in Python projects
  • Actively maintained with regular updates and contributions from the community
  • Offers comprehensive documentation and examples for usage

Cons of llama-cpp-python

  • More complex setup and dependencies compared to b1fipl
  • Requires more computational resources to run LLaMA models
  • Limited to LLaMA-specific functionality, whereas b1fipl is a more general-purpose interpreter

Code Comparison

llama-cpp-python:

from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")
output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
print(output)

b1fipl:

10 PRINT "PLANETS IN THE SOLAR SYSTEM:"
20 DATA MERCURY,VENUS,EARTH,MARS,JUPITER,SATURN,URANUS,NEPTUNE
30 FOR I = 1 TO 8
40 READ P$: PRINT P$
50 NEXT I

The code snippets demonstrate the different focus areas of the two projects. llama-cpp-python is designed for working with advanced language models, while b1fipl is a simple BASIC interpreter suitable for educational purposes and nostalgic programming experiences.


README

A Bestiary of Single-File Implementations of Programming Languages

From a French Prayer-book of the Thirteenth Century, in the British Museum.

Programming languages are amazing

Programming languages are amazing pieces of work. They turn our words, numbers, and symbols into the bits that make a machine do things.

It's easy to get overwhelmed when implementing a programming language. The GNU Compiler Collection is millions of lines long.

That's far too much code to learn from. Luckily, some smart people have distilled the most interesting parts of programming languages into an approachable essence: implementations that fit in a single source code file.

These single-file implementations are rarely complete, and seldom sophisticated or efficient. But they are self-contained, concise, and clear. They make it fun to discover why programming languages are amazing.

Concatenative

bbf, v3 implemented by Mark Carter. A Forth in C.

dc implemented by Lorinda Cherry.

Forth1 implemented by Tycho Luyben.

jonesForth implemented by Richard W.M. Jones.

Mouse implemented by Peter Grogono.

Functional

7 lines of code, 3 minutes implemented by Matt Might.

arpilisp implemented by Marc Paquette.

How to implement a programming language in JavaScript implemented by Mihai Bazon.

(How to Write a (Lisp) Interpreter (in Python)) implemented by Peter Norvig.

komplott implemented by Kristoffer Grönlund.

Lisp500 implemented by Teemu Kalvas.

Lisp9 implemented by Nils M. Holm. A byte-compiling Lisp interpreter.

Lisp90 implemented by Anthony C. Hay.

Lisp In Less Than 200 Lines Of C implemented by Carl Douglas.

MiniLisp implemented by Rui Ueyama.

Mini-Scheme updated by Chris Pressey, originally implemented by Atsushi Moriwaki.

mLite implemented by Nils M. Holm. A lightweight variant of ML.

Most functional implemented by John Tromp. An implementation of binary lambda calculus in 25 lines of obfuscated C.

sectorlisp implemented by Justine Alexandra Roberts Tunney et al. An x86 Lisp interpreter that fits in a boot sector.

sedlisp implemented by Shinichiro Hamaji.

single_cream, scheme interpreter implemented by Raymond Nicholson.

ulc implemented by Asad Saeeduddin. A minimalistic implementation of the untyped lambda calculus in JS.

Imperative

asm6502.py implemented by David Beazley.

wak is the single-file version of a "fairly compact implementation of the AWK programming language", implemented by Ray Gardner.

bc implemented by depsterr. Compiles Brainfuck into an x86_64 Linux binary.

Brainfuck implemented by Brian Raiter.

c4 C in 4 functions, implemented by Robert Swierczek.

Jasic implemented by Robert Nystrom. Old-school BASIC in Java.

mescc-tools-seed, implemented by Jeremiah Orians. A complete chain of one-file languages, from compiler to assemblers and even a shell, for Intel x86: C compiler in assembly, macro assembler to build the C compiler, hex2 assembler to build the macro assembler, hex1 assembler to build the hex2 assembler, hex0 assembler to bootstrap the whole thing, and finally, a shell to script the previous stages.

Mini-C implemented by Sam Nipps. A small subset of C, of course. But not as small as you would guess.

Pascal-S implemented by Niklaus Wirth & Scott A. Moore.

picol is a Tcl interpreter implemented in C. Implemented by Salvatore Sanfilippo, aka antirez.

Selfie includes a 1-file C compiler in C. Implemented by the Computational Systems Group at the Department of Computer Sciences of the University of Salzburg.

swizzle implemented by Robert Swierczek.

The Super Tiny Compiler! implemented by James Kyle.

Tiny Basic implemented by Tom Pittman.

Trac implemented by Jack Trainor.

Tutorial - Write a Shell in C implemented by Stephen Brennan.

VTL02 for 6502 ported and improved by Mike Barry. VTL-02 was originally designed and implemented by Gary Shannon & Frank McCoy.

Logical

microKanren is a Kanren interpreter, implemented by Jason Hemann.

prolog.c is a simple Prolog interpreter written in C++, implemented by Alan Mycroft.

Prolog originally implemented by Ken Kahn, adapted by Nils M. Holm.

Tiny Prolog in OCaml is an interpreter for a subset of Prolog, in OCaml, implemented by Lilian Besson (@Naereen).

Honourable Mentions

256LOL implemented by Jeff Overbey. An x86 assembler in 256 lines or less. Technically not a single file but Jeff gives good descriptions of the problems with elegant, simple solutions.

An Implementation of J implemented by Arthur Whitney. See the appendix "Incunabulum". It's only a fragment of the J interpreter, but its conciseness is impressive.

A Regular Expression Matcher implemented by Rob Pike, exegesis by Brian Kernighan.

JS-Interpreter implemented by Neil Fraser. A JavaScript interpreter in JavaScript. This file is part of a larger project for running JavaScript in a sandbox.

Microlisp, a Scheme-like Lisp in less than 1000 lines of C, implemented by Michael Lazear. A single-file implementation with extra files for examples and building.

Tiny Compiler implemented by Minko Gechev. It translates only arithmetic expressions, but it's well written.

Epilogue

Have you implemented a programming language in a single file? Let me know with a pull request.

Or fork your own b1fipl. If you do, please give me credit.

Image credit

Parton, James. Caricature and Other Comic Art in all Times and many Lands. Project Gutenberg. Retrieved 2021-02-04. http://gutenberg.org/ebooks/39347