
OptimalBits / bull

Premium Queue package for handling distributed jobs and messages in NodeJS.


Top Related Projects


BullMQ - Message Queue and Batch processing for NodeJS and Python based on Redis

A simple, fast, robust job/task queue for Node.js, backed by Redis.


Kue is a priority job queue backed by redis, built for node.js.


Lightweight job scheduling for Node.js


High performance Node.js/PostgreSQL job queue (also suitable for getting jobs generated by PostgreSQL triggers/functions out into a different work queue)

Quick Overview

Bull is a Node.js library that implements a fast and robust queue system based on Redis. It provides a simple and powerful API for managing job queues, making it ideal for handling background jobs, distributed tasks, and message queues in Node.js applications.

Pros

  • High performance and reliability due to its Redis-based implementation
  • Feature-rich, including job prioritization, retries, rate limiting, and delayed jobs
  • Supports multiple job types and concurrent processing
  • Extensive documentation and active community support

Cons

  • Requires Redis as a dependency, which may increase infrastructure complexity
  • Limited built-in support for non-Node.js consumers
  • Learning curve for advanced features and configurations
  • Potential scalability challenges for extremely high-volume queues

Code Examples

  1. Creating a queue and adding a job:

const Queue = require('bull');
const myQueue = new Queue('my-queue');

await myQueue.add({ foo: 'bar' });

  2. Processing jobs:

myQueue.process(async (job) => {
  console.log(`Processing job ${job.id}`);
  // Perform job tasks here
  return { result: 'success' };
});

  3. Adding a delayed job:

await myQueue.add(
  { data: 'delayed job' },
  { delay: 60000 } // Delay for 1 minute
);

  4. Using job events:

myQueue.on('completed', (job, result) => {
  console.log(`Job ${job.id} completed with result:`, result);
});

Getting Started

To get started with Bull, follow these steps:

  1. Install Bull (note that Bull also requires a running Redis server, which you must provide separately):

    npm install bull
    
  2. Create a new queue and add a job:

    const Queue = require('bull');
    const myQueue = new Queue('my-queue');
    
    async function start() {
      await myQueue.add({ task: 'example' });
    
      myQueue.process(async (job) => {
        console.log('Processing job:', job.data);
        // Perform job tasks here
      });
    }
    
    start();
    
  3. Run your Node.js application, ensuring Redis is running and accessible.
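
Retries (mentioned under Pros above) are configured per job via an options object passed as the second argument to `add`. A minimal sketch using the documented Bull job options `attempts`, `backoff`, and `removeOnComplete`; the values here are illustrative only:

```javascript
// Per-job options controlling retry behavior. Shown as a plain object so it
// can be read (and checked) without a Redis server running.
const jobOptions = {
  attempts: 5,                                   // retry a failing job up to 5 times
  backoff: { type: 'exponential', delay: 1000 }, // 1s, 2s, 4s, ... between retries
  removeOnComplete: true,                        // remove the job from Redis on success
};

// Usage (assumes the queue created in step 2):
//   await myQueue.add({ task: 'example' }, jobOptions);
```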

Competitor Comparisons


BullMQ - Message Queue and Batch processing for NodeJS and Python based on Redis

Pros of BullMQ

  • Written in TypeScript, providing better type safety and developer experience
  • Supports Redis Streams, offering improved performance and reliability
  • Includes a dashboard UI for easier queue management and monitoring

Cons of BullMQ

  • Requires Redis 5.0 or higher, which may not be available in all environments
  • Some users report a steeper learning curve compared to Bull
  • Less mature ecosystem with fewer third-party integrations

Code Comparison

Bull:

const Queue = require('bull');
const myQueue = new Queue('my-queue');

myQueue.add({ foo: 'bar' });

BullMQ:

import { Queue } from 'bullmq';
const myQueue = new Queue('my-queue');

await myQueue.add('job-name', { foo: 'bar' });

Both Bull and BullMQ are popular job queue libraries for Node.js, built on top of Redis. Bull is the older and more established project, while BullMQ is a newer, rewritten version with some modern improvements. BullMQ offers better TypeScript support and leverages newer Redis features, but may require more recent Redis versions and have a slightly steeper learning curve. The choice between the two often depends on specific project requirements and the development team's preferences.

A simple, fast, robust job/task queue for Node.js, backed by Redis.

Pros of bee-queue

  • Lightweight and faster for simple use cases
  • Lower memory footprint
  • Simpler API, easier to get started quickly

Cons of bee-queue

  • Fewer advanced features compared to Bull
  • Less active development and community support
  • Limited to Redis as the only backend option

Code Comparison

bee-queue:

const Queue = require('bee-queue');
const queue = new Queue('example');

queue.createJob({x: 2, y: 3}).save();

queue.process(async (job) => {
  return job.data.x + job.data.y;
});

Bull:

const Queue = require('bull');
const queue = new Queue('example');

queue.add({x: 2, y: 3});

queue.process(async (job) => {
  return job.data.x + job.data.y;
});

Both libraries offer similar basic functionality for creating and processing jobs. Bull provides more advanced features like job prioritization, rate limiting, and repeatable jobs, which are not available in bee-queue. However, bee-queue's simpler API can be advantageous for projects with straightforward queueing needs.

Bull has a more active community and frequent updates, making it a better choice for long-term, complex projects. On the other hand, bee-queue's lightweight nature and lower memory usage can be beneficial for smaller applications or those with limited resources.


Kue is a priority job queue backed by redis, built for node.js.

Pros of Kue

  • Mature and well-established project with a large user base
  • Extensive documentation and community support
  • Built-in web interface for job monitoring and management

Cons of Kue

  • No longer actively maintained (last commit in 2019)
  • Limited support for modern Node.js versions
  • Fewer advanced features compared to Bull

Code Comparison

Kue job creation:

var job = queue.create('email', {
  title: 'Welcome email',
  to: 'user@example.com',
  template: 'welcome-email'
}).save();

Bull job creation:

const job = await queue.add('email', {
  title: 'Welcome email',
  to: 'user@example.com',
  template: 'welcome-email'
});

Both libraries offer similar syntax for job creation, but Bull uses Promises for asynchronous operations, while Kue relies on callbacks.

Bull provides more advanced features like rate limiting, job prioritization, and repeatable jobs out of the box. It also offers better performance and scalability, especially for high-volume queues.

Kue's web interface is a significant advantage for monitoring and managing jobs, but its lack of recent updates and limited compatibility with newer Node.js versions make it less suitable for modern projects.

Overall, Bull is the recommended choice for new projects due to its active development, better performance, and more extensive feature set.


Lightweight job scheduling for Node.js

Pros of Agenda

  • Supports MongoDB as a backend, offering robust data persistence
  • Provides a more comprehensive job scheduling system with repeatable jobs and priority queues
  • Offers a web-based UI for monitoring and managing jobs

Cons of Agenda

  • Generally slower performance compared to Bull
  • Less active development and community support
  • More complex setup and configuration process

Code Comparison

Agenda:

const Agenda = require('agenda');
const agenda = new Agenda({db: {address: mongoConnectionString}});

agenda.define('send email', async job => {
  // Send email logic here
});

await agenda.start();
await agenda.schedule('in 5 minutes', 'send email');

Bull:

const Queue = require('bull');
const emailQueue = new Queue('email', 'redis://127.0.0.1:6379');

emailQueue.process(async (job) => {
  // Send email logic here
});

emailQueue.add({}, { delay: 5 * 60 * 1000 });

Both Bull and Agenda are popular job queue libraries for Node.js, but they have different strengths. Bull is known for its simplicity and high performance, leveraging Redis for fast in-memory operations. It's ideal for applications requiring quick job processing and real-time updates. Agenda, on the other hand, offers more advanced scheduling features and uses MongoDB for persistence, making it suitable for complex job scheduling scenarios and applications that need to maintain job history.


High performance Node.js/PostgreSQL job queue (also suitable for getting jobs generated by PostgreSQL triggers/functions out into a different work queue)

Pros of Worker

  • Built with TypeScript, offering better type safety and developer experience
  • Supports PostgreSQL for job storage, providing ACID compliance and reliability
  • Offers a more flexible job scheduling system with cron-like syntax

Cons of Worker

  • Less mature ecosystem compared to Bull, with fewer integrations and plugins
  • Requires PostgreSQL, which may not be suitable for all use cases
  • Steeper learning curve for developers not familiar with PostgreSQL

Code Comparison

Bull:

const queue = new Bull('my-queue');
queue.add({ data: 'example' });
queue.process(async (job) => {
  console.log(job.data);
});

Worker:

const worker = new Worker(connectionString);
await worker.addJob('my-task', { data: 'example' });
worker.registerTask('my-task', async (job) => {
  console.log(job.payload);
});

Both libraries offer similar functionality for adding and processing jobs, but Worker uses TypeScript and requires a PostgreSQL connection string. Bull is more straightforward to set up and use, especially for developers familiar with Redis-based queues. Worker provides stronger typing and leverages PostgreSQL's features for job management and scheduling.


README




The fastest, most reliable, Redis-based queue for Node.
Carefully written for rock solid stability and atomicity.


Sponsors · Features · UIs · Install · Quick Guide · Documentation

Check the new Guide!


📻 News and updates

Bull is currently in maintenance mode; we are only fixing bugs. For new features, check BullMQ, a modern rewritten implementation in TypeScript. You are still very welcome to use Bull if it suits your needs: it is a safe, battle-tested library.

Follow me on Twitter for other important news and updates.

🌟 Rediscover Scale Conference 2024

Discover the latest in in-memory and real-time data technologies at Rediscover Scale 2024. Ideal for engineers, architects, and technical leaders looking to push technological boundaries. Connect with experts and advance your skills at The Foundry SF, San Francisco.

Learn more and register here!

🛠 Tutorials

You can find tutorials and news in this blog: https://blog.taskforce.sh/


Used by

Bull is popular among large and small organizations, like the following ones:

Atlassian, Autodesk, Mozilla, Nest, and Salesforce.

🚀 Sponsors 🚀

Dragonfly Dragonfly is a new Redis™ drop-in replacement that is fully compatible with BullMQ and brings some important advantages over Redis™, such as massively better performance (by utilizing all available CPU cores) and faster, more memory-efficient data structures. Read more here on how to use it with BullMQ.
Memetria for Redis If you need high-quality production Redis instances for your Bull project, please consider subscribing to Memetria for Redis, leaders in Redis hosting that works perfectly with BullMQ. Use the promo code "BULLMQ" when signing up to help us sponsor the development of BullMQ!

Official FrontEnd

Taskforce.sh, Inc

Supercharge your queues with a professional front end:

  • Get a complete overview of all your queues.
  • Inspect jobs, search, retry, or promote delayed jobs.
  • Metrics and statistics.
  • and many more features.

Sign up at Taskforce.sh


Bull Features

  • Minimal CPU usage due to a polling-free design.
  • Robust design based on Redis.
  • Delayed jobs.
  • Schedule and repeat jobs according to a cron specification.
  • Rate limiter for jobs.
  • Retries.
  • Priority.
  • Concurrency.
  • Pause/resume—globally or locally.
  • Multiple job types per queue.
  • Threaded (sandboxed) processing functions.
  • Automatic recovery from process crashes.
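
The rate limiter in the list above is configured per queue. A minimal sketch of the documented `limiter` queue option (the values here are illustrative):

```javascript
// Queue-level rate limiter: process at most `max` jobs per `duration` ms.
// Shown as a plain options object so it can be read without a Redis server.
const queueOptions = {
  limiter: {
    max: 100,       // at most 100 jobs...
    duration: 5000, // ...per 5 seconds
  },
};

// Usage:
//   const emailQueue = new Queue('emails', 'redis://127.0.0.1:6379', queueOptions);
```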

And coming up on the roadmap...

  • Job completion acknowledgement (you can use the message queue pattern in the meantime).
  • Parent-child jobs relationships.

UIs

There are a few third-party UIs that you can use for monitoring:

BullMQ

Bull v3

Bull <= v2


Monitoring & Alerting


Feature Comparison

Since there are a few job queue solutions, here is a table comparing them:

| Feature                   | BullMQ-Pro      | BullMQ          | Bull            | Kue  | Bee      | Agenda |
|---------------------------|-----------------|-----------------|-----------------|------|----------|--------|
| Backend                   | redis           | redis           | redis           | redis| redis    | mongo  |
| Observables               | ✓               |                 |                 |      |          |        |
| Group Rate Limit          | ✓               |                 |                 |      |          |        |
| Group Support             | ✓               |                 |                 |      |          |        |
| Batches Support           | ✓               |                 |                 |      |          |        |
| Parent/Child Dependencies | ✓               | ✓               |                 |      |          |        |
| Priorities                | ✓               | ✓               | ✓               | ✓    |          | ✓      |
| Concurrency               | ✓               | ✓               | ✓               | ✓    | ✓        | ✓      |
| Delayed jobs              | ✓               | ✓               | ✓               | ✓    |          | ✓      |
| Global events             | ✓               | ✓               | ✓               | ✓    |          |        |
| Rate Limiter              | ✓               | ✓               | ✓               |      |          |        |
| Pause/Resume              | ✓               | ✓               | ✓               | ✓    |          |        |
| Sandboxed worker          | ✓               | ✓               | ✓               |      |          |        |
| Repeatable jobs           | ✓               | ✓               | ✓               |      |          | ✓      |
| Atomic ops                | ✓               | ✓               | ✓               |      | ✓        |        |
| Persistence               | ✓               | ✓               | ✓               | ✓    | ✓        | ✓      |
| UI                        | ✓               | ✓               | ✓               | ✓    |          | ✓      |
| Optimized for             | Jobs / Messages | Jobs / Messages | Jobs / Messages | Jobs | Messages | Jobs   |

Install

npm install bull --save

or

yarn add bull

Requirements: Bull requires a Redis version greater than or equal to 2.8.18.

Typescript Definitions

npm install @types/bull --save-dev
yarn add --dev @types/bull

Definitions are currently maintained in the DefinitelyTyped repo.

Contributing

We welcome all types of contributions, whether code fixes, new features, or doc improvements. Code formatting is enforced by Prettier. For commits, please follow the conventional commits convention. All code must pass lint rules and test suites before it can be merged into develop.


Quick Guide

Basic Usage

const Queue = require('bull');

const videoQueue = new Queue('video transcoding', 'redis://127.0.0.1:6379');
const audioQueue = new Queue('audio transcoding', { redis: { port: 6379, host: '127.0.0.1', password: 'foobared' } }); // Specify Redis connection using object
const imageQueue = new Queue('image transcoding');
const pdfQueue = new Queue('pdf transcoding');

videoQueue.process(function (job, done) {

  // job.data contains the custom data passed when the job was created
  // job.id contains id of this job.

  // transcode video asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

audioQueue.process(function (job, done) {
  // transcode audio asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { samplerate: 48000 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

imageQueue.process(function (job, done) {
  // transcode image asynchronously and report progress
  job.progress(42);

  // call done when finished
  done();

  // or give an error if error
  done(new Error('error transcoding'));

  // or pass it a result
  done(null, { width: 1280, height: 720 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
});

pdfQueue.process(function (job) {
  // Processors can also return promises instead of using the done callback
  return pdfAsyncProcessor();
});

videoQueue.add({ video: 'http://example.com/video1.mov' });
audioQueue.add({ audio: 'http://example.com/audio1.mp3' });
imageQueue.add({ image: 'http://example.com/image1.tiff' });

Using promises

Alternatively, you can return promises instead of using the done callback:

videoQueue.process(function (job) { // don't forget to remove the done callback!
  // Simply return a promise
  return fetchVideo(job.data.url).then(transcodeVideo);

  // Handles promise rejection
  return Promise.reject(new Error('error transcoding'));

  // Passes the value the promise is resolved with to the "completed" event
  return Promise.resolve({ framerate: 29.5 /* etc... */ });

  // If the job throws an unhandled exception it is also handled correctly
  throw new Error('some unexpected error');
  // same as
  return Promise.reject(new Error('some unexpected error'));
});
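
In modern code the promise form above is usually written with async/await. The sketch below stands in `fetchVideo` and `transcodeVideo` with trivial placeholders so it can run without any real transcoding:

```javascript
// Placeholders standing in for real fetching/transcoding steps:
const fetchVideo = async (url) => ({ url, bytes: 1024 });
const transcodeVideo = async (video) => ({ framerate: 29.5, source: video.url });

// async/await form of a promise-returning processor:
async function videoProcessor(job) {
  const video = await fetchVideo(job.data.url);
  return transcodeVideo(video); // the resolved value reaches the 'completed' event
}

// Exercising it directly (Bull would call it with a real job object):
videoProcessor({ data: { url: 'http://example.com/video1.mov' } })
  .then((result) => console.log(result.framerate)); // → 29.5

// Usage with a real queue: videoQueue.process(videoProcessor);
```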

Separate processes

The process function can also be run in a separate process. This has several advantages:

  • The process is sandboxed so if it crashes it does not affect the worker.
  • You can run blocking code without affecting the queue (jobs will not stall).
  • Much better utilization of multi-core CPUs.
  • Fewer connections to Redis.

In order to use this feature just create a separate file with the processor:

// processor.js
module.exports = function (job) {
  // Do some heavy work

  return Promise.resolve(result);
}

And define the processor like this:

// Single process:
queue.process('/path/to/my/processor.js');

// You can use concurrency as well:
queue.process(5, '/path/to/my/processor.js');

// and named processors:
queue.process('my processor', 5, '/path/to/my/processor.js');

Repeated jobs

A job can be added to a queue and processed repeatedly according to a cron specification:

  paymentsQueue.process(function (job) {
    // Check payments
  });

  // Repeat payment job once every day at 3:15 (am)
  paymentsQueue.add(paymentsData, { repeat: { cron: '15 3 * * *' } });

As a tip, check your expressions here to verify they are correct: cron expression generator
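
Repeat behavior is controlled by the `repeat` job option. A sketch using the documented `cron`, `tz`, and `limit` fields (the values here are illustrative):

```javascript
// Repeat options for a job, shown as a plain object so it can be read
// without a Redis server running.
const repeatOptions = {
  repeat: {
    cron: '15 3 * * *',  // every day at 03:15
    tz: 'Europe/Madrid', // optional: evaluate the cron expression in this timezone
    limit: 100,          // optional: stop repeating after 100 runs
  },
};

// Usage: paymentsQueue.add(paymentsData, repeatOptions);
```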

Pause / Resume

A queue can be paused and resumed globally (pass true to pause processing for just this worker):

queue.pause().then(function () {
  // queue is paused now
});

queue.resume().then(function () {
  // queue is resumed now
})

Events

A queue emits some useful events, for example...

.on('completed', function (job, result) {
  // Job completed with output result!
})

For more information on events, including the full list of events that are fired, check out the Events reference

Queues performance

Queues are cheap, so if you need many of them just create new ones with different names:

const userJohn = new Queue('john');
const userLisa = new Queue('lisa');
.
.
.

However, every queue instance requires new Redis connections; check how to reuse connections, or use named processors to achieve a similar result.
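
Connection reuse works through Bull's documented `createClient` queue option. The sketch below injects the Redis client constructor (e.g. ioredis) so the wiring can be shown and checked without a live server; note that blocking ('bclient') connections must not be shared:

```javascript
// Build queue options that share Redis connections across queues.
// `RedisCtor` stands in for a client constructor such as ioredis.
function sharedConnectionOpts(RedisCtor) {
  const client = new RedisCtor();     // shared general-purpose connection
  const subscriber = new RedisCtor(); // shared pub/sub connection
  return {
    createClient(type) {
      switch (type) {
        case 'client':
          return client;
        case 'subscriber':
          return subscriber;
        default:
          return new RedisCtor(); // 'bclient' connections must NOT be shared
      }
    },
  };
}

// Usage: const opts = sharedConnectionOpts(require('ioredis'));
//        const q1 = new Queue('q1', opts);
//        const q2 = new Queue('q2', opts); // reuses the same two connections
```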

Cluster support

NOTE: From version 3.2.0 onward, it is recommended to use threaded (sandboxed) processors instead.

Queues are robust and can be run in parallel in several threads or processes without any risk of hazards or queue corruption. Check this simple example using cluster to parallelize jobs across processes:

const Queue = require('bull');
const cluster = require('cluster');

const numWorkers = 8;
const queue = new Queue('test concurrent queue');

if (cluster.isMaster) {
  for (let i = 0; i < numWorkers; i++) {
    cluster.fork();
  }

  cluster.on('online', function (worker) {
    // Let's create a few jobs for the queue workers
    for (let i = 0; i < 500; i++) {
      queue.add({ foo: 'bar' });
    }
  });

  cluster.on('exit', function (worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  queue.process(function (job, jobDone) {
    console.log('Job done by worker', cluster.worker.id, job.id);
    jobDone();
  });
}

Documentation

For the full documentation, check out the reference and common patterns:

  • Guide — Your starting point for developing with Bull.
  • Reference — Reference document with all objects and methods available.
  • Patterns — a set of examples for common patterns.
  • License — the Bull license—it's MIT.

If you see anything that could use more docs, please submit a pull request!


Important Notes

The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.

When a worker is processing a job it will keep the job "locked" so other workers can't process it.

It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:

  1. The Node process running your job processor unexpectedly terminates.
  2. Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).

As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.

As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
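
These knobs live in the queue's advanced `settings` option. The sketch below shows them with Bull's documented defaults; tune `lockDuration` up only if your processor legitimately blocks the event loop for long stretches:

```javascript
// Advanced lock settings, shown as a plain options object with Bull's
// documented default values.
const queueSettings = {
  settings: {
    lockDuration: 30000,  // ms a job lock is held before the job counts as stalled
    lockRenewTime: 15000, // ms between lock renewals (usually lockDuration / 2)
    maxStalledCount: 1,   // times a stalled job is restarted before failing for good
  },
};

// Usage: const q = new Queue('heavy-work', 'redis://127.0.0.1:6379', queueSettings);
```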
