Top Related Projects
- node-resque: Node.js background jobs backed by redis.
- resque: A Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.
- kue: A priority job queue backed by redis, built for node.js.
- agenda: Lightweight job scheduling for Node.js.
- bee-queue: A simple, fast, robust job/task queue for Node.js, backed by Redis.
Quick Overview
The node-resque project is a Node.js implementation of Resque, a Redis-backed system for creating and managing background jobs. It provides a simple and flexible way to execute long-running tasks asynchronously, allowing your application to remain responsive and scalable.
Pros
- Asynchronous Task Execution: node-resque allows you to offload time-consuming tasks to a background worker, keeping your main application responsive.
- Redis-backed: The project uses Redis as the message broker, providing a reliable and scalable solution for managing tasks.
- Flexible Configuration: The library offers a wide range of configuration options, making it easy to customize its behavior to fit your specific needs.
- Robust Error Handling: node-resque provides comprehensive error handling, making it easier to debug and troubleshoot issues with your background tasks.
Cons
- Redis Dependency: The project requires a running Redis instance, which may not be suitable for all deployment environments.
- Limited Documentation: While the project has decent documentation, some areas could be more comprehensive, especially for more advanced use cases.
- Potential Performance Bottlenecks: Depending on the complexity and volume of your background tasks, the Redis-based architecture may introduce performance bottlenecks in high-load scenarios.
- Lack of Distributed Locking: The project does not provide built-in support for distributed locking, which may be necessary for certain use cases.
Code Examples
Here are a few code examples demonstrating the usage of node-resque:
Defining a Worker:
const { Worker, Queue, Scheduler } = require('node-resque');
const jobs = {
slowCalculation: {
perform: async (a, b) => {
// Simulate a long-running task
await new Promise((resolve) => setTimeout(resolve, 5000));
return a + b;
}
}
};
const worker = new Worker(
  { connection: { redis: { host: 'localhost', port: 6379 } }, queues: ['default'] },
  jobs
);
await worker.connect();
worker.start();
Enqueuing a Job:
const queue = new Queue({ connection: { redis: { host: 'localhost', port: 6379 } } });
await queue.connect();
await queue.enqueue('default', 'slowCalculation', [1, 2]);
Scheduling a Delayed Job:
const scheduler = new Scheduler({ connection: { redis: { host: 'localhost', port: 6379 } } });
await scheduler.connect();
scheduler.start();
// the job becomes available for execution after the delay (in ms)
await queue.enqueueIn(10000, 'default', 'slowCalculation', [1, 2]);
For truly recurring (CRON-style) jobs, pair node-resque with a scheduling package such as node-schedule; see the Job Schedules section below.
Handling Job Failures:
worker.on('failure', (queue, job, failure, duration) => {
  console.error(`Job on queue ${queue} failed: ${failure}`);
});
Getting Started
To get started with node-resque, follow these steps:

1. Install the package:

npm install node-resque

2. Set up a Redis server. You can either install Redis locally or use a hosted Redis service.

3. Create a worker that defines your background tasks:

const { Worker, Queue, Scheduler } = require('node-resque');

const jobs = {
  myTask: {
    perform: async (arg1, arg2) => {
      // Implement your task logic here and return its result
      return `processed ${arg1} and ${arg2}`;
    }
  }
};

const worker = new Worker(
  { connection: { redis: { host: 'localhost', port: 6379 } }, queues: ['default'] },
  jobs
);
await worker.connect();
worker.start();

4. Enqueue jobs to be processed by the worker:

const queue = new Queue({ connection: { redis: { host: 'localhost', port: 6379 } } });
await queue.connect();
await queue.enqueue('default', 'myTask', [arg1, arg2]);

5. (Optional) Set up a scheduler to manage delayed jobs:

const scheduler = new Scheduler({ connection: { redis: { host: 'localhost', port: 6379 } } });
await scheduler.connect();
scheduler.start();
Competitor Comparisons
Resque is a Redis-backed Ruby library for creating background jobs, placing them on multiple queues, and processing them later.
Pros of Resque
- Resque is a well-established and widely-used background job processing system, with a large and active community.
- Resque provides a web-based interface for monitoring and managing jobs, which can be useful for larger-scale applications.
- Resque has a rich plugin ecosystem (resque-scheduler, resque-retry, and many more), making it easy to extend.
Cons of Resque
- Resque is primarily written in Ruby, which may not be the preferred language for some Node.js developers.
- Resque may have a steeper learning curve for developers who are more familiar with Node.js and its ecosystem.
- Resque may not have the same level of integration and compatibility with the Node.js ecosystem as Node-Resque.
Code Comparison
Node-Resque:
await queue.enqueue('my_queue', 'my_task', [arg1, arg2]);
Resque:
Resque.enqueue(MyTask, arg1, arg2)
Kue is a priority job queue backed by redis, built for node.js.
Pros of Kue
- Kue provides a web-based UI for managing and monitoring jobs, which can be useful for visualizing and troubleshooting job processing.
- Kue supports a wide range of job types, including delayed jobs, recurring jobs, and priority-based jobs, making it a flexible solution for various use cases.
- Kue is built on top of Redis, which is a fast and reliable in-memory data structure store, providing good performance and scalability.
Cons of Kue
- Kue is a standalone library, while node-resque is maintained by the Actionhero team and integrates most naturally with the Actionhero framework, which may make node-resque less familiar to teams outside that ecosystem.
- Kue's dependency on Redis may be a drawback for some projects that prefer to use a different message queue or database system.
- Kue's web-based UI may not be necessary for all projects, and the additional complexity it introduces may be overkill for simpler use cases.
Code Comparison
node-resque:
await queue.enqueue('myQueue', 'myTask', [arg1, arg2]);
Kue:
const kue = require('kue');
const queue = kue.createQueue();
const job = queue.create('myTask', { arg1, arg2 })
  .priority('normal')
  .delay(10000)
  .save();
In the node-resque example, we enqueue a task called `myTask` with the arguments `arg1` and `arg2` on the `myQueue` queue. In the Kue example, we create a new job with the `myTask` name and the same arguments, then set the priority and delay before saving the job.
Lightweight job scheduling for Node.js
Pros of Agenda
- Agenda provides a more robust and flexible scheduling system, with support for recurring jobs, job prioritization, and job locking.
- Agenda has a larger and more active community, with more contributors and more frequent updates.
- Agenda has better documentation and more extensive examples, making it easier to get started and integrate into your project.
Cons of Agenda
- Agenda has a more complex API and configuration options, which may be overkill for simpler use cases.
- Agenda is primarily focused on scheduling and job management, while node-resque has a broader set of features, including support for worker pools and job queues.
- Agenda may have a higher learning curve for developers who are new to job scheduling and background processing.
Code Comparison
Agenda:
const agenda = new Agenda({ db: { address: 'mongodb://localhost:27017/agenda' } });
agenda.define('send email', async (job) => {
await sendEmail(job.attrs.data);
});
agenda.on('ready', () => {
agenda.start();
});
node-resque:
const jobs = {
  'send email': {
    perform: async (data) => {
      await sendEmail(data);
    },
  },
};

const worker = new Worker({ connection: { redis: redisConnection }, queues: ['email'] }, jobs);
await worker.connect();
worker.start();

const queue = new Queue({ connection: { redis: redisConnection } });
await queue.connect();
await queue.enqueue('email', 'send email', [{ to: 'user@example.com', subject: 'Hello' }]);
A simple, fast, robust job/task queue for Node.js, backed by Redis.
Pros of Bee Queue
- Simplicity: Bee Queue is a lightweight and easy-to-use queue system, with a straightforward API and minimal dependencies.
- Scalability: Bee Queue is designed to be highly scalable, with support for distributed queues and the ability to handle large volumes of tasks.
- Reliability: Bee Queue uses Redis as its backend, which provides a reliable and durable storage solution for queue data.
Cons of Bee Queue
- Limited Features: Compared to node-resque, Bee Queue deliberately keeps its feature set small; for example, it has no plugin system and no built-in scheduler for recurring jobs.
- Redis Dependency: Bee Queue is tightly coupled with Redis, which may be a drawback if you prefer a different queue storage solution.
- Smaller Community: The Bee Queue project has a smaller community compared to node-resque, which may mean fewer resources and support available.
Code Comparison
Here's a brief code comparison between Bee Queue and node-resque:
Bee Queue:
const Queue = require('bee-queue');
const queue = new Queue('my-queue', { redis: { host: 'localhost', port: 6379 } });

const job = queue.createJob({ foo: 'bar' });
await job.delayUntil(Date.now() + 5000).save();

queue.process(async (job) => {
  console.log(job.data);
});
node-resque:
const queue = new Resque.Queue({ connection: { redis: { host: 'localhost', port: 6379 } } });
await queue.connect();
await queue.enqueue('my-queue', 'myTask', [{ foo: 'bar' }]);

// `jobs` is your job-definitions object
const worker = new Resque.Worker(
  { connection: { redis: { host: 'localhost', port: 6379 } }, queues: ['my-queue'] },
  jobs
);
await worker.connect();
worker.on('success', (queue, job, result) => { console.log(result); });
worker.start();
README
node-resque: The best background jobs in node.
Distributed delayed jobs in nodejs. Resque is a background job system backed by Redis (version 2.6.0 and up required). It includes priority queues, plugins, locking, delayed jobs, and more! This project is very opinionated but API-compatible with Resque and Sidekiq (caveats). We also implement some of the popular Resque plugins, including resque-scheduler and resque-retry.
The full API documentation for this package is automatically generated from the main branch via typedoc and published to https://node-resque.actionherojs.com/
The Resque Factory (How It Works)
Overview
Resque is a queue-based task processing system that can be thought of as a "Kanban"-style factory. Workers in this factory can each work only one Job at a time. They pull Jobs from Queues and work them to completion (or failure). Each Job has two parts: instructions on how to complete the job (the perform function), and any inputs necessary to complete the Job.
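The two parts of a Job can be modeled with plain data. The following sketch is illustrative only (node-resque actually stores job payloads as JSON in redis), but it shows the shape of a definition versus a payload:

```javascript
// Job *definitions* hold the instructions (the perform function); a job
// *payload* names a definition and carries the inputs. (Illustrative model,
// not node-resque's internal representation.)
const jobDefinitions = {
  add: {
    perform: (a, b) => a + b,
  },
};

// The payload is what travels through a Queue.
const payload = { class: "add", queue: "math", args: [1, 2] };

// "Working" a job is: look up the definition by class, run perform with args.
function work(payload) {
  const definition = jobDefinitions[payload.class];
  if (!definition) throw new Error(`unknown job: ${payload.class}`);
  return definition.perform(...payload.args);
}
```

Separating the instructions from the inputs is what lets the payload travel through redis while the code stays on the worker.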
Queues
In our factory example, Queues are analogous to conveyor belts. Jobs are placed on the belts (Queues) and are held in order waiting for a Worker to pick them up. There are three types of Queues: regular work Queues, Delayed Job Queues, and the Failed Job Queue. The Delayed Job Queues contain Job definitions that are intended to be worked at (or after) a specified time. The Failed Job Queue is where Workers place any Jobs that have failed during execution.
Workers
Our Workers are the heart of the factory. Each Worker is assigned one or more Queues to check for work. After taking a Job from a Queue the Worker attempts to complete the Job. If successful, they go back to check out more work from the Queues. However, if there is a failure, the Worker records the job and its inputs in the Failed Jobs Queue before going back for more work.
Scheduler
The Scheduler can be thought of as a specialized type of Worker. Unlike other Workers, the Scheduler does not execute any Jobs; instead, it manages the Delayed Job Queue. As Job definitions are added to the Delayed Job Queue, they must specify when they become available for execution. The Scheduler constantly checks whether any Delayed Jobs are ready to execute. When a Delayed Job becomes ready, the Scheduler places a new instance of that Job in its defined Queue.
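The Scheduler's core loop can be sketched with in-memory arrays. This is a toy model (the real Scheduler uses timestamped keys in redis, and only the elected leader runs the check), but it captures the promotion step:

```javascript
// Toy model of delayed-job promotion: when a delayed entry's timestamp has
// passed, move it onto its target queue so a Worker can pick it up.
const delayed = [
  { runAt: 1000, queue: "math", job: { class: "add", args: [1, 2] } },
  { runAt: 9999, queue: "math", job: { class: "add", args: [3, 4] } },
];
const queues = { math: [] };

function promoteReadyJobs(now) {
  // iterate backwards so splice() does not skip entries
  for (let i = delayed.length - 1; i >= 0; i--) {
    if (delayed[i].runAt <= now) {
      const { queue, job } = delayed.splice(i, 1)[0];
      queues[queue].push(job); // now available for execution, not yet executed
    }
  }
}

promoteReadyJobs(2000); // at t=2000, only the first entry is ready
```

Note that promotion only makes a job available; when it actually runs depends on when a Worker next polls that Queue.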
API Docs
You can read the API docs for Node Resque @ node-resque.actionherojs.com. These are generated automatically from the master branch via TypeDoc.
Version Notes
- The version of redis required is >= 2.6.0 as we use lua scripting to create custom atomic operations
- ‼️ Version 6+ of Node Resque uses TypeScript. We will still include JavaScript transpiled code in NPM releases, but it will be generated from the TypeScript source. Functionality between node-resque v5 and v6 should be the same.
- ‼️ Version 5+ of Node Resque uses async/await. There is no upgrade path from previous versions. Node v8.0.0+ is required.
Usage
I learn best by examples:
import { Worker, Plugins, Scheduler, Queue } from "node-resque";
async function boot() {
// ////////////////////////
// SET UP THE CONNECTION //
// ////////////////////////
const connectionDetails = {
pkg: "ioredis",
host: "127.0.0.1",
password: null,
port: 6379,
database: 0,
// namespace: 'resque',
// looping: true,
// options: {password: 'abc'},
};
// ///////////////////////////
// DEFINE YOUR WORKER TASKS //
// ///////////////////////////
let jobsToComplete = 0;
const jobs = {
add: {
plugins: [Plugins.JobLock],
pluginOptions: {
JobLock: { reEnqueue: true },
},
perform: async (a, b) => {
await new Promise((resolve) => {
setTimeout(resolve, 1000);
});
jobsToComplete--;
tryShutdown();
const answer = a + b;
return answer;
},
},
subtract: {
perform: (a, b) => {
jobsToComplete--;
tryShutdown();
const answer = a - b;
return answer;
},
},
};
// just a helper for this demo
async function tryShutdown() {
if (jobsToComplete === 0) {
await new Promise((resolve) => {
setTimeout(resolve, 500);
});
await scheduler.end();
await worker.end();
process.exit();
}
}
// /////////////////
// START A WORKER //
// /////////////////
const worker = new Worker(
{ connection: connectionDetails, queues: ["math", "otherQueue"] },
jobs,
);
await worker.connect();
worker.start();
// ////////////////////
// START A SCHEDULER //
// ////////////////////
const scheduler = new Scheduler({ connection: connectionDetails });
await scheduler.connect();
scheduler.start();
// //////////////////////
// REGISTER FOR EVENTS //
// //////////////////////
worker.on("start", () => {
console.log("worker started");
});
worker.on("end", () => {
console.log("worker ended");
});
worker.on("cleaning_worker", (worker, pid) => {
console.log(`cleaning old worker ${worker}`);
});
worker.on("poll", (queue) => {
console.log(`worker polling ${queue}`);
});
worker.on("ping", (time) => {
console.log(`worker check in @ ${time}`);
});
worker.on("job", (queue, job) => {
console.log(`working job ${queue} ${JSON.stringify(job)}`);
});
worker.on("reEnqueue", (queue, job, plugin) => {
console.log(`reEnqueue job (${plugin}) ${queue} ${JSON.stringify(job)}`);
});
worker.on("success", (queue, job, result, duration) => {
console.log(
`job success ${queue} ${JSON.stringify(
job,
)} >> ${result} (${duration}ms)`,
);
});
worker.on("failure", (queue, job, failure, duration) => {
console.log(
`job failure ${queue} ${JSON.stringify(
job,
)} >> ${failure} (${duration}ms)`,
);
});
worker.on("error", (error, queue, job) => {
console.log(`error ${queue} ${JSON.stringify(job)} >> ${error}`);
});
worker.on("pause", () => {
console.log("worker paused");
});
scheduler.on("start", () => {
console.log("scheduler started");
});
scheduler.on("end", () => {
console.log("scheduler ended");
});
scheduler.on("poll", () => {
console.log("scheduler polling");
});
scheduler.on("leader", () => {
console.log("scheduler became leader");
});
scheduler.on("error", (error) => {
console.log(`scheduler error >> ${error}`);
});
scheduler.on("cleanStuckWorker", (workerName, errorPayload, delta) => {
console.log(
`failing ${workerName} (stuck for ${delta}s) and failing job ${errorPayload}`,
);
});
scheduler.on("workingTimestamp", (timestamp) => {
console.log(`scheduler working timestamp ${timestamp}`);
});
scheduler.on("transferredJob", (timestamp, job) => {
console.log(`scheduler enqueuing job ${timestamp} >> ${JSON.stringify(job)}`);
});
// //////////////////////
// CONNECT TO A QUEUE //
// //////////////////////
const queue = new Queue({ connection: connectionDetails }, jobs);
queue.on("error", function (error) {
console.log(error);
});
await queue.connect();
await queue.enqueue("math", "add", [1, 2]);
await queue.enqueue("math", "add", [1, 2]);
await queue.enqueue("math", "add", [2, 3]);
await queue.enqueueIn(3000, "math", "subtract", [2, 1]);
jobsToComplete = 4;
}
boot();
// and when you are done
// await queue.end()
// await scheduler.end()
// await worker.end()
Node Resque Interfaces: Queue, Worker, and Scheduler
There are 3 main classes in node-resque: Queue, Worker, and Scheduler.

- Queue: This is the interface your program uses to interact with resque's queues - to insert jobs, check on the performance of things, and generally administer your background jobs.
- Worker: This interface is how jobs get processed. Workers are started and then they check for jobs enqueued into various queues and complete them. If there's an error, they write to the `error` queue.
  - There's a special class called `multiWorker` in Node Resque which will run many workers at once for you (see below).
- Scheduler: The scheduler can be thought of as the coordinator for Node Resque. It is primarily in charge of checking when jobs told to run later (with `queue.enqueueIn` or `queue.enqueueAt`) should be processed, but it performs some other duties like checking for 'stuck' workers and general cluster cleanup.
  - You can (and should) run many instances of the scheduler class at once, but only one will be elected to be the 'leader' and actually do work.
  - The 'delay' defined on a scheduled job does not specify when the job should be run, but rather when the job should be enqueued. This means that node-resque cannot guarantee when a job is going to be executed, only when it will become available for execution (added to a Queue).
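The last point can be stated as a simple conversion: a relative delay only fixes the moment a job becomes *available*, so `enqueueIn(delay, ...)` behaves like `enqueueAt(now + delay, ...)`. A trivial sketch of that relationship (illustrative, not the library's code):

```javascript
// The 'delay' fixes availability, not execution: at this timestamp the job is
// promoted onto its queue, where it waits for whichever worker polls it next.
function availableAt(nowMs, delayMs) {
  return nowMs + delayMs;
}

// "enqueue in 3 seconds" starting from t = 1,000 ms:
const timestamp = availableAt(1000, 3000);
```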
Configuration Options:
- `new Queue` requires only the "queue" variable to be set. If you intend to run plugins with `beforeEnqueue` or `afterEnqueue` hooks, you should also pass the `jobs` object to it.
- `new Worker` has some additional options:
options = {
looping: true,
timeout: 5000,
queues: "*",
name: os.hostname() + ":" + process.pid,
};
Note that when using the "*" queue:

- there's a minor performance impact for checking the queues
- queues are processed in an undefined order

The configuration hash passed to `new NodeResque.Worker`, `new NodeResque.Scheduler` or `new NodeResque.Queue` can also take a `connection` option.
const connectionDetails = {
pkg: "ioredis",
host: "127.0.0.1",
password: "",
port: 6379,
database: 0,
namespace: "resque", // Also allow array of strings
};
const worker = new NodeResque.Worker(
{ connection: connectionDetails, queues: "math" },
jobs,
);
worker.on("error", (error) => {
// handle errors
});
await worker.connect();
worker.start();
// and when you are done
// await worker.end()
You can also pass a redis client directly:
// assume you already initialized redis client before
// the "redis" key can be IORedis.Redis or IORedis.Cluster instance
const redisClient = new Redis();
const connectionDetails = { redis: redisClient };
// or
const redisCluster = new Cluster();
const connectionDetails = { redis: redisCluster };
const worker = new NodeResque.Worker(
{ connection: connectionDetails, queues: "math" },
jobs,
);
worker.on("error", (error) => {
// handle errors
});
await worker.connect();
worker.start();
// and when you are done
await worker.end();
Notes
- Be sure to call `await worker.end()`, `await queue.end()` and `await scheduler.end()` before shutting down your application if you want to properly clear your worker status from resque.
- When ending your application, be sure to allow your workers time to finish what they are working on.
- This project implements the "scheduler" part of resque-scheduler (the daemon which can promote enqueued delayed jobs into the work queues when it is time), but not the CRON scheduler proxy. To learn more about how to use a CRON-like scheduler, read the Job Schedules section of this document.
- "Namespace" is a string which is prepended to your keys in redis. Normally, it is "resque". This is helpful if you want to store multiple work queues in one redis database. Do not use `keyPrefix` if you are using the `ioredis` (default) redis driver in this project (see https://github.com/actionhero/node-resque/issues/245 for more information).
- If you are using any plugins which affect `beforeEnqueue` or `afterEnqueue`, be sure to pass the `jobs` argument to the `new NodeResque.Queue()` constructor.
- If a job fails, it will be added to a special `failed` queue. You can then inspect these jobs, write a plugin to manage them, move them back to the normal queues, etc. The default failure behavior is simply to enter the `failed` queue, but there are many options. Check out examples from the ruby ecosystem for inspiration.
- If you plan to run more than one worker per nodejs process, be sure to name them something distinct. Names must follow the pattern `hostname:pid+unique_id`. For example:

const name = os.hostname() + ":" + process.pid + "+" + counter;
const worker = new NodeResque.Worker(
  { connection: connectionDetails, queues: "math", name: name },
  jobs,
);

- For the Retry plugin, a success message will be emitted from the worker on each attempt (even if the job fails) except the final retry. The final retry will emit a failure message instead.

If you want to learn more about running Node-Resque with docker, please view the examples here: https://github.com/actionhero/node-resque/tree/master/examples/docker
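The namespace note above can be illustrated with a small key-building helper. This is a sketch only: the exact key layout is an internal detail of node-resque, and joining an array namespace with ":" is an assumption made here for illustration.

```javascript
// Prefix redis keys with a namespace so multiple work queues can share one
// redis database. node-resque also accepts an array of strings as the
// namespace; joining with ":" here is an assumption for illustration.
function namespacedKey(namespace, key) {
  const prefix = Array.isArray(namespace) ? namespace.join(":") : namespace;
  return `${prefix}:${key}`;
}
```

With the default namespace, a key like "queues" would become "resque:queues", which is why two apps with different namespaces never collide in the same database.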
Worker#performInline
DO NOT USE THIS IN PRODUCTION. In tests or special cases, you may want to process/work a job in-line. To do so, you can use `await worker.performInline(jobName, args)`. If you are planning on running a job via #performInline, this worker should not be started, nor should you use event emitters to monitor it. This method will not write to redis at all: it will not log errors, modify resque's stats, etc.
Queue Management
const queue = new NodeResque.Queue({ connection: connectionDetails, jobs });
await queue.connect();
API documentation for the main methods you will be using to enqueue jobs to be worked can be found @ node-resque.actionherojs.com.
Failed Job Management
From time to time, your jobs/workers may fail. Resque workers will move failed jobs to a special `failed` queue which stores the original arguments of your job, the failing stack trace, and additional metadata.
You can work with these failed jobs with the following methods:
- `let failedCount = await queue.failedCount()`
  - `failedCount` is the number of jobs in the failed queue.
- `let failedJobs = await queue.failed(start, stop)`
  - `failedJobs` is an array listing the data of the failed jobs. Each element looks like:

{
  "worker": "host:pid",
  "queue": "test_queue",
  "payload": { "class": "slowJob", "queue": "test_queue", "args": [null] },
  "exception": "TypeError",
  "error": "MyImport is not a function",
  "backtrace": ["  at Worker.perform (/path/to/worker:111:24)", "  at <anonymous>"],
  "failed_at": "Fri Dec 12 2014 14:01:16 GMT-0800 (PST)"
}

  - To retrieve all failed jobs, use the arguments: `await queue.failed(0, -1)`
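The `start`/`stop` arguments follow redis `LRANGE`-style inclusive indexing, where `-1` means "through the end". A sketch of that indexing convention (an illustration, not the library's implementation):

```javascript
// Inclusive start/stop indexing over a list, mirroring redis LRANGE
// semantics: stop = -1 means "through the last element".
function rangeInclusive(list, start, stop) {
  const end = stop === -1 ? list.length - 1 : stop;
  return list.slice(start, end + 1);
}

const failedJobs = ["job0", "job1", "job2"];
```

So `failed(0, -1)` returns every failed job, while `failed(0, 1)` returns the first two.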
Failing a Job
We use a try/catch pattern to catch errors in your jobs. If any job throws an uncaught exception, it will be caught, and the job's payload moved to the error queue for inspection. Do not use domain, process.on("exit"), or any other method of "catching" a process crash.
The error payload looks like:
{ worker: 'busted-worker-3',
queue: 'busted-queue',
payload: { class: 'busted_job', queue: 'busted-queue', args: [ 1, 2, 3 ] },
exception: 'ERROR_NAME',
error: 'I broke',
failed_at: 'Sun Apr 26 2015 14:00:44 GMT+0100 (BST)' }
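The try/catch pattern that produces a payload like the one above can be sketched as follows. The field names mirror the example; the wrapper function itself is illustrative, not node-resque's actual worker code:

```javascript
// Run a job's perform and, on an uncaught exception, build a failure payload
// shaped like the one resque stores in the failed queue.
function runAndCapture(workerName, payload, perform) {
  try {
    return { ok: true, result: perform(...payload.args) };
  } catch (error) {
    return {
      ok: false,
      failure: {
        worker: workerName,
        queue: payload.queue,
        payload,
        exception: error.name,
        error: error.message,
        failed_at: new Date().toString(),
      },
    };
  }
}

const outcome = runAndCapture(
  "busted-worker-3",
  { class: "busted_job", queue: "busted-queue", args: [1, 2, 3] },
  () => {
    throw new TypeError("I broke");
  },
);
```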
- `await queue.removeFailed(failedJob)`
  - the input `failedJob` is an expanded node object representing the failed job, retrieved via `queue.failed`
- `await queue.retryAndRemoveFailed(failedJob)`
  - the input `failedJob` is an expanded node object representing the failed job, retrieved via `queue.failed`
  - this method will instantly re-enqueue a failed job back to its original queue, and delete the failed entry for that job
Failed Worker Management
Automatically
By default, the scheduler will check for workers which haven't pinged redis in 60 minutes. If this happens, we will assume the process crashed, and remove it from redis. If this worker was working on a job, we will place it in the failed queue for later inspection. Every worker has a timer running which updates a key in redis every `timeout` (default: 5 seconds). If your job is slow but async, there should be no problem. However, if your job consumes 100% of the CPU of the process, this timer might not fire.
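The stuck-worker check described above boils down to comparing a worker's last ping against `stuckWorkerTimeout`. A sketch of that decision (illustrative, not the scheduler's actual code):

```javascript
// A worker is considered stuck when it has not pinged redis within the
// stuckWorkerTimeout window (default: 60 minutes). Passing `false` disables
// the check entirely, matching the configuration shown below.
function isStuck(lastPingMs, nowMs, stuckWorkerTimeoutMs) {
  if (stuckWorkerTimeoutMs === false) return false; // check disabled
  return nowMs - lastPingMs > stuckWorkerTimeoutMs;
}

const HOUR = 1000 * 60 * 60;
```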
To modify the 60 minute check, change `stuckWorkerTimeout` when configuring your scheduler, e.g.:

const scheduler = new NodeResque.Scheduler({
  stuckWorkerTimeout: 1000 * 60 * 60, // 1 hour, in ms
  connection: connectionDetails,
});
Set your scheduler's `stuckWorkerTimeout = false` to disable this behavior.

const scheduler = new NodeResque.Scheduler({
  stuckWorkerTimeout: false, // will not fail jobs which haven't pinged redis
  connection: connectionDetails,
});
Manually
Sometimes a worker crashes in a severe way, and it doesn't get the time/chance to notify redis that it is leaving the pool (this happens all the time on PaaS providers like Heroku). When this happens, you will not only need to extract the job from the now-zombie worker's "working on" status, but also remove the stuck worker. To aid you in these edge cases, `await queue.cleanOldWorkers(age)` is available.

Because there are no 'heartbeats' in resque, it is impossible for the application to know whether a worker has been working on a long job or whether it is dead. You are required to provide an "age" for how long a worker has been "working"; all workers older than that age will be removed, and the jobs they were working on moved to the error queue (where you can then use `queue.retryAndRemoveFailed` to re-enqueue them).
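The `age` argument acts as a cutoff over each worker's "working since" timestamp. A sketch of the selection logic (illustrative only; the real method also moves the affected jobs to the failed queue):

```javascript
// Select workers whose current job started more than `ageMs` ago; these are
// the candidates cleanOldWorkers-style logic would remove.
function workersOlderThan(workers, nowMs, ageMs) {
  return workers.filter((w) => nowMs - w.workingSince > ageMs);
}

const workers = [
  { name: "host:100", workingSince: 0 },     // started long ago
  { name: "host:200", workingSince: 9000 },  // started recently
];
const stale = workersOlderThan(workers, 10000, 5000);
```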
If you know the name of a worker that should be removed, you can also call `await queue.forceCleanWorker(workerName)` directly, and that will also remove the worker and move any job it was working on into the error queue. This method will still proceed for workers which are only partially in redis, indicating a previous connection failure. In this case, the job which the worker was working on is irrecoverably lost.
Job Schedules
You may want to use node-resque to schedule jobs every minute/hour/day, like a distributed CRON system. There are a number of excellent node packages to help you with this, like node-schedule and node-cron. Node-resque makes it possible to use the package of your choice to schedule jobs.
Assuming you are running node-resque across multiple machines, you will need to ensure that only one of your processes is actually scheduling the jobs. To help you with this, you can inspect which of the scheduler processes is currently acting as leader, and flag only the leader scheduler process to run the schedule. A full example can be found at /examples/scheduledJobs.ts, but the relevant section is:
const NodeResque = require("node-resque");
const schedule = require("node-schedule");
const queue = new NodeResque.Queue({ connection: connectionDetails }, jobs);
const scheduler = new NodeResque.Scheduler({ connection: connectionDetails });
await scheduler.connect();
scheduler.start();
schedule.scheduleJob("10,20,30,40,50 * * * * *", async () => {
// do this job every 10 seconds, CRON style
// we want to ensure that only one instance of this job is scheduled in our environment at once,
// no matter how many schedulers we have running
if (scheduler.leader) {
console.log(">>> enqueuing a job");
await queue.enqueue("time", "ticktock", new Date().toString());
}
});
Plugins
Just like ruby's resque, you can write worker plugins. The 4 hooks you have are `beforeEnqueue`, `afterEnqueue`, `beforePerform`, and `afterPerform`. Plugins are classes which extend `NodeResque.Plugin`. They look like this:
const { Plugin } = require("node-resque");
class MyPlugin extends Plugin {
constructor(...args) {
// @ts-ignore
super(...args);
this.name = "MyPlugin";
}
beforeEnqueue() {
// console.log("** beforeEnqueue")
return true; // should the job be enqueued?
}
afterEnqueue() {
// console.log("** afterEnqueue")
}
beforePerform() {
// console.log("** beforePerform")
return true; // should the job be run?
}
afterPerform() {
// console.log("** afterPerform")
}
}
And then your plugin can be invoked within a job like this:
const jobs = {
add: {
plugins: [MyPlugin],
pluginOptions: {
MyPlugin: { thing: "stuff" },
},
perform: (a, b) => {
let answer = a + b;
return answer;
},
},
};
Notes

- You need to return `true` or `false` on the before hooks. `true` indicates that the action should continue, and `false` prevents it. This is called `toRun`.
- If you are writing a plugin to deal with errors which may occur during your resque job, you can inspect and modify `this.worker.error` in your plugin. If `this.worker.error` is null, no error will be logged in the resque error queue.
- There are a few included plugins, all in the `src/plugins/*` directory. You can write your own and include it like this:
const jobs = {
add: {
plugins: [require("MyPlugin").MyPlugin],
pluginOptions: {
MyPlugin: { thing: "stuff" },
},
perform: (a, b) => {
let answer = a + b;
return answer;
},
},
};
The plugins which are included with this package are:

- `DelayQueueLock` - If a job with the same name, queue, and args is already in the delayed queue(s), do not enqueue it again.
- `JobLock` - If a job with the same name, queue, and args is already running, put this job back in the queue and try later.
- `QueueLock` - If a job with the same name, queue, and args is already in the queue, do not enqueue it again.
- `Retry` - If a job fails, retry it N times before finally placing it into the failed queue.
Multi Worker
node-resque provides a wrapper around the `Worker` class which will auto-scale the number of resque workers. This will process more than one job at a time as long as there is idle CPU within the event loop. For example, if you have a slow job that sends email via SMTP (with low overhead), we can process many jobs at a time, but if you have a math-heavy operation, we'll stick to 1. The `MultiWorker` handles this by spawning more and more node-resque workers and managing the pool.
const NodeResque = require("node-resque");
const connectionDetails = {
pkg: "ioredis",
host: "127.0.0.1",
password: "",
};
const multiWorker = new NodeResque.MultiWorker(
{
connection: connectionDetails,
queues: ["slowQueue"],
minTaskProcessors: 1,
maxTaskProcessors: 100,
checkTimeout: 1000,
maxEventLoopDelay: 10,
},
jobs,
);
// normal worker emitters
multiWorker.on("start", (workerId) => {
console.log("worker[" + workerId + "] started");
});
multiWorker.on("end", (workerId) => {
console.log("worker[" + workerId + "] ended");
});
multiWorker.on("cleaning_worker", (workerId, worker, pid) => {
console.log("cleaning old worker " + worker);
});
multiWorker.on("poll", (workerId, queue) => {
console.log("worker[" + workerId + "] polling " + queue);
});
multiWorker.on("ping", (workerId, time) => {
console.log("worker[" + workerId + "] check in @ " + time);
});
multiWorker.on("job", (workerId, queue, job) => {
console.log(
"worker[" + workerId + "] working job " + queue + " " + JSON.stringify(job),
);
});
multiWorker.on("reEnqueue", (workerId, queue, job, plugin) => {
console.log(
"worker[" +
workerId +
"] reEnqueue job (" +
plugin +
") " +
queue +
" " +
JSON.stringify(job),
);
});
multiWorker.on("success", (workerId, queue, job, result) => {
console.log(
"worker[" +
workerId +
"] job success " +
queue +
" " +
JSON.stringify(job) +
" >> " +
result,
);
});
multiWorker.on("failure", (workerId, queue, job, failure) => {
console.log(
"worker[" +
workerId +
"] job failure " +
queue +
" " +
JSON.stringify(job) +
" >> " +
failure,
);
});
multiWorker.on("error", (workerId, queue, job, error) => {
console.log(
"worker[" +
workerId +
"] error " +
queue +
" " +
JSON.stringify(job) +
" >> " +
error,
);
});
multiWorker.on("pause", (workerId) => {
console.log("worker[" + workerId + "] paused");
});
multiWorker.on("multiWorkerAction", (verb, delay) => {
console.log(
"*** checked for worker status: " +
verb +
" (event loop delay: " +
delay +
"ms)",
);
});
multiWorker.start();
MultiWorker Options
The options available for the multiWorker are:

- `connection`: The redis configuration options (same as worker).
- `queues`: Array of ordered queue names (or `*`) (same as worker).
- `minTaskProcessors`: The minimum number of workers to spawn under this multiWorker, even if there is no work to do. You need at least one, or no work will ever be processed or checked.
- `maxTaskProcessors`: The maximum number of workers to spawn under this multiWorker, even if the queues are long and there is available CPU (the event loop isn't entirely blocked) within this node process.
- `checkTimeout`: How often to check if the event loop is blocked (in ms) (for adding or removing multiWorker children).
- `maxEventLoopDelay`: How long the event loop has to be delayed before considering it blocked (in ms).
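The autoscaling policy those options describe can be sketched as a pure decision function. This is a simplification of MultiWorker's behavior (the real class also tracks whether its workers are idle between checks), but it shows how the options interact:

```javascript
// Decide the next worker count each checkTimeout tick: grow while the event
// loop is responsive and there is queued work; shrink when the loop is
// blocked or there is nothing to do. (Simplified model of MultiWorker.)
function nextWorkerCount(current, eventLoopDelayMs, hasQueuedWork, options) {
  const { minTaskProcessors, maxTaskProcessors, maxEventLoopDelay } = options;
  if (eventLoopDelayMs > maxEventLoopDelay || !hasQueuedWork) {
    return Math.max(minTaskProcessors, current - 1); // back off
  }
  return Math.min(maxTaskProcessors, current + 1); // scale up
}

const options = { minTaskProcessors: 1, maxTaskProcessors: 100, maxEventLoopDelay: 10 };
```

This is why an SMTP-bound workload (tiny event-loop delay) climbs toward `maxTaskProcessors`, while a math-heavy workload (blocked loop) stays near `minTaskProcessors`.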
Presentation
This package was featured heavily in this presentation I gave about background jobs + node.js. It contains more examples!
Acknowledgments
- Most of this code was inspired by / stolen from coffee-resque and coffee-resque-scheduler. Thanks!
- This Resque package aims to be fully compatible with Ruby's Resque and implementations of Resque Scheduler. Other packages from other languages may conflict.
- If you are looking for a UI to manage your Resque instances in nodejs, check out ActionHero's Resque UI