Patterns
========

Here are a few examples of useful patterns that are often implemented with Bull:

- [Message Queue](#message-queue)
- [Returning Job Completions](#returning-job-completions)
- [Reusing Redis Connections](#reusing-redis-connections)
- [Debugging](#debugging)

If you have any other common patterns you want to add, pull request them!


Message Queue
-------------

Bull can also be used as a persistent message queue. This is useful when, for
example, two servers need to communicate with each other. Because a queue
decouples the sender from the receiver, the servers do not need to be online at
the same time, which makes for a very robust communication channel. You can
treat `add` as *send* and `process` as *receive*:

Server A:

```js
var Queue = require('bull');

var sendQueue = Queue("Server B");
var receiveQueue = Queue("Server A");

receiveQueue.process(function(job, done){
  console.log("Received message", job.data.msg);
  done();
});

sendQueue.add({ msg: "Hello" });
```

Server B:

```js
var Queue = require('bull');

var sendQueue = Queue("Server A");
var receiveQueue = Queue("Server B");

receiveQueue.process(function(job, done){
  console.log("Received message", job.data.msg);
  done();
});

sendQueue.add({ msg: "World" });
```


Returning Job Completions
-------------------------

A common pattern is to have a cluster of queue processors that just process
jobs as fast as they can, and some other service that needs to take the results
of those processors and do something with them, such as storing the results in
a database.

The most robust and scalable way to accomplish this is to combine the standard
job queue with the message queue pattern: a service sends jobs to the cluster
simply by opening a job queue and adding jobs to it, and the cluster starts
processing them as fast as it can. Every time a job completes in the cluster, a
message with the result data is sent to a results message queue. Another
service listens to that queue and stores the results in a database.
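
A minimal sketch of this pattern, using the same callback-style API as the
examples above. The queue names (`work-queue`, `results-queue`) and the
doubling "work" are illustrative, not part of Bull's API; in practice the
worker and the storage service would run in separate processes:

```js
var Queue = require('bull');

// Worker side: consume jobs and publish each result as a message.
var jobQueue = Queue('work-queue');       // hypothetical queue name
var resultsQueue = Queue('results-queue'); // hypothetical queue name

jobQueue.process(function(job, done){
  var result = job.data.x * 2; // placeholder for the real work
  resultsQueue.add({ input: job.data.x, result: result });
  done();
});

// Storage side (normally a separate service): receive results and persist them.
resultsQueue.process(function(job, done){
  // e.g. insert job.data.result into a database here
  done();
});

jobQueue.add({ x: 21 });
```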


Reusing Redis Connections
-------------------------

A standard queue requires **3 connections** to the Redis server. In some situations you might want to re-use connections, for example on Heroku where the connection count is restricted. You can do this with the `createClient` option in the `Queue` constructor:

```js
var redis = require('ioredis');
var Queue = require('bull');

var client = new redis();
var subscriber = new redis();

var opts = {
  redis: {
    opts: {
      createClient: function(type){
        // Share the 'client' and 'subscriber' connections between queues;
        // any other connection type gets its own instance.
        switch(type){
          case 'client':
            return client;
          case 'subscriber':
            return subscriber;
          default:
            return new redis();
        }
      }
    }
  }
};
var queueFoo = new Queue('foobar', opts);
var queueQux = new Queue('quxbaz', opts);
```


Debugging
---------

To see debug statements, set or add `bull` to the `NODE_DEBUG` environment variable:

```bash
export NODE_DEBUG=bull
```

```bash
NODE_DEBUG=bull node ./your-script.js
```