Load balance of flowable jobs between different engines

Hello,

I have the following setup: 3 separate Flowable engine servers run on different VMs and share a single database to execute processes. Every engine has an async job executor that executes the jobs of started processes. I noticed in my statistics monitoring that frequently one of the engine servers hits the maximum number of running job executor threads while the others stay nearly idle, with only 1-2 running job executor threads. My question is: how are jobs distributed between the Flowable engines for execution? Is there any configuration for load balancing between different engines?

An engine will try to execute jobs on the local node if there is still room in its internal queue. If not, the job will be picked up by another node.

You can change the asyncExecutorThreadPoolQueueSize setting for this (by default it’s 100).
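For reference, in a Spring-based setup these values can be set on the engine configuration. A minimal sketch, assuming Flowable 6.x setter names (verify against your version’s documentation):

```java
// Sketch: tuning the async executor on a Flowable 6.x engine configuration.
// Setter names assume SpringProcessEngineConfiguration in Flowable 6.x --
// check your version before relying on them.
SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
config.setAsyncExecutorActivate(true);
config.setAsyncExecutorCorePoolSize(8);
config.setAsyncExecutorMaxPoolSize(32);
config.setAsyncExecutorThreadPoolQueueSize(100); // default is 100
```

The same properties are typically also exposed via Spring Boot configuration properties, depending on how the engine is bootstrapped.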

Thank you @joram for the answer. Actually, I had already set a custom queue size for the async job executor on every node. Each node currently has the following configuration:
async job executor queue size: 16, corePoolSize: 8, maxPoolSize: 32

Can you give some advice about configuring the queue, core pool, and max pool sizes when I have 3 separate Flowable engines?

As usual, the answer is ‘it depends’ ;-). It depends on the type of your processes and how many async jobs/timers they have.

On a decent-sized machine, you can make the queue size larger. As long as the queue isn’t full, jobs will be picked up by the local node. Conversely, making it small will force the work to spread out over the nodes.

8-16 threads is a decent default, unless your jobs take a long time to complete, in which case you need more, because long-running logic will block a thread for a long time.
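The interaction between queue size, core pool, and max pool follows standard `java.util.concurrent.ThreadPoolExecutor` semantics, which Flowable’s async executor builds on: tasks fill the core threads first, then the queue, then extra threads up to the max, and anything beyond that is rejected. This standalone sketch (plain JDK, no Flowable dependency; the small sizes are illustrative only) shows why capacity is maxPoolSize + queueSize:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class QueueDemo {
    public static void main(String[] args) throws Exception {
        // Same shape as the async executor's pool: core=2, max=4, queue capacity=2.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                2, 4, 5, TimeUnit.SECONDS, new ArrayBlockingQueue<>(2));

        AtomicInteger rejected = new AtomicInteger();
        CountDownLatch release = new CountDownLatch(1);

        // Submit 8 long-running "jobs". The pool accepts maxPoolSize + queueSize = 6;
        // the remaining 2 are rejected. In Flowable, a rejected job stays in the
        // database and is eventually acquired by another node.
        for (int i = 0; i < 8; i++) {
            try {
                pool.execute(() -> {
                    try {
                        release.await(); // simulate a long-running job
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                });
            } catch (RejectedExecutionException e) {
                rejected.incrementAndGet();
            }
        }
        System.out.println("rejected=" + rejected.get()); // prints rejected=2

        release.countDown();
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

This is why a small local queue forces load balancing: once maxPoolSize + queueSize jobs are in flight on a node, that node stops accepting, and the remaining jobs are acquired elsewhere.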

The best thing you can do is, as you were already doing, monitor and tweak accordingly. If you see threads being blocked, then adding threads (and making the queue size smaller), as you did, makes sense.

Thank you for your time @joram. I have added additional monitoring for the asyncJobExecutorQueue and will adjust the queue size when needed.