Multiple engines - Multiple instance subprocess

Hello,

I have a BPMN with a multiple instances subprocess.
This subprocess has 2 service tasks (quite fast; they make simple calls to a remote REST API).
This subprocess may process a list of 200K items with its multiInstanceLoop. Each item can be processed independently, so the loop can be parallelized (isSequential="false").
With a single instance of the engine, the full treatment takes around 1 minute per 1000 items.
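For reference, the subprocess is declared along these lines (a simplified sketch; ids, delegate expressions and the `items` collection variable are placeholders):

```xml
<!-- simplified sketch; ids and variable names are placeholders -->
<subProcess id="handleItems">
  <multiInstanceLoopCharacteristics isSequential="false"
      flowable:collection="items" flowable:elementVariable="item" />
  <serviceTask id="callApi1" flowable:delegateExpression="${restCall1}" />
  <serviceTask id="callApi2" flowable:delegateExpression="${restCall2}" />
  <!-- sequence flows omitted -->
</subProcess>
```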

I tried running multiple instances of the Flowable engine, and it looks like the default behaviour is that the engine entering the subprocess processes every loop iteration (the load is not spread across the different engines).

I tried many configuration changes (global acquire lock; default-async-job-acquire-wait-time; max-async-jobs-due-per-acquisition; async-job-lock-time…), but when I managed to get multiple engines working on the loop at the same time, it was actually slower than with a single engine and it ended up getting stuck… probably due to optimistic locking.
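For reference, these are the kinds of knobs I was tweaking in my Spring Boot configuration (values are examples only; the exact property keys and value formats may differ between Flowable versions, so please double-check them):

```properties
# example values only -- verify the exact property names for your Flowable version
flowable.process.async.executor.global-acquire-lock-enabled=true
flowable.process.async.executor.default-async-job-acquire-wait-time=PT10S
flowable.process.async.executor.max-async-jobs-due-per-acquisition=512
flowable.process.async.executor.async-job-lock-time=PT5M
```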

1 - With my scenario (a simple subprocess with only service tasks; a fast, parallelizable loop; 200K iterations), is it a good idea to try to spread the load over multiple instances of the Flowable engine?

2 - If yes, could somebody suggest a suitable configuration?

Thanks

Hey @marcberger,

Depending on whether or not you have configured the service tasks to be parallel, Flowable will or will not run them on multiple engines.

Flowable runs a single transaction on one node only, even if you are using a parallel multi-instance. The parallel multi-instance is business parallelism, not technical parallelism.

Nevertheless, you are talking about executing 200K items in a single multi-instance subprocess. I'm not sure you should do it like that. Why don't you write dedicated code to handle those 200K executions, or spread them over multiple processes, i.e. each item is one process instance that you start asynchronously, so the load is distributed across multiple nodes.
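If it helps, here is a minimal sketch of that fan-out idea. The Flowable calls in the comments are from memory, and `startItemProcessAsync` is a hypothetical stub so the sketch is self-contained; verify the real API against your Flowable version:

```java
import java.util.Arrays;
import java.util.List;

public class FanOutSketch {

    // Hypothetical stub. In Flowable this would be something like:
    //   runtimeService.createProcessInstanceBuilder()
    //       .processDefinitionKey("handleItem")
    //       .variable("item", item)
    //       .startAsync();
    // startAsync() only inserts the instance plus an async job, so whichever
    // engine node acquires the job executes it.
    static void startItemProcessAsync(String item) {
        // no-op so the sketch compiles and runs on its own
    }

    // Start one independent process instance per item. Because each start is
    // just a job insert, the job executors on all nodes share the load,
    // unlike a single parallel multi-instance loop that stays on one node.
    public static int startAll(List<String> items) {
        int started = 0;
        for (String item : items) {
            startItemProcessAsync(item);
            started++;
        }
        return started;
    }

    public static void main(String[] args) {
        List<String> items = Arrays.asList("item-1", "item-2", "item-3");
        System.out.println("started " + startAll(items) + " process instances");
    }
}
```

With 200K items you would likely batch the starts (and throttle them) rather than firing them all in one loop, but the distribution mechanism is the same.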

Cheers,
Filip