Limit the number of parallel multi-instances

Hello flowable experts,

In my diagram, I have a lot of call activities, which are multi instance and are executed in parallel (isSequential="false").
Consider the following example, which is one of the call activities in my diagram:

    <callActivity id="deleteServicesCallActivity" name="Delete Services Sub Process" flowable:async="true" calledElement="deleteServiceSubProcess" flowable:calledElementType="key" flowable:inheritVariables="true" flowable:completeAsync="true" flowable:fallbackToDefaultTenant="false">
      <flowable:in source="serviceToDelete" target="serviceToDelete"></flowable:in>
      <multiInstanceLoopCharacteristics isSequential="false" flowable:collection="servicesToDelete" flowable:elementVariable="serviceToDelete"></multiInstanceLoopCharacteristics>
    </callActivity>

As my application might execute a lot of process instances, I want to limit the number of parallel call activities per process instance. For example, if the servicesToDelete collection contains 100 elements, I want to process 10 in parallel and have the others "wait in the queue" until a running instance (respectively an activity) completes. What I imagine as the perfect option would be to specify a value for nrOfActiveInstances, in a similar way to loopCardinality.

My current solution is to use two nested call activities: the outer one executes the inner one sequentially until all elements are processed, and the inner one processes up to 10 elements in parallel.
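The preparation step for this workaround can be sketched as plain Java: split the full collection into batches, set the list of batches as the collection of the outer sequential multi-instance, and let the inner parallel multi-instance iterate over each batch. The class and method names here are hypothetical, not part of the Flowable API; in a real process this would typically run in a service task or expression before the outer call activity.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: splits a collection into batches of at most
// `batchSize` elements. The outer sequential multi-instance would use the
// returned list of batches as its flowable:collection; the inner parallel
// multi-instance iterates over one batch at a time.
public class BatchSplitter {

    public static <T> List<List<T>> partition(List<T> items, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < items.size(); i += batchSize) {
            // Copy the sub-list so each batch is an independent,
            // serializable process variable.
            batches.add(new ArrayList<>(
                    items.subList(i, Math.min(i + batchSize, items.size()))));
        }
        return batches;
    }
}
```

With 100 services and a batch size of 10, this yields 10 batches, so at most 10 inner call activities run in parallel at any time.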

Even though I have a possible solution, I am looking for a better and simpler approach, so any advice would be highly appreciated.

Thank you in advance,

The number of instances for a parallel multi-instance is calculated when the execution arrives at the activity. So no, that number is not re-evaluated once the instances have been determined. The only alternative I can see is to use a second collection (containing a subset of the elements) and then loop back to the call activity until the full collection has been processed.
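The core of that "second collection" idea can be sketched as follows. The variable and class names are assumptions for illustration: before each pass, a service task or execution listener would move the next chunk out of the remaining elements into the subset variable used as the multi-instance's flowable:collection, and an exclusive gateway would loop back while elements remain.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper for the loop-back approach: removes up to
// `maxActive` elements from `remaining` and returns them as the subset
// collection for the next pass over the call activity. The process loops
// back via an exclusive gateway while `remaining` is non-empty.
public class ChunkTaker {

    public static <T> List<T> takeNext(List<T> remaining, int maxActive) {
        List<T> chunk = new ArrayList<>();
        while (!remaining.isEmpty() && chunk.size() < maxActive) {
            chunk.add(remaining.remove(0));
        }
        return chunk;
    }
}
```

Each pass then runs at most `maxActive` call activity instances in parallel; when a pass completes, the gateway routes back and the next chunk is taken until `remaining` is empty.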

Another possibility would be to use the batch service instead of multi-instance behavior.