I have a case where I need to lock process execution before working with the associated business entity.
To prevent a concurrent process engine instance from acquiring timer jobs, I implemented the same logic that ExecuteAsyncRunnable uses to call updateProcessInstanceLockTime. That way I can be sure the process engine will not execute the process's jobs.
It works, but when ExecuteAsyncRunnable catches a FlowableOptimisticLockingException, it tries to unlock the job so it can be acquired later or by another node, and the JobManager reinserts the job.
But by that time the execution may already be finished.
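For context, the locking pattern I mimicked can be sketched as a compare-and-set on the process instance's lock expiry time. This is only an illustrative model, not Flowable's actual code; the class and method names here are hypothetical, and the real engine does this with a LOCK_TIME_ column and an optimistic-locking update on ACT_RU_EXECUTION:

```java
import java.time.Instant;
import java.util.concurrent.ConcurrentHashMap;

// Illustrative model (hypothetical names): a lock is taken by writing an
// expiry timestamp; a concurrent writer that sees a still-valid lock fails,
// which in Flowable surfaces as a FlowableOptimisticLockingException.
class ProcessInstanceLocks {
    private final ConcurrentHashMap<String, Instant> lockExpiry = new ConcurrentHashMap<>();

    /** Try to lock the process instance until 'expiry'; return true on success. */
    boolean tryLock(String processInstanceId, Instant now, Instant expiry) {
        // compute() is atomic per key: only replace an absent or expired lock
        Instant result = lockExpiry.compute(processInstanceId, (id, current) ->
                (current == null || !current.isAfter(now)) ? expiry : current);
        return result.equals(expiry);
    }

    void unlock(String processInstanceId) {
        lockExpiry.remove(processInstanceId);
    }
}
```

While the lock is held and not expired, a second locker (e.g. the async executor on another node) gets `false` instead of the job.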
The error is:
### Error updating database. Cause: org.postgresql.util.PSQLException: ERROR: insert or update on table "act_ru_job" violates foreign key constraint "act_fk_job_execution" Detail: Key (execution_id_)=(15383588) is not present in table "act_ru_execution".
- Should the Process Engine take care of avoiding this exception, or should I override the DefaultJobManager implementation?
- Is this the correct way to lock a process and prevent timers from being triggered?
- Would it be possible to add a mechanism inside the engine that can freeze processing (for an execution, a process, or the whole root process) while we perform some operations and then trigger the execution, for example?
I've read this section of the documentation: Async Executor https://flowable.com/open-source/docs/bpmn/ch18-Advanced/#async-executor
I’ve analyzed how TakeOutgoingSequenceFlowsOperation cleans up executions.
It calls ExecutionEntityManager's deleteRelatedDataForExecution, which in turn calls deleteJobs. From there I understood which jobs and timer jobs are involved and how they are retrieved: child executions are used.
Then I looked at AbstractSetProcessInstanceStateCmd to understand how jobs are moved to suspended jobs.
Okay, so now I can fetch an execution with its child executions and suspend all of their jobs and timer jobs.
Then I can do my work with the execution, sure that the BPM engine will not execute any timers for the execution whose jobs I've suspended.
After all my operations are done, I can find the suspended jobs and activate those that still exist (jobs may have been deleted by process execution), just like AbstractSetProcessInstanceStateCmd does. The only difference is that I use the executionId rather than the processInstanceId.
I guess this is a correct approach.
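The suspend / do-work / reactivate flow above can be modeled with a minimal sketch. This is illustrative only: the real implementation would go through Flowable's job services inside a command, and the class and method names below are simplified stand-ins. The key point it shows is reactivating only the suspended jobs that still exist, which avoids reinserting a job whose execution was deleted in the meantime:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative model (hypothetical names): jobs keyed by id, each tied to an
// execution. Suspending moves an execution's jobs aside so the async executor
// cannot pick them up; activation restores only jobs that still exist.
class JobSuspender {
    static class Job {
        final String id;
        final String executionId;
        Job(String id, String executionId) { this.id = id; this.executionId = executionId; }
    }

    private final Map<String, Job> activeJobs = new HashMap<>();
    private final Map<String, Job> suspendedJobs = new HashMap<>();

    void addJob(Job job) { activeJobs.put(job.id, job); }

    /** Move every active job of the given executions to the suspended set. */
    List<String> suspendJobsForExecutions(List<String> executionIds) {
        List<String> moved = new ArrayList<>();
        for (Job job : List.copyOf(activeJobs.values())) {
            if (executionIds.contains(job.executionId)) {
                activeJobs.remove(job.id);
                suspendedJobs.put(job.id, job);
                moved.add(job.id);
            }
        }
        return moved;
    }

    /** Simulates the engine completing an execution and deleting its job. */
    void deleteJob(String jobId) {
        activeJobs.remove(jobId);
        suspendedJobs.remove(jobId);
    }

    /** Reactivate only the suspended jobs that still exist. */
    List<String> activateJobs(List<String> jobIds) {
        List<String> activated = new ArrayList<>();
        for (String jobId : jobIds) {
            Job job = suspendedJobs.remove(jobId); // null if deleted meanwhile
            if (job != null) {
                activeJobs.put(job.id, job);
                activated.add(jobId);
            }
        }
        return activated;
    }
}
```

In the real engine the suspend step would collect the execution's child executions first (as TakeOutgoingSequenceFlowsOperation does when deleting), then move both jobs and timer jobs to the suspended state.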
Yes, reading your first post, the thought of suspending the process (and thus the jobs) popped up for me too.
As you found out, the locking is quite specific, and the logic that handles the optimistic locking exception would need to be changed, because it assumes a different usage for that lock.