FlowableOptimisticLockingException during update of PropertyEntity

Hello Flowable Experts,

I am experiencing strange behaviour while trying to execute a lot of processes on different JVMs using the same database. For this I am using one ProcessEngineConfiguration.
The problem comes when I am trying to insert a new variable into the database from one of my processes. I have attached the logs of the exception below.

Caused by: org.flowable.common.engine.api.FlowableOptimisticLockingException: PropertyEntity[name=next.dbid, value=16930001] was updated by another transaction concurrently
at org.flowable.common.engine.impl.db.DbSqlSession.flushUpdates(DbSqlSession.java:505)
at org.flowable.common.engine.impl.db.DbSqlSession.flush(DbSqlSession.java:292)
at org.flowable.common.engine.impl.interceptor.CommandContext.flushSessions(CommandContext.java:191)
at org.flowable.common.engine.impl.interceptor.CommandContext.close(CommandContext.java:61)
at org.flowable.common.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:81)
at org.flowable.common.spring.SpringTransactionInterceptor.lambda$execute$0(SpringTransactionInterceptor.java:56)
at org.springframework.transaction.support.TransactionTemplate.execute(TransactionTemplate.java:140)
at org.flowable.common.spring.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:56)
at org.flowable.common.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:30)
at org.flowable.common.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:56)
at org.flowable.engine.impl.db.DbIdGenerator.getNewBlock(DbIdGenerator.java:44)
at org.flowable.engine.impl.db.DbIdGenerator.getNextId(DbIdGenerator.java:37)
at org.flowable.common.engine.impl.db.DbSqlSession.insert(DbSqlSession.java:79)
at org.flowable.common.engine.impl.db.AbstractDataManager.insert(AbstractDataManager.java:75)
at org.flowable.variable.service.impl.persistence.entity.AbstractEntityManager.insert(AbstractEntityManager.java:54)
at org.flowable.variable.service.impl.persistence.entity.AbstractEntityManager.insert(AbstractEntityManager.java:49)
at org.flowable.variable.service.impl.persistence.entity.VariableScopeImpl.createVariableInstance(VariableScopeImpl.java:893)
at org.flowable.engine.impl.persistence.entity.ExecutionEntityImpl.createVariableInstance(ExecutionEntityImpl.java:722)
at org.flowable.engine.impl.persistence.entity.ExecutionEntityImpl.createVariableLocal(ExecutionEntityImpl.java:744)
at org.flowable.engine.impl.persistence.entity.ExecutionEntityImpl.setVariable(ExecutionEntityImpl.java:618)
at org.flowable.engine.impl.persistence.entity.ExecutionEntityImpl.setVariable(ExecutionEntityImpl.java:611)
at org.flowable.engine.impl.persistence.entity.ExecutionEntityImpl.setVariable(ExecutionEntityImpl.java:586)
at org.flowable.variable.service.impl.persistence.entity.VariableScopeImpl.setVariable(VariableScopeImpl.java:661)
at com.sap.cloud.lm.sl.cf.process.steps.StepsUtil.setAsJsonBinary(StepsUtil.java:1143)
at com.sap.cloud.lm.sl.cf.process.steps.StepsUtil.setAppsToDeploy(StepsUtil.java:265)
at com.sap.cloud.lm.sl.cf.process.steps.PrepareToUndeployStep.executeStep(PrepareToUndeployStep.java:38)
at com.sap.cloud.lm.sl.cf.process.steps.SyncFlowableStep.execute(SyncFlowableStep.java:60)

Looking more deeply, it seems that the property next.dbid was updated concurrently. I think the chance of this happening is small, but I hit it.
Here is my explanation: while the engine was executing the process, two elements (variables, executions, etc.) were scheduled to be stored in the database. For both of those elements the current id block was exhausted, so a new block had to be fetched in order to get ids for the inserts. Here comes the problem: both fetches tried to update the next.dbid property in the database at exactly the same time → BAM, OptimisticLockingException.

Could you verify my scenario?
Also, would it be possible to add an ignore mechanism like the one in AcquireAsyncJobsDueRunnable, which ignores the OptimisticLockingException when two or more job executors are executing jobs against the same database?

Thanks and best regards,
Encho

Hey Encho,

Your analysis is correct. The DbIdGenerator is the one used for creating ids, and it uses the next.dbid property.

For your scenario (a lot of processes) I would advise switching to the StrongUuidGenerator. In our Spring Boot configuration this is already the default IdGenerator.
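
For example, configuring it programmatically could look roughly like this (a minimal sketch; the exact package of StrongUuidGenerator can differ between Flowable versions, so check your classpath):

import org.flowable.engine.ProcessEngine;
import org.flowable.engine.ProcessEngineConfiguration;
// Assumed location for recent 6.x releases; older versions keep the class elsewhere.
import org.flowable.common.engine.impl.persistence.StrongUuidGenerator;

public class EngineSetup {

    public static ProcessEngine buildEngine() {
        ProcessEngineConfiguration configuration = ProcessEngineConfiguration
                .createProcessEngineConfigurationFromResourceDefault();
        // UUIDs are generated locally, so there is no shared next.dbid counter
        // and therefore nothing for concurrent engines to fight over.
        configuration.setIdGenerator(new StrongUuidGenerator());
        return configuration.buildProcessEngine();
    }
}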

Is it even an option for you to change the IdGenerator?

Cheers,
Filip

Hello Filip,

Many thanks for the fast response! :slight_smile:
We are using these ids to show them to our users (for instance, the process instance id). If the ids are UUIDs they would be hard to read, which might cause us additional complaints - I will discuss it in more detail with my colleagues.
On the other hand, what do you think about the proposal for ignoring/retrying the next id generation?

Cheers,
Encho

In the current DbIdGenerator, ignoring is not a good option since it might lead to primary key clashes (which is not what you want). However, you can implement your own IdGenerator that suits your needs. Depending on which DB you are using, you can even do something similar to the DbIdGenerator but use a sequence instead.
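
To make that concrete, a sequence-based generator could be sketched roughly like this (assuming PostgreSQL, a sequence named flowable_id_seq that you create yourself, and a DataSource you wire in; only the IdGenerator interface is Flowable API, and its package can differ per version):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

import javax.sql.DataSource;

import org.flowable.common.engine.api.FlowableException;
import org.flowable.common.engine.impl.cfg.IdGenerator;

// Sketch of a sequence-backed IdGenerator: the database hands out the ids,
// so there is no next.dbid property and no optimistic locking involved.
public class SequenceIdGenerator implements IdGenerator {

    private final DataSource dataSource;

    public SequenceIdGenerator(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    @Override
    public String getNextId() {
        // flowable_id_seq is an assumed sequence, not something Flowable creates.
        try (Connection connection = dataSource.getConnection();
                Statement statement = connection.createStatement();
                ResultSet resultSet = statement.executeQuery("SELECT nextval('flowable_id_seq')")) {
            resultSet.next();
            return Long.toString(resultSet.getLong(1));
        } catch (SQLException e) {
            throw new FlowableException("Could not get next id from sequence", e);
        }
    }
}

In practice you would probably fetch a whole range per roundtrip (like the DbIdGenerator does with its block size) instead of hitting the sequence for every single id.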

Cheers,
Filip

Hey Filip,

I think I did not express myself correctly. What I want to achieve is to retry the whole mechanism for retrieving the id block from the database.
More specifically, here is the code from the DbIdGenerator that I want to retry:

protected synchronized void getNewBlock() {
    IdBlock idBlock = commandExecutor.execute(commandConfig, new GetNextIdBlockCmd(idBlockSize));
    this.nextId = idBlock.getNextId();
    this.lastId = idBlock.getLastId();
}

The retry would use a fixed number of attempts, for instance 3 → if the 3rd attempt also fails, the exception would be propagated.
What do you think about this? If I try something like this, could it cause a primary key constraint violation?
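
To be concrete, something along these lines is what I imagine (just a sketch; OptimisticLockRetry is a helper I would write myself, it is not Flowable API):

import java.util.function.Supplier;

import org.flowable.common.engine.api.FlowableOptimisticLockingException;

// Generic retry helper: runs the given action and retries it a fixed number
// of times whenever the engine reports a concurrent update.
public final class OptimisticLockRetry {

    private OptimisticLockRetry() {
    }

    public static <T> T retry(int maxAttempts, Supplier<T> action) {
        FlowableOptimisticLockingException lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return action.get();
            } catch (FlowableOptimisticLockingException e) {
                lastFailure = e;
            }
        }
        // All attempts failed: propagate the last optimistic locking failure.
        throw lastFailure;
    }
}

getNewBlock() would then just wrap the existing command execution, e.g. OptimisticLockRetry.retry(3, () -> commandExecutor.execute(commandConfig, new GetNextIdBlockCmd(idBlockSize))).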

Cheers,
Encho :slight_smile:

In theory that should work, but note that each of these command executions is a new transaction (‘requires new’ semantics), so you need to be careful where you do these retries. Also make sure the transaction is properly closed again before starting the new one, as otherwise the database will complain very quickly.

Hello,

Thanks for the answers!
We found a way forward - we started using UUIDs and the problem is now gone.
Thanks for the suggestions.

Best regards,
Encho