Currently, I have Flowable (the Flowable WARs) running on one server, which talks to one MySQL DB. My question is: would Flowable work fine if we run it in a clustered environment where the WARs are deployed on 2 Tomcat servers and the WARs on both servers talk to one DB server? We plan to have these servers sit behind a load balancer, with requests routed to the servers in a round-robin manner.
Thanks @martin.grofcik for answering. I went through an article about Activiti ( http://www.mastertheboss.com/jboss-jbpm/activiti-bpmn/clustering-activiti-bpmn ) which you mentioned in one of your replies, saying that Flowable follows the same architecture. Can you please explain what exactly you meant by job executions here, i.e. what issues can occur if jobs are running on multiple nodes?
When Flowable runs on multiple nodes (e.g. 2 nodes), the job executor can run on one of them or on both of them (or on none of them, but that does not make sense). When the job executor runs on both nodes, the jobs are fetched from the same database. Optimistic locking is implemented in the parallel access to the jobs. (Sometimes you can see an optimistic locking exception in the logs - normal optimistic locking behaviour.)
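To illustrate the flag involved, here is a minimal sketch of a standalone engine configuration where each node points to the same MySQL database and decides for itself whether the async (job) executor is started. The JDBC URL, credentials and driver are placeholder assumptions, not values from this thread, and the Flowable UI WARs build this configuration for you via Spring, so treat this purely as an illustration:

import org.flowable.engine.ProcessEngine;
import org.flowable.engine.ProcessEngineConfiguration;

public class ClusterNodeEngine {
    public static ProcessEngine build(boolean runJobsOnThisNode) {
        // Both nodes point to the same MySQL database; the flag decides whether
        // this node starts the async (job) executor.
        return ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration()
                .setJdbcUrl("jdbc:mysql://db-host:3306/flowable")  // placeholder shared DB
                .setJdbcUsername("flowable")                       // placeholder credentials
                .setJdbcPassword("flowable")
                .setJdbcDriver("com.mysql.jdbc.Driver")
                .setAsyncExecutorActivate(runJobsOnThisNode)
                .buildProcessEngine();
    }
}

With the executor active on both nodes, both poll the same job tables, and the optimistic locking described above decides which node actually acquires and runs a given job.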
Currently Flowable is running on one Tomcat server pointing to one DB, but sometimes I run into the following error.
– Logs from Catalina.log
11:27:08,375 [localhost-startStop-1] WARN com.mchange.v2.resourcepool.BasicResourcePool - Bad pool size config, start 3 < min 10. Using 10 as start.
11:27:09,964 [localhost-startStop-1] INFO org.flowable.engine.impl.cfg.ProcessEngineConfigurationImpl - Executing configure() of class org.flowable.form.spring.configurator.SpringFormEngineConfigurator (priority:10000)
INFO 10/16/17 11:27 AM:liquibase: Waiting for changelog lock…
INFO 10/16/17 11:27 AM:liquibase: Waiting for changelog lock…
INFO 10/16/17 11:27 AM:liquibase: Waiting for changelog lock…
Is this the error you are talking about? There is no specific way to reproduce it, but it happens when I have to restart the server. Can you suggest how this can be fixed?
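Two separate things show up in that log. The WARN line is c3p0 complaining that the configured initial pool size (3) is smaller than the minimum pool size (10), so it bumps the start size up to 10 on its own; the "Waiting for changelog lock…" lines are Liquibase waiting on its changelog lock, which typically happens when a previous startup was interrupted while holding it. If you configure the c3p0 pool yourself, a sketch of a consistent pool configuration looks like this (the sizes, driver and JDBC values are assumptions to adjust to your setup, not values taken from this thread):

import java.beans.PropertyVetoException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

public class PoolConfigSketch {
    public static ComboPooledDataSource dataSource() throws PropertyVetoException {
        // Sketch of a c3p0 pool whose initial size is not below its minimum size,
        // which avoids the "Bad pool size config, start 3 < min 10" warning.
        ComboPooledDataSource ds = new ComboPooledDataSource();
        ds.setDriverClass("com.mysql.jdbc.Driver");           // placeholder driver class
        ds.setJdbcUrl("jdbc:mysql://db-host:3306/flowable");  // placeholder URL
        ds.setUser("flowable");                                // placeholder credentials
        ds.setPassword("flowable");
        ds.setInitialPoolSize(10);  // keep the start size >= minPoolSize
        ds.setMinPoolSize(10);
        ds.setMaxPoolSize(50);
        return ds;
    }
}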