I have a bit of code that, rather than hitting the Flowable engine with 50 rapid-fire requests, schedules the task update actions to run later via the JobService's scheduleAsyncJob(). I'm connected to the Flowable engine through a HikariCP connection pool configured with a pool size of 20 and an idle timeout of 200 milliseconds. Below is the code showing how I'm setting up the job to be run later:
public void scheduleAsyncJob(ProcessEngine engine, String jobHandlerType, String taskId, String action, String completedBy) {
    // First off, we need the management service out of the process engine
    ManagementService managementService = engine.getManagementService();
    // Create an anonymous command to execute
    managementService.executeCommand(new Command<String>() {
        public String execute(CommandContext commandContext) {
            // We need the job service to be able to schedule the job against
            JobService jobService = CommandContextUtil.getJobService(commandContext);
            // We get the task service to pull the task object from Flowable, as we need some of the process data for logging later on
            TaskService taskService = engine.getTaskService();
            List<Task> taskList = taskService.createTaskQuery().includeProcessVariables().includeTaskLocalVariables().taskId(taskId).list();
            Task task = taskList.get(0);
            String processId = task.getProcessInstanceId();
            String taskCategory = task.getCategory();
            Map<String, Object> processVars = task.getProcessVariables();
            String bussObjId = processVars.get("bussObjId").toString();
            JobEntity job = jobService.createJob();
            job.setJobHandlerType(jobHandlerType);
            // The task completion details need to be encoded into a JSON packet, so that we can dynamically expand this later on with fewer headaches
            JsonObject jsonConfig = Json.createObjectBuilder()
                .add("taskId", taskId)
                .add("action", action)
                .add("completedBy", completedBy)
                .build();
            job.setJobHandlerConfiguration(jsonConfig.toString());
            // Once the job gets scheduled, log that it was scheduled, so we can track when it was scheduled vs. when the system thinks it executed
            jobService.createAsyncJob(job, false);
            jobService.scheduleAsyncJob(job);
            return "scheduled";
        }
    });
}
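For context, the calling side just loops over the incoming task updates and schedules one job per task. A simplified sketch of what I mean (TaskUpdateRequest and its getters are placeholders standing in for my real request objects, and the handler type string is a placeholder too):

// Simplified calling-side sketch; TaskUpdateRequest is a placeholder for my real request object
for (TaskUpdateRequest update : incomingUpdates) {
    scheduleAsyncJob(processEngine, "task-complete-handler", update.getTaskId(), update.getAction(), update.getCompletedBy());
}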
When the job executes, it should be invoking TaskService.complete() to finish my user tasks. Here's the code I wrote to handle the job execution:
public void execute(JobEntity job, String configuration, VariableScope variableScope, CommandContext commandContext) {
    // Start out by creating a JSON parser, because our data in the configuration object is packaged in a JSON payload
    Reader jsonReader = new StringReader(configuration);
    JsonReader jsonConfig = Json.createReader(jsonReader);
    JsonObject jsonObj = jsonConfig.readObject();
    // We need the taskId, the action taken, the user who took that action, and the token. The token may or may not be provided,
    // but the other values should all be here.
    String taskId = jsonObj.getString("taskId");
    String action = jsonObj.getString("action");
    String completedBy = jsonObj.getString("completedBy");
    // Get the task service and update the task variables as well as relevant process-level variables
    TaskService taskService = CommandContextUtil.getProcessEngineConfiguration(commandContext).getTaskService();
    if (action != null && action.length() > 0) {
        taskService.setVariableLocal(taskId, "action", action);
    }
    taskService.setVariableLocal(taskId, "completedBy", completedBy);
    // Complete the task, and log that our execution took place
    // But remember that it's possible for our command to actually fail (sometimes the task just doesn't work)
    CommandContextUtil.getProcessEngineConfiguration(commandContext).getTaskService().complete(taskId);
}
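For completeness, that execute() method sits inside a custom JobHandler implementation shaped roughly like the sketch below (the class name and type string are placeholders, not my real names), and the handler is registered on the engine configuration before buildProcessEngine() is called; I'm assuming setCustomJobHandlers() on ProcessEngineConfigurationImpl here, with processEngineConfig being the configuration object shown further down:

// Handler class sketch; "TaskCompleteJobHandler" and the TYPE string are placeholder names
public class TaskCompleteJobHandler implements JobHandler {

    // This string is what gets passed in as jobHandlerType when the job is created above
    public static final String TYPE = "task-complete-handler";

    @Override
    public String getType() {
        return TYPE;
    }

    @Override
    public void execute(JobEntity job, String configuration, VariableScope variableScope, CommandContext commandContext) {
        // ... the execute() body shown above ...
    }
}

// Registration sketch (assumed wiring: customJobHandlers on ProcessEngineConfigurationImpl, set before buildProcessEngine())
((ProcessEngineConfigurationImpl) processEngineConfig)
        .setCustomJobHandlers(Collections.singletonList(new TaskCompleteJobHandler()));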
When I fire off the code for 10 requests, everything seems to be fine and I can see that my tasks get updated properly. When I increase the load to 20 requests, HikariCP starts warning me about connection leaks. This is the actual stack trace HikariCP logs when the leak detection trips:
03/22 11:38:06 [HikariPool-1 housekeeper] WARN Connection leak detection triggered for oracle.jdbc.driver.T4CConnection@325e8845 on thread flowable-async-job-executor-thread-8, stack trace follows
java.lang.Exception: Apparent connection leak detected
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:100) ~[HikariCP-3.4.5.jar:?]
at org.apache.ibatis.transaction.jdbc.JdbcTransaction.openConnection(JdbcTransaction.java:139) ~[mybatis-3.5.6.jar:3.5.6]
at org.apache.ibatis.transaction.jdbc.JdbcTransaction.getConnection(JdbcTransaction.java:61) ~[mybatis-3.5.6.jar:3.5.6]
at org.apache.ibatis.session.defaults.DefaultSqlSession.getConnection(DefaultSqlSession.java:297) ~[mybatis-3.5.6.jar:3.5.6]
at org.flowable.common.engine.impl.db.DbSqlSessionFactory.openSession(DbSqlSessionFactory.java:96) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.interceptor.CommandContext.getSession(CommandContext.java:265) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.cfg.standalone.StandaloneMybatisTransactionContext.<init>(StandaloneMybatisTransactionContext.java:48) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.cfg.standalone.StandaloneMybatisTransactionContextFactory.openTransactionContext(StandaloneMybatisTransactionContextFactory.java:26) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.interceptor.TransactionContextInterceptor.execute(TransactionContextInterceptor.java:47) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.interceptor.CommandContextInterceptor.execute(CommandContextInterceptor.java:83) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.interceptor.LogInterceptor.execute(LogInterceptor.java:30) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:56) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.common.engine.impl.cfg.CommandExecutorImpl.execute(CommandExecutorImpl.java:51) ~[flowable-engine-common-6.6.2.2.jar:6.6.2.2]
at org.flowable.job.service.impl.asyncexecutor.ExecuteAsyncRunnable.executeJob(ExecuteAsyncRunnable.java:127) ~[flowable-job-service-6.6.2.2.jar:6.6.2.2]
at org.flowable.job.service.impl.asyncexecutor.ExecuteAsyncRunnable.run(ExecuteAsyncRunnable.java:115) ~[flowable-job-service-6.6.2.2.jar:6.6.2.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128) ~[?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628) ~[?:?]
at java.lang.Thread.run(Thread.java:834) ~[?:?]
My assumption is not that Flowable itself has a connection leak, but that I'm doing something wrong that's causing this problem to manifest. JDBC and connection pool tuning are not my specialty, though, so I can't figure out what I'm doing wrong. I set up the connection pool with the following code. My driver type is for Oracle 19c, idleTimeout is 200, minPoolSize is 20, minIdleConn is 1, cachePrepStatements is true, prepStmtCacheSize is 250, and prepStmtCacheSqlLimit is 2048.
HikariConfig dsConfig = new HikariConfig();
dsConfig.setDriverClassName(driverType);
dsConfig.setJdbcUrl(jdbcUrl);
dsConfig.setUsername(username);
dsConfig.setPassword(password);
dsConfig.setIdleTimeout(idleTimeout);
dsConfig.setMaximumPoolSize(minPoolsize);
dsConfig.setMinimumIdle(minIdleConn);
dsConfig.addDataSourceProperty("cachePrepStmts", prepareCacheStatement);
dsConfig.addDataSourceProperty("prepStmtCacheSize", cacheSize);
dsConfig.addDataSourceProperty("prepStmtCacheSqlLimit", cacheLimit);
dsConfig.setReadOnly(readOnly);
dsConfig.setLeakDetectionThreshold(29000);
dataSource = new HikariDataSource(dsConfig);
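For what it's worth, I can also watch the pool while the jobs run using Hikari's HikariPoolMXBean. This is just a diagnostic sketch, not production code, in case the live numbers help pinpoint where connections are piling up:

// Diagnostic sketch: dump the pool state while the async jobs are executing
HikariPoolMXBean poolStats = dataSource.getHikariPoolMXBean();
System.out.println("active=" + poolStats.getActiveConnections()
        + " idle=" + poolStats.getIdleConnections()
        + " total=" + poolStats.getTotalConnections()
        + " waiting=" + poolStats.getThreadsAwaitingConnection());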
Then I connect to my database using ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration(). I set the database type to Oracle, the data source to the HikariDataSource built above, and then turn on the async executor. My engine setup code looks like this, all told:
ProcessEngineConfiguration processEngineConfig = ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
processEngineConfig.setDatabaseSchemaUpdate(AbstractEngineConfiguration.DB_SCHEMA_UPDATE_TRUE);
processEngineConfig.setDatabaseType(AbstractEngineConfiguration.DATABASE_TYPE_ORACLE);
processEngineConfig.setDataSource(dataSource);
processEngineConfig.setAsyncExecutorActivate(true);
....
processEngine = processEngineConfig.buildProcessEngine();
I can see side effects of the Flowable tasks being completed, but it's like the workflow engine is just getting stuck on me. Under the assumption I'm missing something, is there a way to configure the async job system to release a connection when it's done? Or is there a HikariCP setting that I'm not paying attention to, but Flowable depends on, that's causing it to run out of connections?