Async Job Executor Connection Leak?

I have a bit of code that, to avoid spamming the Flowable engine with 50 rapid-fire requests, instead schedules the task update actions to run later using the JobService's scheduleAsyncJob(). I'm connected to the Flowable engine through a HikariCP connection pool set up with a pool size of 20 and an idle timeout of 200 milliseconds. Below is the code showing how I'm setting up the job to be run later:

public void scheduleAsyncJob(ProcessEngine engine, String jobHandlerType, String taskId, String action, String completedBy) {
	// First off, we need the management service out of the process engine
	ManagementService managementService = engine.getManagementService();
	// Create an anonymous command to execute
	managementService.executeCommand(new Command<String>() {
		public String execute(CommandContext commandContext) {
			// We need the job service to be able to schedule the job against
			JobService jobService = CommandContextUtil.getJobService(commandContext);
			// We get the task service to pull the task object from Flowable, as we need some of the process data for logging later on
			TaskService taskService = engine.getTaskService();
			List<Task> taskList = taskService.createTaskQuery().includeProcessVariables().includeTaskLocalVariables().taskId(taskId).list();
			Task task = taskList.get(0);
			String processId = task.getProcessInstanceId();
			String taskCategory = task.getCategory();
			Map<String, Object> processVars = task.getProcessVariables();
			String bussObjId = processVars.get("bussObjId").toString();
			JobEntity job = jobService.createJob();
			// The task completion details need to be encoded into a JSON packet, so that we can dynamically expand this later on with fewer headaches
			JsonObject jsonConfig = Json.createObjectBuilder()
					.add("taskId", taskId)
					.add("action", action)
					.add("completedBy", completedBy)
					.build();
			job.setJobHandlerType(jobHandlerType);
			job.setJobHandlerConfiguration(jsonConfig.toString());
			// Once the job gets scheduled, log that the job was scheduled in the system, so we can track when it was scheduled vs when the system thinks it executed
			jobService.createAsyncJob(job, false);
			jobService.scheduleAsyncJob(job);
			return "scheduled";
		}
	});
}

When the job executes, it should be invoking TaskService's complete() method to finish my user tasks. Here's the code I wrote to handle the job execution:

public void execute(JobEntity job, String configuration, VariableScope variableScope, CommandContext commandContext) {
	// Start out by creating a JSON parser, because our data in the configuration object is packaged in a JSON payload
	Reader jsonReader = new StringReader(configuration);
	JsonReader jsonConfig = Json.createReader(jsonReader);
	JsonObject jsonObj = jsonConfig.readObject();
	jsonConfig.close();
	// We need the taskId, action taken, user who took that action, and the token.  The token may or may not be provided, but the other values
	//		all should be here.
	String taskId = jsonObj.getString("taskId");
	String action = jsonObj.getString("action");
	String completedBy = jsonObj.getString("completedBy");
	// Get the task service and update the task variables as well as relevant process-level variables
	TaskService taskService = CommandContextUtil.getProcessEngineConfiguration(commandContext).getTaskService();
	if (action != null && action.length() > 0) {
		taskService.setVariableLocal(taskId, "action", action);
	}
	taskService.setVariableLocal(taskId, "completedBy", completedBy);
	// Complete the task, and log that our execution took place
	// But remember that it's possible for our command to actually fail (sometimes the task just doesn't work)
	taskService.complete(taskId);
}
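For reference, that execute() method lives in a custom JobHandler implementation, since its signature matches Flowable's JobHandler interface. The handler also needs a getType() returning the same string I pass in as jobHandlerType, and it has to be registered on the engine configuration before the engine is built. The class and type names below are illustrative, not my exact production code:

```java
// Sketch of the custom job handler wiring; names are illustrative.
public class TaskActionJobHandler implements JobHandler {

	public static final String TYPE = "taskActionJobHandler";

	@Override
	public String getType() {
		// Must match the jobHandlerType string set on the job entity at scheduling time
		return TYPE;
	}

	@Override
	public void execute(JobEntity job, String configuration, VariableScope variableScope, CommandContext commandContext) {
		// ... the execute() body shown above ...
	}
}
```

It gets registered before buildProcessEngine(); setCustomJobHandlers lives on ProcessEngineConfigurationImpl, hence the cast:

```java
((ProcessEngineConfigurationImpl) processEngineConfig)
		.setCustomJobHandlers(Collections.singletonList(new TaskActionJobHandler()));
```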

When I fire off 10 requests, everything seems fine and I can see that my tasks get updated properly. When I increase to 20 requests, HikariCP starts warning me about resource leaks. This is the actual stack trace it logs when leak detection trips:

03/22 11:38:06 [HikariPool-1 housekeeper] WARN Connection leak detection triggered for oracle.jdbc.driver.T4CConnection@325e8845 on thread flowable-async-job-executor-thread-8, stack trace follows
java.lang.Exception: Apparent connection leak detected
	at com.zaxxer.hikari.HikariDataSource.getConnection() ~[HikariCP-3.4.5.jar:?]
	at org.apache.ibatis.transaction.jdbc.JdbcTransaction.openConnection() ~[mybatis-3.5.6.jar:3.5.6]
	at org.apache.ibatis.transaction.jdbc.JdbcTransaction.getConnection() ~[mybatis-3.5.6.jar:3.5.6]
	at org.apache.ibatis.session.defaults.DefaultSqlSession.getConnection() ~[mybatis-3.5.6.jar:3.5.6]
	at org.flowable.common.engine.impl.db.DbSqlSessionFactory.openSession() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.interceptor.CommandContext.getSession() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.cfg.standalone.StandaloneMybatisTransactionContext.<init>() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.cfg.standalone.StandaloneMybatisTransactionContextFactory.openTransactionContext() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.interceptor.TransactionContextInterceptor.execute() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.interceptor.CommandContextInterceptor.execute() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.interceptor.LogInterceptor.execute() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.cfg.CommandExecutorImpl.execute() ~[flowable-engine-common-]
	at org.flowable.common.engine.impl.cfg.CommandExecutorImpl.execute() ~[flowable-engine-common-]
	at org.flowable.job.service.impl.asyncexecutor.ExecuteAsyncRunnable.executeJob() ~[flowable-job-service-]
	at org.flowable.job.service.impl.asyncexecutor.ExecuteAsyncRunnable.run() ~[flowable-job-service-]
	at java.util.concurrent.ThreadPoolExecutor.runWorker() ~[?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run() ~[?:?]
	at java.lang.Thread.run() ~[?:?]

My assumption is not that Flowable has a connection leak in it, but that I'm doing something wrong that's causing this problem to manifest. JDBC and connection pool tuning are not my specialty, though, so I can't figure out what my mistake is. I set up the connection pool with the following code. My driver is the Oracle 19c JDBC driver, idleTimeout is 200, the pool size is 20, minimum idle connections is 1, cachePrepStmts is true, prepStmtCacheSize is 250, and prepStmtCacheSqlLimit is 2048:

HikariConfig dsConfig = new HikariConfig();
// JDBC URL and Oracle driver settings omitted here
dsConfig.setIdleTimeout(200);
dsConfig.setMaximumPoolSize(20);
dsConfig.setMinimumIdle(1);
dsConfig.addDataSourceProperty("cachePrepStmts", prepareCacheStatement);
dsConfig.addDataSourceProperty("prepStmtCacheSize", cacheSize);
dsConfig.addDataSourceProperty("prepStmtCacheSqlLimit", cacheLimit);
dataSource = new HikariDataSource(dsConfig);

Then I connect to my database using ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration(). I set the database type to Oracle, the datasource to the above datasource object, and then I also turn on the async executor. My engine setup code looks like this, all told:

ProcessEngineConfiguration processEngineConfig = ProcessEngineConfiguration.createStandaloneProcessEngineConfiguration();
processEngineConfig.setDatabaseSchemaUpdate(AbstractEngineConfiguration.DB_SCHEMA_UPDATE_TRUE);
processEngineConfig.setDatabaseType(AbstractEngineConfiguration.DATABASE_TYPE_ORACLE);
processEngineConfig.setDataSource(dataSource);
processEngineConfig.setAsyncExecutorActivate(true);

processEngine = processEngineConfig.buildProcessEngine();
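One thing I've been experimenting with alongside this: since each async executor thread can hold a database connection while it runs a job, the executor's thread pool probably shouldn't exceed the connection pool. This is a hedged sketch, not verified tuning advice; the setter names are the asyncExecutor sizing options on Flowable 6's ProcessEngineConfigurationImpl, and the numbers are just what I'd try first:

```java
ProcessEngineConfigurationImpl configImpl = (ProcessEngineConfigurationImpl) processEngineConfig;
// Keep the async executor's worker threads below the Hikari pool size (20),
// so concurrent jobs can't exhaust the pool while all waiting on each other.
configImpl.setAsyncExecutorCorePoolSize(8);
configImpl.setAsyncExecutorMaxPoolSize(16);
```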

I can see side effects of the Flowable tasks being completed, but it’s like the workflow engine is just getting stuck on me. Under the assumption I’m missing something, is there a way to configure the async job system to release a connection when it’s done? Or is it something in the HikariCP settings that I’m not paying attention to but Flowable is that’s causing it to run out of connections?

I'm still curious whether the connection pool settings can be fine-tuned so that, if there's a problem, Flowable can be instructed to release stuck connections back to the pool instead of causing a total logjam.
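In case it helps anyone poking at the same question, the pool-side knobs I've found for this live on HikariConfig itself. A sketch with values I'd experiment with rather than recommend; note that HikariCP documents a 10-second minimum for idleTimeout, so my original 200 ms setting was almost certainly being ignored:

```java
HikariConfig dsConfig = new HikariConfig();
// How long a connection can be out of the pool before Hikari logs the
// "Apparent connection leak detected" warning seen above (milliseconds).
dsConfig.setLeakDetectionThreshold(30_000);
// Fail fast instead of blocking indefinitely when the pool is exhausted.
dsConfig.setConnectionTimeout(10_000);
// Retire connections periodically so a stuck one can't live forever.
dsConfig.setMaxLifetime(600_000);
// Hikari's documented minimum for idleTimeout is 10 seconds; anything lower
// (like my 200 ms) gets ignored.
dsConfig.setIdleTimeout(10_000);
```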

That said, the root of my issue seems to have been a deadlock on the connection pool, caused by performing the same operation both before the async task was scheduled and during its execution. My BPMN diagram had a sequence flow leading to an HTTP task that updated the same value the server code updated just prior to scheduling the async job. I rerouted the flow to skip the HTTP task and everything behaved properly. Curious behavior, but clearly something to do with when and how the connections were being acquired and released from the pool.