Hanging async executor threads in JavaDelegate classes

I see many threads stuck at

"flowable-task-Executor-8757" #113541 prio=5 os_prio=0 tid=0x000000002e03e800 nid=0x3330 runnable [0x0000000068e3e000]
java.lang.Thread.State: RUNNABLE
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)

I am making REST calls in my custom service tasks. Although I set socketTimeout, in some cases that I cannot reproduce the timeout does not fire and the thread stays stuck there. There seems to be a JDK bug related to socketRead on Windows environments. That part is an out-of-Flowable-context problem.
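For reference, this is roughly how I set the timeouts; a minimal sketch (the helper and its names are illustrative, not my actual delegate code):

```java
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;
import java.nio.charset.StandardCharsets;

public class RestCallWithTimeouts {

    // Illustrative helper: both timeouts need to be set, otherwise a
    // blocking read can hang in socketRead0 indefinitely.
    public static String fetch(String url, int timeoutMs) throws Exception {
        URLConnection conn = new URL(url).openConnection();
        conn.setConnectTimeout(timeoutMs); // bounds establishing the connection
        conn.setReadTimeout(timeoutMs);    // SO_TIMEOUT: bounds each blocking read
        try (InputStream in = conn.getInputStream()) {
            return new String(in.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}
```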

What really is a Flowable-context problem is this: I expect these hanging threads to be killed after a certain period of time, because as the count of stuck threads increases, at some point there are no available async threads left to execute my upcoming tasks.

Is there a property to set a timeout for these situations? If not, how can I implement my own solution for this kind of problem?
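One workaround I can think of (a sketch under my own assumptions, not a Flowable API): run the blocking call on the async executor thread as usual, but arm a watchdog that forcibly closes the underlying connection when the deadline passes. Closing the socket makes a thread stuck in socketRead0 fail with an IOException, which a plain thread interrupt cannot achieve:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class HardTimeout {

    private static final ScheduledExecutorService WATCHDOG =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r, "rest-call-watchdog");
                t.setDaemon(true); // do not keep the JVM alive
                return t;
            });

    // Sketch: run the blocking call on the current thread, but schedule a
    // watchdog that forcibly closes the connection if the call overruns.
    public static <T> T callWithHardTimeout(Callable<T> blockingCall,
                                            Runnable forceClose,
                                            long timeoutMs) throws Exception {
        ScheduledFuture<?> killer =
                WATCHDOG.schedule(forceClose, timeoutMs, TimeUnit.MILLISECONDS);
        try {
            return blockingCall.call();
        } finally {
            killer.cancel(false); // finished in time: disarm the watchdog
        }
    }
}
```

Here forceClose would typically call disconnect() on the HttpURLConnection (or close() on the Socket) that the blocking call is using.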

Is this property something related to my problem?

/**
 * The time (in milliseconds) a thread used for job execution must be kept
 * alive before it is destroyed. Default setting is 0. Having a non-default
 * setting of 0 takes resources, but in the case of many job executions it
 * avoids creating new threads all the time.
 */
protected long keepAliveTime = 5000L;

I went deep into the source code and saw that AsyncJobExecutor is actually a wrapper around a plain ThreadPoolExecutor, and that keep-alive relates to threads waiting in IDLE state, while my problem is with threads in RUNNING state. There really should be a way to pass such a timeout parameter through to the executorService. Using executorService.invokeAll instead of executorService.execute could be a better choice, though it has side effects of its own. Just giving some feedback…
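To illustrate the invokeAll idea (a sketch, not Flowable code; the caveat is that cancellation works via interruption, which a thread parked in socketRead0 ignores, so the worker can still stay pinned):

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

public class BoundedInvokeAll {

    // Sketch: tasks still running when the timeout expires are cancelled
    // (interrupted), so a well-behaved slow task cannot hold its worker
    // thread past the deadline.
    public static List<Future<String>> runBounded(ExecutorService pool,
                                                  List<Callable<String>> tasks,
                                                  long timeoutMs)
            throws InterruptedException {
        return pool.invokeAll(tasks, timeoutMs, TimeUnit.MILLISECONDS);
    }
}
```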

I ended up shutting down and restarting this executor service twice a day, which is really a dirty solution…

That is correct. This uses standard JDK thread pooling, which should catch starving threads and clean them up. But if your threads are in the running state, the JDK doesn't know there is a problem (the thread could very well be executing properly).

Normally, the database transaction will time out and this will also get the thread back. Have you tried configuring the transaction timeout on your database?
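For example, with a Spring-managed transaction manager it would look something like this (a sketch; it assumes a Spring setup with a DataSource bean, and the method name is illustrative):

```java
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DataSourceTransactionManager;

// Sketch, assuming Spring: give every transaction a default timeout so a
// job whose transaction hangs is eventually rolled back.
public DataSourceTransactionManager transactionManager(DataSource dataSource) {
    DataSourceTransactionManager txManager = new DataSourceTransactionManager(dataSource);
    txManager.setDefaultTimeout(60); // seconds; size to your longest legitimate job
    return txManager;
}
```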