java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "flowable-cmmn-acquire-timer-jobs"

Hi Team

We are getting an OutOfMemoryError after the application has been running for 20 to 30 minutes. The application starts up properly, but after some time we get the issues below. We are unable to find out why the service is being stopped by this error. Kindly help.

Exception in thread "flowable-cmmn-acquire-timer-jobs"
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "flowable-cmmn-acquire-timer-jobs"
Exception in thread "flowable-cmmn-reset-expired-jobs" Exception in thread "http-nio-9031-Poller" java.lang.OutOfMemoryError: Java heap space
java.lang.OutOfMemoryError: Java heap space
{"@timestamp":"2022-09-15T09:26:16.807+00:00","message":"Opened connection [connectionId{localValue:5, serverValue:1392447}] to 18.192.63.163:27017","logger_name":"org.mongodb.driver.connection","level":"INFO","applicationName":"Workflow View Task"}
{"@timestamp":"2022-09-15T09:26:16.808+00:00","message":"Exception while executing runnable io.grpc.internal.KeepAliveManager$2@1e130798","logger_name":"i.g.i.LogExceptionRunnable","level":"ERROR","stack_trace":"java.lang.OutOfMemoryError: Java heap space\n","applicationName":"Workflow View Task"}
{"@timestamp":"2022-09-15T09:26:16.852+00:00","message":"Exception processing background thread","logger_name":"o.a.c.core.ContainerBase","level":"ERROR","stack_trace":"java.lang.OutOfMemoryError: Java heap space\nWrapped by: j.u.c.ExecutionException: java.lang.OutOfMemoryError: Java heap space\n","applicationName":"Workflow View Task"}
{"@timestamp":"2022-09-15T09:26:16.858+00:00","message":"exception for engine cmmn during async job acquisition: Java heap space","logger_name":"o.f.j.s.i.a.AcquireAsyncJobsDueRunnable","level":"WARN","stack_trace":"java.lang.OutOfMemoryError: Java heap space\n","applicationName":"Workflow View Task"}
{"@timestamp":"2022-09-15T09:26:16.859+00:00","message":"exception during resetting expired jobs: Could not open JPA EntityManager for transaction; nested exception is java.lang.OutOfMemoryError: Java heap space for engine bpmn","logger_name":"o.f.j.s.i.a.ResetExpiredJobsRunnable","level":"WARN","stack_trace":"java.lang.OutOfMemoryError: Java heap space\n\tat o.h.e.j.i.JdbcCoordinatorImpl.<init>(JdbcCoordinatorImpl.java:96)\n\tat o.h.i.AbstractSharedSessionContract.<init>(AbstractSharedSessionContract.java:231)\n\tat o.h.i.AbstractSessionImpl.<init>(AbstractSessionImpl.java:29)\n\tat o.h.internal.SessionImpl.<init>(SessionImpl.java:230)\n\tat o.h.i.SessionFactoryImpl$SessionBuilderImpl.openSession(SessionFactoryImpl.java:1334)\n\tat o.h.i.SessionFactoryImpl.buildEntityManager(SessionFactoryImpl.java:651)\n\tat o.h.i.SessionFactoryImpl.createEntityManager(SessionFactoryImpl.java:637)\n\tat o.h.i.SessionFactoryImpl.createEntityManager(SessionFactoryImpl.java:158)\n\tat o.s.o.j.AbstractEntityManagerFactoryBean.createNativeEntityManager(AbstractEntityManagerFactoryBean.java:585)\n\t… 3 frames excluded\n\tat o.s.o.j.AbstractEntityManagerFactoryBean.invokeProxyMethod(AbstractEntityManagerFactoryBean.java:487)\n\tat o.s.o.j.AbstractEntityManagerFactoryBean$ManagedEntityManagerFactoryInvocationHandler.invoke(AbstractEntityManagerFactoryBean.java:734)\n\tat com.sun.proxy.$Proxy182.createNativeEntityManager(Unknown Source)\n\tat o.s.o.j.JpaTransactionManager.createEntityManagerForTransaction(JpaTransactionManager.java:485)\n\tat o.s.o.j.JpaTransactionManager.doBegin(JpaTransactionManager.java:410)\n\t… 11 common frames omitted\nWrapped by: o.s.t.CannotCreateTransactionException: Could not open JPA EntityManager for transaction; nested exception is java.lang.OutOfMemoryError: Java heap space\n\tat o.s.o.j.JpaTransactionManager.doBegin(JpaTransactionManager.java:467)\n\tat o.s.t.s.AbstractPlatformTransactionManager.startTransaction(AbstractPlatformTransactionManager.java:400)\n\t… 2 frames excluded\n\tat o.f.c.s.SpringTransactionInterceptor.execute(SpringTransactionInterceptor.java:57)\n\tat o.f.c.e.i.i.LogInterceptor.execute(LogInterceptor.java:30)\n\tat o.f.c.e.i.c.CommandExecutorImpl.execute(CommandExecutorImpl.java:56)\n\tat o.f.c.e.i.c.CommandExecutorImpl.execute(CommandExecutorImpl.java:51)\n\tat o.f.j.s.i.a.ResetExpiredJobsRunnable.resetJobs(ResetExpiredJobsRunna…","applicationName":"Workflow View Task"}
{"@timestamp":"2022-09-15T09:26:19.881+00:00","message":"HikariPool-1 - Thread starvation or clock leap detected (housekeeper delta=1m56s939ms719µs954ns).","logger_name":"c.z.hikari.pool.HikariPool","level":"WARN","applicationName":"Workflow View Task"}

And describing the pod (kubectl describe pod) shows the values below:
PS D:\kubectl> .\kubectl.exe --kubeconfig bm-dev.yml describe pod om-workflow-viewtask-f68d888d4-gw4k2 -n dev-omnius-vnext
Name: om-workflow-viewtask-f68d888d4-gw4k2
Namespace: dev-omnius-vnext
Priority: 0
PriorityClassName:
Node: ip-10-0-1-122.eu-central-1.compute.internal/10.0.1.122
Start Time: Thu, 15 Sep 2022 15:24:57 +0530
Labels: app=om-workflow-viewtask
app.kubernetes.io/instance=om-workflow-viewtask
app.kubernetes.io/name=om-workflow-viewtask
pod-template-hash=f68d888d4
version=v1
Annotations: cni.projectcalico.org/podIP: 10.42.49.94/32
prometheus.io/path: /api/viewtask/actuator/prometheus
prometheus.io/port: 9031
prometheus.io/scrape: true
rollme: DZSvO
Status: Running
IP: 10.42.49.94
Controlled By: ReplicaSet/om-workflow-viewtask-f68d888d4
Containers:
om-workflow-viewtask:
Container ID: docker://e112f76a84930778e5580f178a9ed9e5548e8ace177ba7b44ab6c6062c43d3a3
Image: dockerhub.tmbmarble.com/omnius-vnext/om-workflow-viewtask:latest
Image ID: docker-pullable://dockerhub.tmbmarble.com/omnius-vnext/om-workflow-viewtask@sha256:aa5ae4ca7aeb05bd92e8d714633325aca499cd28a9100f9c04a7cc64bc6bbbe9
Port: 9031/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 15 Sep 2022 15:24:59 +0530
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 1Gi
Requests:
cpu: 1
memory: 1Gi
Environment:
JWT_ROLE_LOCATION: <set to the key 'JWT_ROLE_LOCATION' of config map 'ordermgmt-config'> Optional: false
WHITELIST_URLS: <set to the key 'WHITELIST_URLS' of config map 'ordermgmt-config'> Optional: false
AXON_AXONSERVER_SERVERS: <set to the key 'host' of config map 'axon-host-config'> Optional: false
SPRING_DATA_MONGODB_AUTHENTICATION_DATABASE: <set to the key 'authdb' of config map 'mongodb-host-config'> Optional: false
SPRING_SECURITY_OAUTH2_RESOURCESERVER_JWT_JWK_SET_URI: <set to the key 'idp_url' of config map 'idp-config'> Optional: false
SPRING_DATA_USERNAME: <set to the key 'username' in secret 'mongodb-basic-auth'> Optional: false
SPRING_DATA_URI_CONFIG: <set to the key 'mongouriconfig' of config map 'mongodb-uri-config'> Optional: false
SPRING_DATA_MONGODB_HOST_ADDRESS: <set to the key 'host' of config map 'mongodb-host-config'> Optional: false
SPRING_DATA_PASSWORD: <set to the key 'password' in secret 'mongodb-basic-auth'> Optional: false
SPRING_DATASOURCE_USERNAME: <set to the key 'username' in secret 'mariadb-basic-auth'> Optional: false
SPRING_DATASOURCE_PASSWORD: <set to the key 'password' in secret 'mariadb-basic-auth'> Optional: false
SPRING_DATASOURCE_URL: <set to the key 'omurl' of config map 'mariadb-host-config'> Optional: false
SPRINGFOX_DOCUMENTATION_ENABLED: <set to the key 'enableSwaggerDoc' of config map 'swagger-config'> Optional: false
LOGGING_TRANSACTIONLOGGER: <set to the key 'LOGGING_TRANSACTIONLOGGER' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_MDC_LOGUSERID: <set to the key 'BM_LOGGING_MDC_LOGUSERID' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_MDC_DEFAULT_CHANNEL: <set to the key 'BM_LOGGING_MDC_DEFAULT_CHANNEL' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_MDC_DEFAULT_CORRELATIONID: <set to the key 'BM_LOGGING_MDC_DEFAULT_CORRELATIONID' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_MDC_DEFAULT_TRANSCATIONID: <set to the key 'BM_LOGGING_MDC_DEFAULT_TRANSCATIONID' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_MDC_DEFAULT_USER: <set to the key 'BM_LOGGING_MDC_DEFAULT_USER' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_TRANSACTIONLOGGER: <set to the key 'BM_LOGGING_TRANSACTIONLOGGER' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_API_FILTER_LOGREQUESTBODYONEXCEPTION: <set to the key 'BM_LOGGING_API_FILTER_LOGREQUESTBODYONEXCEPTION' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_API_FILTER_LOGREQUESTRESPONSEBODY: <set to the key 'BM_LOGGING_API_FILTER_LOGREQUESTRESPONSEBODY' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_API_FILTER_SKIPLOGS: <set to the key 'BM_LOGGING_API_FILTER_SKIPLOGS' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_CQRS_INTECEPTOR_SKIPLOGS: <set to the key 'BM_LOGGING_CQRS_INTECEPTOR_SKIPLOGS' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_CQRS_INTECEPTOR_LOGEVENTPAYLOAD: <set to the key 'BM_LOGGING_CQRS_INTECEPTOR_LOGEVENTPAYLOAD' of config map 'ordermgmt-config'> Optional: false
BM_LOGGING_CQRS_INTECEPTOR_LOGCOMMANDPAYLOAD: <set to the key 'BM_LOGGING_CQRS_INTECEPTOR_LOGCOMMANDPAYLOAD' of config map 'ordermgmt-config'> Optional: false
API_VERSION: latest
DELAYTIMER: 1000
DELAYTIMERMULTIPLIER: 2
FLOWABLE_DATABASESCHEMAUPDATE: none
JAVA_OPTS: -XX:+UnlockExperimentalVMOptions -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=50.0
MAXRETRY: 2.147483647e+09
SERVER_SERVLET_CONTEXT_PATH: /api/viewtask
SERVICE_CONTEXTPATH: /om-workflow-viewtask
SPRING_DATASOURCE_HIKARI_MAX_LIFETIME: 540000
SPRING_DATA_MONGODB_DATABASE: workflows
URL_CHECKLIST_PATCH: http://om-task-checklist-command/api/taskCheckList/checklist
URL_CHECKLIST_POST: http://om-task-checklist-command/api/taskCheckList/checklist
URL_CHECKLIST_QUERY: http://om-task-checklist-query/api/taskCheckListQuery/checklist
URL_JWT_TOKEN: http://om-jwt-token-scheduler/api/JwtTokenSchedulerService/v1/token
URL_ORDER_COMMAND: http://om-order-command-api/api/productOrderingManagement/command/productOrder
URL_ORDER_QUERY: http://om-order-query-api/api/productOrderingManagement/query/productOrder
URL_SERVICEORDER_COMMAND: http://om-serviceorder-command/api/serviceOrdering/command/serviceOrder
URL_SERVICEORDER_QUERY: http://om-serviceorder-query/api/serviceOrdering/query/serviceOrder
URL_TASK_CHECKLIST_QUERY: http://om-task-checklist-query
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-wggzn (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-wggzn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-wggzn
Optional: false
QoS Class: Guaranteed
Node-Selectors:
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message


Normal Scheduled 115s default-scheduler Successfully assigned dev-omnius-vnext/om-workflow-viewtask-f68d888d4-gw4k2 to ip-10-0-1-122.eu-central-1.compute.internal
Normal Pulling 114s kubelet, ip-10-0-1-122.eu-central-1.compute.internal pulling image "dockerhub.tmbmarble.com/omnius-vnext/om-workflow-viewtask:latest"
Normal Pulled 114s kubelet, ip-10-0-1-122.eu-central-1.compute.internal Successfully pulled image "dockerhub.tmbmarble.com/omnius-vnext/om-workflow-viewtask:latest"
Normal Created 113s kubelet, ip-10-0-1-122.eu-central-1.compute.internal Created container
Normal Started 113s kubelet, ip-10-0-1-122.eu-central-1.compute.internal Started container

@here Hi Team

Please share your input. We are stuck and would appreciate any help.

Thanks & Regards
Sameer

Hi.
As you are limiting your pod(s), I'm not sure -XX:InitialRAMPercentage=50.0 -XX:MaxRAMPercentage=50.0 suffices.
Did you try setting -Xms<val> -Xmx<val> explicitly?
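
For example, a minimal sketch assuming you keep the 1Gi container limit (the sizes are illustrative; leave headroom for metaspace, thread stacks and direct memory):

JAVA_OPTS: -Xms512m -Xmx512m -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp

-XX:+HeapDumpOnOutOfMemoryError makes the JVM write a heap dump at the moment of the error; analysing that dump is usually the quickest way to see what is actually filling the heap.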

Regards,

Yvo

I see in the stack trace that you are using MongoDB? If so, do note that the MongoDB integration is far from finished, and the out of memory could be related to that (as stated on https://github.com/flowable/flowable-mongodb: "It is currently considered in alpha state as not all features have been implemented"). The actual error seems to be happening during the resetting of expired jobs (see the stack trace). How many jobs are there, and how many are expired, i.e. with a due date many hours in the past? A quick way to check is sketched below.
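
If the engine runs on a relational database (the pod description above suggests MariaDB), a rough count can be taken directly from the job tables. This is a sketch assuming the default Flowable 6.x table and column names; adjust if your setup uses a prefix:

SELECT COUNT(*) FROM ACT_RU_TIMER_JOB;                        -- all timer jobs
SELECT COUNT(*) FROM ACT_RU_TIMER_JOB WHERE DUEDATE_ < NOW(); -- timer jobs already due
SELECT COUNT(*) FROM ACT_RU_JOB WHERE LOCK_EXP_TIME_ < NOW(); -- async jobs whose lock has expired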

Also note that this is an open source forum; all help here is done by the community.

Hi @joram - we are not using flowable-mongodb. We are connecting to MongoDB for some other requirements. Our Flowable is running on MariaDB.

Thanks & Regards
Sameer

Hi @yvo - we tried setting the property below, but we are still getting the error:
org.gradle.jvmargs=-Xms2g -Xmx2g -XX:-UseGCOverheadLimit

@here Hi Team

Please share your input. We are stuck here.

Thanks & Regards
Sameer

Hi @sameer.uppal

As @joram stated before:

Also note that this is an open source forum; all help here is done by the community.

Pushing like this is not helpful.

A few remarks:

  • Why are you setting org.gradle.jvmargs? Isn't that only for the Gradle build daemon, not the runtime? (See the sketch after this list.)
  • Why are you setting a limit of 2GB when the pod is limited to 1GB?
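
For reference, a runtime heap setting for this service belongs in the container environment, not in gradle.properties; something like the following (a sketch, with illustrative sizes that fit inside the 1Gi pod limit):

JAVA_OPTS: -Xms512m -Xmx512m

org.gradle.jvmargs only sizes the JVM that Gradle itself runs in during the build, so it has no effect on the deployed application.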

Yvo