I have implemented a SparkDelegate that implements JavaDelegate and TriggerableActivityBehavior. In the delegate's execute method it submits a Spark/Hadoop job to a distributed job scheduler; the scheduler queues the job and immediately returns a job ID. The job can take hours to run, and on completion the scheduler issues a callback, which in turn becomes the trigger for our SparkDelegate.
We have also written listeners that audit variables/process state and record job instrumentation metrics.
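For context, here is a minimal sketch of the delegate described above. `SchedulerClient` and its `submit(...)` method are hypothetical stand-ins for our real job-scheduler client, not Flowable API:

```java
import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.JavaDelegate;
import org.flowable.engine.impl.delegate.TriggerableActivityBehavior;

public class SparkDelegate implements JavaDelegate, TriggerableActivityBehavior {

    // hypothetical client for the distributed job scheduler
    private final SchedulerClient scheduler = new SchedulerClient();

    @Override
    public void execute(DelegateExecution execution) {
        // submit the Spark/Hadoop job; the scheduler returns a job ID immediately
        String jobId = scheduler.submit(execution.getProcessInstanceId());
        execution.setVariable("jobId", jobId);
        // with the service task marked triggerable, the execution now waits
        // until the scheduler's callback causes trigger(...) to be invoked
    }

    @Override
    public void trigger(DelegateExecution execution, String signalName, Object signalData) {
        // invoked when the job-completion callback arrives; inspect the
        // callback outcome here and decide how to continue or fail
    }
}
```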
The sequence of function calls can be visualised as below.
What is the right way to handle errors in Flowable?
- The execute method (2), which calls the job scheduler, can fail due to timeouts, capacity overload, etc.
- How do we tell Flowable to mark the task as failed?
- Should we raise an exception?
- Is there any option other than throwing a BpmnError and catching it with a boundary event?
- Throwing a BpmnError and catching it with a boundary event is hard, because it requires every BPMN author to add a boundary event to every BPMN definition. Is there an easier way to do this?
- Event listener failures [3, 6]: what happens when something fails inside an event listener?
- I have set isFailOnException to true in our FlowableEventListener, but does this fail the task or the entire process?
- What if I want to fail the job because an event listener has failed?
- Is there any function in RuntimeService to mark an activity as failed, given an activityId and processInstanceId?
- Is the BpmnError-plus-boundary-event approach the way to handle errors here? Reference test case. This will lead to too many boundary events in the Flowable BPMN definitions.
- In the trigger function we receive the callback from the job scheduler; at this point the job has completed with either success or failure. If the job succeeded, I can call triggerAsync on RuntimeService to mark completion.
- Which function should we use when the job has failed and the callback reports a failure?
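To make the listener questions concrete, here is a minimal sketch of the listener setup I mean. In my understanding an exception thrown from onEvent is only surfaced to the engine when isFailOnException() returns true, in which case it propagates into the API call or async job that fired the event:

```java
import org.flowable.common.engine.api.delegate.event.FlowableEvent;
import org.flowable.common.engine.api.delegate.event.FlowableEventListener;

public class AuditEventListener implements FlowableEventListener {

    @Override
    public void onEvent(FlowableEvent event) {
        // auditing / instrumentation; an exception thrown here is only
        // visible to the engine if isFailOnException() returns true
    }

    @Override
    public boolean isFailOnException() {
        // true: the exception propagates out of the dispatch and into the
        // command (API call or async job) that fired the event
        return true;
    }

    @Override
    public boolean isFireOnTransactionLifecycleEvent() {
        return false;
    }

    @Override
    public String getOnTransaction() {
        return null;
    }
}
```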
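For the callback side, this is the shape of what we do today. `SparkCallbackHandler`, `onSchedulerCallback`, and the `jobSucceeded` variable are our own hypothetical names, not Flowable API; the only engine calls are triggerAsync and the BpmnError throw:

```java
import java.util.Map;
import org.flowable.engine.RuntimeService;
import org.flowable.engine.delegate.BpmnError;
import org.flowable.engine.delegate.DelegateExecution;

// Hypothetical callback endpoint: resumes the waiting execution,
// passing the job outcome along as a process variable.
class SparkCallbackHandler {

    private final RuntimeService runtimeService;

    SparkCallbackHandler(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    void onSchedulerCallback(String executionId, boolean jobSucceeded) {
        // schedules an async job that invokes the delegate's trigger(...)
        runtimeService.triggerAsync(executionId, Map.of("jobSucceeded", jobSucceeded));
    }
}

// Failure branch inside the delegate's trigger(...) method:
class SparkTriggerSketch {
    void trigger(DelegateExecution execution, String signalName, Object signalData) {
        if (Boolean.TRUE.equals(execution.getVariable("jobSucceeded"))) {
            return; // the engine continues the process past the service task
        }
        // Option A: route to an error boundary event, if one is modelled:
        throw new BpmnError("SPARK_JOB_FAILED");
        // Option B (instead of A): throw a plain RuntimeException so the
        // async trigger job fails, is retried, and eventually lands in the
        // dead-letter job table, where it can be inspected and requeued.
    }
}
```

Is Option A or Option B the intended pattern here, or is there a third mechanism I am missing?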