Best approach for waiting on a response from an external system

Hi Guys,
This is a bit of a generic question. Our workflows will integrate with an external system (potentially using a Camel task, REST task, or service task). I am trying to understand how best to model the workflow to wait for the response.

My first approach was to mock up a Receive Task with a message boundary event. I used the Execution REST API to then send a message to the execution so it would move to the next step. I am not sure if this is a valid approach, as in testing I often got a REST response saying it could not find any executions subscribed to my message. I assume I can also use an intermediate message catch event instead of a Receive Task. Is this better?
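
For reference, what I was attempting is roughly the Java API equivalent of the REST call I was making (the message name here is just a placeholder from my model):

    // Java equivalent of the REST execution action I was sending; the
    // execution must already be waiting on a subscription for this message
    // name, otherwise the engine reports that nothing is subscribed to it.
    runtimeService.messageEventReceived("externalResponseMessage", executionId);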

Another documented approach is to use a triggerable service task: you set the triggerable attribute on the task, and it can then be triggered via the runtime service. It is unclear how the REST interface supports sending a trigger. Also, do you name a trigger for a process definition and then send the named trigger, similar to a message or signal? With this approach, would you call your external system with the service task first and then wait for the trigger? How does this work if I use a Camel task to send the details to the external system?

Apologies if this is a little broad. I am more confused about the correct approach now than when I started, so I am hoping for some help to keep me moving along.

Regards

Brian

Hi @boneill

IMO, you should use a receive task only if your business activities are asynchronous by definition. If you have a long-running task (a service task or derivative), you should use the 'triggerable' flag. If tomorrow your long-running task becomes sufficiently fast, you just disable the 'triggerable' flag in the implementation of your service task, and your process definition does not change from a business point of view.

The behavior of the 'triggerable' flag is similar to asynchronous processing with a message broker or an async HTTP client. For example, using HTTP (see the sketch after this list):

  • the logic of your task invokes your remote processing by sending an HTTP request asynchronously; the HTTP response will be processed by an internal thread of the HTTP client,
  • on HTTP response, the async HTTP client invokes the right API of the runtime service to trigger your service task,
  • on receipt of the trigger, your service task ends and your process instance execution continues.
    Note: the response of your remote service must contain enough information to retrieve the service task to trigger.
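
To make that concrete, here is a minimal sketch assuming Flowable 6.x, a delegate registered as a Spring bean (so RuntimeService can be injected), and the JDK 11 HttpClient; the URL, payload, and variable names are placeholders, and it assumes the HTTP response arrives after the engine has committed the wait state:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Collections;

    import org.flowable.engine.RuntimeService;
    import org.flowable.engine.delegate.DelegateExecution;
    import org.flowable.engine.delegate.JavaDelegate;

    // BPMN: <serviceTask id="callExternal" flowable:triggerable="true"
    //                    flowable:delegateExpression="${callExternalSystemDelegate}"/>
    public class CallExternalSystemDelegate implements JavaDelegate {

        private final HttpClient httpClient = HttpClient.newHttpClient();
        private RuntimeService runtimeService; // injected, e.g. by Spring

        @Override
        public void execute(DelegateExecution execution) {
            // Capture the id now; the DelegateExecution itself must not be
            // touched from the HTTP client's callback thread.
            String executionId = execution.getId();

            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://external-system/process")) // placeholder
                    .POST(HttpRequest.BodyPublishers.ofString("payload"))
                    .build();

            // Fire the request asynchronously and return immediately; the
            // process instance waits at this task until trigger() is called.
            httpClient.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                    .thenAccept(response ->
                            // Runs on the HTTP client's internal thread.
                            runtimeService.trigger(executionId,
                                    Collections.singletonMap("externalResult", response.body())));
        }
    }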

Regards,
Christophe

Hi Christophe,

Thanks for the answer. It put things in context, and now I understand the best approach. I also read the blog on async activities, which was great for understanding the options when waiting on external systems. It looks like the best approach is to use a service task configured as triggerable and let the external app call the REST execution API to trigger it when it has finished its processing.
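
If I have read the REST docs correctly, the external app would do this through the execution action endpoint; the Java equivalent is a single call (this is my reading, not tested yet):

    // Java equivalent of what the external app sends over REST, which I
    // believe is PUT runtime/executions/{executionId} with {"action": "trigger"}:
    runtimeService.trigger(executionId);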

One thing that has caught me out, though, is that I have a parallel gateway with multiple async service tasks. In this case each service task has the same execution ID, so if I trigger using runtimeService.trigger(executionId) I get an OptimisticLockingException. This post explains why: https://stackoverflow.com/questions/47301783/activiti-parallel-service-tasks but not how to avoid it.
I am using async for the service tasks because I want them to be automatically retried if the call to the external system fails. I am using triggers to wait for the short time that each external system takes to do its processing.

Is there an API I can use to trigger the different service tasks (potentially based on activityId) that share the same execution ID? If not, what is the correct approach for dealing with parallel service tasks with triggers and async processing?

Many thanks

Brian

Hi Brian,

Your link points to Activiti. The Flowable 6 engine was rewritten; each parallel execution has its own unique ID, so there is no problem addressing it.
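
So in Flowable 6 you can find and complete each branch separately, roughly like this (a sketch; the activity id is a placeholder):

    // Each parallel branch has its own execution, so query with list()
    // rather than singleResult() and trigger each waiting execution.
    List<Execution> waiting = runtimeService.createExecutionQuery()
            .processInstanceId(processInstanceId)
            .activityId("myTriggerableTask") // placeholder
            .list();
    for (Execution e : waiting) {
        runtimeService.trigger(e.getId());
    }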

Regards
Martin

Hi Martin,

Okay, that's good to hear. I observed this behaviour as part of my unit tests. I use an execution query to get the execution ID based on the activity ID and process instance ID, and it was logging out the same execution ID for each service task in the parallel flow, i.e.

    ex = runtimeService.createExecutionQuery()
            .processInstanceId(processInstance.getId())
            .activityId("generateIDTask")
            .singleResult();
    assertNotNull(ex);
    assertEquals("generateIDTask", ex.getActivityId());
    System.out.println("Execution ID: " + ex.getId() + " activityID: " + ex.getActivityId());
    runtimeService.trigger(ex.getId(), this.variables);

I noted that if I set the engine configuration to .setAsyncExecutorActivate(false) then my tests work fine and I get a new execution per task. However, if I set it to true then I get the same ID. I have made the assumption that setting asyncExecutorActivate = true would emulate how it would work when using the Task App and Camel integration. Any pointers on where I am going wrong with this assumption?
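
For reference, this is the flag I mean, in the standalone in-memory configuration my test builds (a sketch of my test bootstrap, not the full setup):

    // Flipping this between false and true is what changed the behaviour
    // I observed in the execution query above.
    ProcessEngine processEngine = ProcessEngineConfiguration
            .createStandaloneInMemProcessEngineConfiguration()
            .setAsyncExecutorActivate(true)
            .buildProcessEngine();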

Regards

Brian

Hi Brian

Could you reproduce the behaviour in a JUnit test?
The process execution in a Flowable JUnit test shows two different execution paths.

Regards
Martin

Just to make sure that we are on the same page: @boneill, you are using Flowable 6.x and not 5.x, right?

Cheers,
Filip

Hi Guys,

Thanks for taking a look at this. My unit test was not correct. It was my first unit test for Flowable, and it was not using FlowableRule to get the services or create the process. I refactored it to use the correct JUnit 4 approach as documented by Flowable, and it now works, providing unique executions within the parallel gateway. Once again, thanks for helping with this, as it was a major concern.
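
For anyone who lands here later, the refactored test is shaped roughly like this (a sketch; the process key, resource name, and activity id are placeholders for my model):

    public class ParallelTriggerTest {

        @Rule
        public FlowableRule flowableRule = new FlowableRule();

        @Test
        @Deployment(resources = "parallel-process.bpmn20.xml") // placeholder
        public void parallelBranchesGetDistinctExecutions() {
            RuntimeService runtimeService = flowableRule.getRuntimeService();
            ProcessInstance pi = runtimeService
                    .startProcessInstanceByKey("parallelProcess"); // placeholder
            List<Execution> executions = runtimeService.createExecutionQuery()
                    .processInstanceId(pi.getId())
                    .activityId("generateIDTask")
                    .list();
            // With the engine managed by FlowableRule, each parallel branch
            // now shows up as its own execution.
        }
    }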

Brian