Hi, not urgent at all, but I have asked myself several times the following question.
Would there be fundamental problems with supporting the BPMN Message End Event or Message Throwing Event?
The modeler and engine allow the use of an Intermediate Signal Throwing Event. Any reason why an Intermediate Message Throwing Event is not supported?
I was thinking it would make some larger BPMN diagrams simpler and more concise (because you could use fewer flow connectors, and so fewer crossings of flow connectors). Instead of using flow connectors, one could kinda "teleport" to another place or exit the subprocess scope? Especially combined with an event subprocess, it feels powerful.
But, on the other hand, maybe it would create all sorts of crazy jumps that are just impossible to handle for the engine.
So has this simply not been a requirement yet, or is there some more fundamental reason opposing it?
There's no fundamental reason not to support it. In most cases a signal throw event could be used as well, so there was no specific need for a message throw or message end event. But we should definitely just implement it and support it in the Engine and the Modeler.
OK, thanks for clarifying.
An objection to using Signal throw events is the broadcast semantics (rather often you don't want to influence other process instances).
So, no need to rush this development, just my thoughts on what would be practical, for future reference:
A Message Throwing event should only send a message to the current process instance (read further). In terms of logic: the engine should find the current process instance (possibly crossing containing subprocess scopes) and find the message listener for the given message reference.
The person who creates the BPMN process should make sure that in all scenarios exactly one message listener for the given message reference is present when the message throwing event is reached. An error should be thrown if no listeners are found, or if multiple are found for the same message reference.
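The "exactly one listener" rule above can already be sketched with the existing Flowable Java API. The following is a minimal, hypothetical sketch (it needs a running Flowable engine; the message name, variable names, and the helper method itself are illustrative, not part of any proposed implementation):

```java
import java.util.List;

import org.flowable.engine.RuntimeService;
import org.flowable.engine.runtime.Execution;

public class MessageThrowSketch {

    // Hypothetical helper: deliver a message within one process instance,
    // enforcing the "exactly one listener" rule from the proposal above.
    public static void throwMessageInInstance(RuntimeService runtimeService,
                                              String processInstanceId,
                                              String messageName) {
        // Find all executions in this process instance that are
        // subscribed to the given message reference.
        List<Execution> listeners = runtimeService.createExecutionQuery()
                .processInstanceId(processInstanceId)
                .messageEventSubscriptionName(messageName)
                .list();

        // The proposal requires exactly one listener; anything else
        // should be treated as a modeling error.
        if (listeners.size() != 1) {
            throw new IllegalStateException("Expected exactly one listener for message '"
                    + messageName + "' but found " + listeners.size());
        }

        runtimeService.messageEventReceived(messageName, listeners.get(0).getId());
    }
}
```

This is roughly the lookup a built-in Message Throwing event would have to perform internally.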
I guess that some people would also be enticed to use these messages for communication between different processes (like the many multi-pool examples in BPMN books), but that might be tricky. Maybe this would work when adding a parameter "process business key" to the message throwing event. This would make the message be thrown to another process instance instead of the current process instance.
Note: In a way, there is some overlap with a way I think the REST API could be improved:
Problem: When sending a BPMN message, you must do a PUT "messageEventReceived" on a specific executionId. It is cumbersome to query the executions that are listening to that particular message reference.
Improvement suggestion: Allow a PUT of a messageEvent on a process instance id, optionally passing process variables (so let the API do the lookup of the execution id).
Thanks for the additional feedback, we'll have a look at adding the message event elements.
Just as an additional pointer, be aware that we added an extension to the signal event to restrict it to the process instance scope instead of the whole Flowable Engine:
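For reference, that extension is expressed on the signal definition in the BPMN XML; a minimal sketch (the signal id and name here are illustrative):

```xml
<!-- scope="processInstance" restricts delivery of the signal to the
     throwing process instance instead of broadcasting engine-wide. -->
<signal id="mySignal" name="My Signal" flowable:scope="processInstance" />
```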
Hey there.
Just wanted to inquire as to the time table of this request.
I'm asking because I'd like to know if I should re-model my BPMN to implement workarounds for the absence of these events.
I havenāt had the chance to test the engine yet, but I am learning it and planning to use it in a big project of ours soon.
If your modeller is anything to go by, you guys are doing amazing work.
Thanks in advance.
Sorry, I should've clarified my needs better. I want to send a message to an in-flight workflow process, not start one.
I need to send a message containing data to update the process variables, to get around the lack of a message throwing event.
That is what runtimeService.messageEventReceived(…) is for.
There are a few variations of it, but for your purpose you pass in the message name, the execution id (of the in-flight process) and the variables to send.
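A minimal sketch of that call, assuming a running Flowable engine and an execution that is waiting on the message (the message name "orderUpdated" and the variables are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

import org.flowable.engine.RuntimeService;

public class SendMessageExample {

    public static void notifyInFlightProcess(RuntimeService runtimeService,
                                             String executionId) {
        // Variables to set on the process instance along with the message.
        Map<String, Object> variables = new HashMap<>();
        variables.put("newStatus", "SHIPPED");

        // Triggers the message event subscription on that execution
        // and stores the variables, in one call.
        runtimeService.messageEventReceived("orderUpdated", executionId, variables);
    }
}
```

The execution id can be found beforehand with an ExecutionQuery filtered by processInstanceId and messageEventSubscriptionName.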
If I set the scope to process instance, will it also cover processes started using a call activity or sub-processes, or is it only for the same process instance?
This will only cover the same process instance.
But it would not be hard to expand this functionality to support the process instance including sub-process instances. If you need this functionality, could you create a GitHub issue?