Callback URL for Process Execution


I want to check whether a process I started has finished. Can I register a URL on a process so that Flowable calls this URL after the process execution finishes? If there is no such feature, what is the best way to listen for, or query, whether the process execution has finished?


  • You certainly could add an HTTP Task as the last task before completion for a low-code solution (make sure you handle errors in case your listener isn’t up).
  • For a more code-based solution, you could implement an execution listener on the process end.
  • You could also use async history to write the history to a message queue and pick the messages up from there.
  • If you just want to query the REST API, you can use the /history/historic-process-instances endpoint with the process id (example: http://localhost:8080/flowable-rest/service/history/historic-process-instances/21365603-9f36-11e9-aac0-ae403ebd7bc3) and look for an endTime.
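
For the last option, here is a minimal Java sketch of polling that endpoint. The base URL, the lack of authentication, and the crude string check on `endTime` are all simplifications for illustration; a real client should parse the JSON response with a proper JSON library.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HistoryPoller {

    // Crude check on the raw JSON body: the instance is finished once "endTime"
    // is present and non-null. Use a JSON library instead of string matching
    // in real code.
    public static boolean isFinished(String json) {
        int idx = json.indexOf("\"endTime\"");
        if (idx < 0) {
            return false;
        }
        String rest = json.substring(idx + "\"endTime\"".length()).trim();
        rest = rest.startsWith(":") ? rest.substring(1).trim() : rest;
        return !rest.startsWith("null");
    }

    // Fetches one historic process instance from the Flowable REST API.
    // The base URL is a placeholder; add authentication as your setup requires.
    public static String fetchInstance(String baseUrl, String processInstanceId) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/history/historic-process-instances/" + processInstanceId))
                .header("Accept", "application/json")
                .GET()
                .build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```

`isFinished(fetchInstance("http://localhost:8080/flowable-rest/service/history", id))` would then tell you whether the instance has ended.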


@wwitt Thanks William.

Choice 3 seems to fit my requirements well.

I found this project on GitHub, but my version is 6.5.0 and the Flowable version used on GitHub is outdated: AsyncHistoryJobMessageReceiver now expects a commandExecutor rather than a processEngineConfiguration. How can I pass the commandExecutor on version 6.5.0?

If you need a CommandExecutor to pass to AsyncHistoryJobMessageReceiver, you could just call processEngineConfiguration.getCommandExecutor(). Though, admittedly, I haven’t set up an AsyncHistoryJobMessageReceiver before.

Hi @wwitt, I already did what you advised, but although I started many processes, MyJobMessageHandler’s handleJob method was not triggered even once. It looks like there is something more to be configured. Who can help us with this? :slight_smile:

Edit 1
I autowired as below

private ProcessEngineConfiguration engine;

engine is not null but engine.getCommandExecutor() returns null.

Edit 2

I obtained the processEngineConfiguration as follows. The commandExecutor is not null now, but the overridden handleJob is still not being triggered.

public ProcessEngineConfiguration processEngineConfig;

public PlatformTransactionManager transactionManager() {
    DataSourceTransactionManager transactionManager = new DataSourceTransactionManager();
    // (datasource setup elided; the transaction manager needs one set)
    return transactionManager;
}

public ProcessEngineConfiguration processEngineConfiguration() {
    SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
    // Async history configuration
    // Optional tweaking
    // To speed up the example. Don't use this in production, it'll hammer the db.
    return config;
}

public ProcessEngine processEngine() {
    return processEngineConfiguration().buildProcessEngine();
}
The issue is that the CommandExecutor is not configured when the AsyncHistoryJobMessageReceiver is initialized. Rather than figure out how to delay bean initialization, this example project just sets it every time a message is received. The first few messages error out until the process engine is completely up. **Note that by using message-based history, you are taking responsibility for storing history.**
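
The workaround described above might be sketched like this. The package paths and the `setCommandExecutor`/`messageForJobReceived` methods reflect Flowable 6.x as best I recall them; the listener class, field names, and queue wiring are illustrative, not from the example project itself.

```java
import org.flowable.engine.impl.cfg.ProcessEngineConfigurationImpl;
import org.flowable.engine.impl.history.async.message.AsyncHistoryJobMessageReceiver;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;

// Sketch of the workaround: resolve the CommandExecutor lazily, on every
// message, because it is not yet available when this bean is created. Until
// the engine is fully up, messages are skipped and left to the broker's
// redelivery mechanism.
@Component
public class LazyAsyncHistoryListener {

    @Autowired
    private ProcessEngineConfigurationImpl processEngineConfiguration;

    // Also set your AsyncHistoryJobMessageHandler on this receiver (omitted here).
    private final AsyncHistoryJobMessageReceiver receiver = new AsyncHistoryJobMessageReceiver();

    // Call this from your message-queue listener (e.g. a @RabbitListener method)
    // with the async history job id carried by the message.
    public void onMessage(String jobId) {
        if (processEngineConfiguration.getCommandExecutor() == null) {
            return; // engine still starting; let the broker redeliver
        }
        receiver.setCommandExecutor(processEngineConfiguration.getCommandExecutor());
        receiver.messageForJobReceived(jobId);
    }
}
```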

Here’s an example based on flowable-examples.

Hi @wwitt

Thanks for sharing. Before I dig into the examples, I want to be sure about one thing. I thought I could use AsyncHistoryJobMessageReceiver without RabbitMQ or any other data store implementation: I assumed AsyncHistoryJobMessageReceiver registers itself with the ProcessEngine, and the ProcessEngine sends events to the registered receivers. But it seems the execution does not flow the way I imagined; it first needs a message broker in the middle to deliver messages to the receivers. Am I right?

My second question is about the bold part of your post. If I get all of this working and start to receive messages, do I also need to persist this data to the Flowable history tables after processing the messages? I thought this feature acted like a wire tap, but I started to doubt that after reading your post.

Third and last question :slight_smile: I am calling the REST API to start a process and trying to hook the event in another, newly created Process Engine (in this case, the process engine from your examples). Is that a reasonable, logical thing to try? Are these Process Engines aware of each other, and can they send messages to each other when something historical happens? Is that what AsyncHistoryJobMessageReceiver is for?


Let’s address async history a little. In the default setup, storing historical data is done in the same database transaction that updates the runtime environment. Async history’s general purpose is to speed up processing by deferring writes to the history tables (I’ve linked to some benchmarks below). When you offload that writing to another service via a message bus, as in this example, the gains are even greater, but you become responsible for persisting history in some way. Since you have access to every change, you can also slip some custom logic in.

Whether or not capturing every historical event makes sense depends on your case. My guess is that it probably doesn’t; you’d probably be better off handling it in the process with either an HTTP task or an execution listener.
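
As a minimal sketch of the execution-listener route: the class below implements Flowable's ExecutionListener interface and would be attached to the process end event in the BPMN XML. The notification itself is a placeholder; everything besides the Flowable interface names is illustrative.

```java
import org.flowable.engine.delegate.DelegateExecution;
import org.flowable.engine.delegate.ExecutionListener;

// Minimal execution listener sketch: attach it to the process end event
// (e.g. via flowable:executionListener in the BPMN XML) and notify your
// own service from notify().
public class ProcessEndedListener implements ExecutionListener {

    @Override
    public void notify(DelegateExecution execution) {
        String processInstanceId = execution.getProcessInstanceId();
        // e.g. POST processInstanceId to your callback URL; catch and log
        // failures so a dead listener target doesn't fail the process.
        System.out.println("Process finished: " + processInstanceId);
    }
}
```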

To your question on how process engines interact: the process engine is configured in a very microservice-like way. State is kept in the database, with locks in place so multiple engines don’t attempt to complete the same tasks at the same time. You can scale up as many instances of the process engine as you need, and they’ll all act in concert as long as they’re all pointing to the same datasource. In my example, I probably should have had the receiver connect to an H2 data source so it didn’t inadvertently try to process tasks.

@wwitt hi again, thank you for your explanation. I chose to check the status of started processes via the Java API. I didn’t choose REST because I didn’t want to depend on the HTTP layer. For those who need such a feature, I used the method below on the process engine.


I query it every 30 seconds with the processInstanceIds that I started. Hope it helps.
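
The original method isn’t shown above. As a hedged sketch of the approach, a 30-second poll could separate the scheduling from the Flowable check so the check stays pluggable; the poller class below is illustrative, not the poster's actual code.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;
import java.util.function.Supplier;

// Illustrative poller: runs the supplied check on a fixed period and completes
// a future once the check returns true. Keeping the check pluggable means the
// scheduling logic does not depend on Flowable and is easy to test.
public class ProcessCompletionPoller {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor(r -> {
                Thread t = new Thread(r);
                t.setDaemon(true); // don't keep the JVM alive for polling
                return t;
            });

    public CompletableFuture<Void> awaitFinished(Supplier<Boolean> isFinished, long periodSeconds) {
        CompletableFuture<Void> done = new CompletableFuture<>();
        AtomicReference<ScheduledFuture<?>> handle = new AtomicReference<>();
        // First check runs after one period, then every period until finished.
        handle.set(scheduler.scheduleAtFixedRate(() -> {
            if (isFinished.get()) {
                done.complete(null);
                handle.get().cancel(false);
            }
        }, periodSeconds, periodSeconds, TimeUnit.SECONDS));
        return done;
    }
}
```

With a Flowable engine available, the check might be something like `() -> historyService.createHistoricProcessInstanceQuery().processInstanceId(id).finished().count() > 0` (these are standard HistoryService query methods), called with a 30-second period; I can’t confirm this matches the exact method the poster used.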