So I have an array of integers that’s being stored as a process variable. This has been working for a while, but suddenly the query for getting all of my tasks from the task history table fails with this error:
Couldn’t deserialize object in variable ‘checkbookIdArray’
There have been updates made to the workflow as I do development on it, but the same workflow runs just fine in a different environment so I don’t think there’s a problem with the workflow itself.
The only other thing that changed was that recently the CM team upgraded our server, which resulted in an upgrade to the JVM for the server as well. I’m not sure if that caused a problem with the JARs that were installed for running Flowable or not.
Any ideas on what could have broken things? And/or how I can get my Flowable engine back? I can spin up processes just fine, but I can’t pull the list of tasks at the moment. Or is there a way for me to use the API to purge records, so if I can identify the problem rows I can at least remove them?
As an update on this one: the problem seems linked to the Java version that serializes/deserializes the object structure. First, this problem only showed up after we did a server upgrade (and under the hood, that also entails upgrading the Java version).
Second, I confirmed this is the root issue using 2 different web servers that I knew to be running slightly different versions of Java. Creating the entire process engine database in Server A rendered all the process instance variables unreadable from Server B.
So the problem now is, how do I maintain the ability to deserialize after a server upgrade? I can’t just kill Flowable and rebuild it every time; that’s a nuclear option that works in development but is untenable for production. Is it a matter of the JARs, i.e. do I need to make sure the appropriate JARs are included for backwards compatibility?
The problem is indeed that serialization can change slightly between JDK versions. The alternative is to not rely on serializable objects, but to store the variables in a format that is not prone to these changes, for example by transforming the object to a JSON variable (JsonNode) and back.
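As a rough sketch of that approach (the class name and process definition key below are made up, not from the original post), converting the id list to a Jackson ArrayNode before starting the process could look something like this:

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.node.ArrayNode;
import org.flowable.engine.RuntimeService;
import java.util.List;
import java.util.Map;

public class CheckbookProcessStarter {

    private final ObjectMapper objectMapper = new ObjectMapper();
    private final RuntimeService runtimeService;

    public CheckbookProcessStarter(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Convert the id list to a Jackson ArrayNode so Flowable stores it as a
    // JSON variable rather than a Java-serialized byte array.
    public String start(List<Long> checkbookIds) {
        ArrayNode checkbookIdArray = objectMapper.valueToTree(checkbookIds);
        Map<String, Object> variables = Map.of("checkbookIdArray", checkbookIdArray);
        // "checkbookProcess" is a made-up process definition key.
        return runtimeService.startProcessInstanceByKey("checkbookProcess", variables).getId();
    }
}
```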
I’ll take a look at that solution and see if it works, but I hadn’t actually considered transmitting the values to the process as JSON, then unpacking the JSON when I need to read the values.
Update on this. Converting to a JSON payload does work. You have to remember to deserialize the JSON when you need to use the array as an object, but it doesn’t store the objects as a byte array anymore, which means I’m no longer dependent on the JVM version.
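Roughly, the read-back side looks like the sketch below (the names are illustrative rather than my exact code):

```java
import com.fasterxml.jackson.core.type.TypeReference;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.flowable.engine.RuntimeService;
import java.util.List;

public class CheckbookIdReader {

    private final ObjectMapper objectMapper = new ObjectMapper();
    private final RuntimeService runtimeService;

    public CheckbookIdReader(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // The variable comes back as a JsonNode rather than a serialized byte
    // array, so we map it back to a List<Long> with Jackson at the point of use.
    public List<Long> readCheckbookIds(String executionId) {
        JsonNode node = (JsonNode) runtimeService.getVariable(executionId, "checkbookIdArray");
        return objectMapper.convertValue(node, new TypeReference<List<Long>>() {});
    }
}
```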
I’m going to be working on upgrading the servers for my development environment shortly, and will see if the problem emerges, but I’m pretty sure it won’t. We only started having this issue when I started storing the serialized data.
@jeff.gehly great that it works. One small tip from me: if you are using the objects only in your processes / cases, you can access them directly, e.g. myObject.field for an object, or the ArrayNode as a list.
In case you want to use the variables in your Java code and you don’t want to perform the deserialization manually every time, you can use a custom VariableType that serializes your object as JSON and deserializes it back into your object.
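A rough sketch of what such a VariableType could look like (the Customer POJO and the type name are placeholders, and error handling is kept minimal):

```java
import com.fasterxml.jackson.databind.ObjectMapper;
import org.flowable.variable.api.types.ValueFields;
import org.flowable.variable.api.types.VariableType;

// Sketch of a custom VariableType that stores a hypothetical Customer POJO
// as JSON bytes instead of a Java-serialized object.
public class CustomerJsonVariableType implements VariableType {

    private final ObjectMapper objectMapper = new ObjectMapper();

    @Override
    public String getTypeName() {
        return "customerJson";
    }

    @Override
    public boolean isCachable() {
        return true;
    }

    @Override
    public boolean isAbleToStore(Object value) {
        return value == null || value instanceof Customer;
    }

    @Override
    public void setValue(Object value, ValueFields valueFields) {
        try {
            valueFields.setBytes(value == null ? null : objectMapper.writeValueAsBytes(value));
        } catch (Exception e) {
            throw new IllegalStateException("Could not serialize Customer variable to JSON", e);
        }
    }

    @Override
    public Object getValue(ValueFields valueFields) {
        try {
            byte[] bytes = valueFields.getBytes();
            return bytes == null ? null : objectMapper.readValue(bytes, Customer.class);
        } catch (Exception e) {
            throw new IllegalStateException("Could not deserialize Customer variable from JSON", e);
        }
    }

    // Placeholder domain object; a real one would live elsewhere with proper fields.
    public static class Customer {
        public String id;
        public String name;
    }
}
```

If I remember correctly, custom types can be registered on the process engine configuration (e.g. via the customPreVariableTypes list), but double-check the docs for your Flowable version.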
I’ll keep that in mind. The reason I ran into this was that we read back the process variables to display in the UI, so I can give the user an update on the process status, what steps have been completed, etc. This particular array was included with the process variables because it really made sense to put it there. It was a list of correlation IDs between the process warehouse and my system records (stored in a different database). That array only came into play at the very end of the happy path for the process: the process makes a REST call to a remote endpoint that updates the database for the list of IDs.
Since I only use the ID array at the end of the process flow, it was simpler and required less rework to do the JSON conversion at process start and in the remote endpoint. That way I kept the underlying code the same, but now I’m able to avoid the JVM versioning headaches.
Can you please create a new post with more information about the problems you are facing? Information about the exceptions you are getting, the type of variables you are storing would make it easier for us to help you.
The problem that the original poster had was due to a change in the Java version and the way data was serialized. And as @joram said in Unable to serialize object in variable, it is best not to rely on serializable objects, but to store variables in a format that is not prone to Java serialization changes (e.g. using JsonNode).
@joram Thanks for your reply…
I want to pass a complex data type like a ‘Customer’ object when starting a process; within that process some of the attributes of ‘Customer’ will get updated, and finally I have to send the ‘Customer’ object back to a service. I want to serialize and deserialize the object.
Please advise whether implementing ‘VariableType’ or using JsonNode is more performance-efficient in this scenario.
Performance is not the only concern you should look at. I won’t get into a religious debate on the speed of JSON versus what the framework manages using VariableType (mostly because I’ve never done the benchmarks), but in the current web development world JSON parsing is reasonably fast.
Besides performance you should also consider time to implement, ease of debugging, and long-term maintenance. If you find yourself getting into the weeds of a framework, you are probably looking at a solution that could buy you raw performance at the cost of being a pain to troubleshoot and maintain.
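For reference, if you go the JsonNode route, the round trip for a ‘Customer’ could look roughly like this (the class, process definition key, and variable names are placeholders, not code from this thread):

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import org.flowable.engine.RuntimeService;
import java.util.Map;

public class CustomerProcessClient {

    private final ObjectMapper objectMapper = new ObjectMapper();
    private final RuntimeService runtimeService;

    public CustomerProcessClient(RuntimeService runtimeService) {
        this.runtimeService = runtimeService;
    }

    // Start the process with the Customer stored as a JsonNode variable.
    public String start(Customer customer) {
        JsonNode customerNode = objectMapper.valueToTree(customer);
        Map<String, Object> variables = Map.of("customer", customerNode);
        return runtimeService.startProcessInstanceByKey("customerProcess", variables).getId();
    }

    // Read the (possibly updated) variable back and map it onto the Customer type.
    public Customer read(String executionId) {
        JsonNode node = (JsonNode) runtimeService.getVariable(executionId, "customer");
        return objectMapper.convertValue(node, Customer.class);
    }

    // Placeholder POJO with Jackson-friendly public fields.
    public static class Customer {
        public String id;
        public String name;
        public String status;
    }
}
```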