I was expecting more feedback on my Executable BPMN 2.0 post. I did get a thoughtful and amusing rant from Alex Pavlov. He dismisses the whole idea of executable BPMN 2.0 as a cynical ploy by the middleware vendors that created it. Besides making some good points on the possibility of executable BPMN 2.0, he challenges me to defend why anyone would think adopting the standard is a good idea in the first place.
Alex asserts, probably with good reason, that BPMN 2.0 can never succeed as a process execution language because it tries to be “abstract and detailed at the same time.” I puzzled over that one for a bit, but I think he means that the abstract nature of the BPMN 2.0 metamodel is in conflict with the requirements for any language that an engine can reliably execute. And I completely agree with that.
You could, however, define a subset of BPMN 2.0 that is reliably executable. The subset that maps unambiguously to BPEL is one such. (I can already hear Michael Rowley laughing.) But larger subsets are possible as well, something like the Common Executable subclass expanded to a somewhat larger palette. Why didn’t they define such a subset in the BPMN 2.0 spec? (I don’t think Common Executable meets the test.) Ran out of time, possibly, or maybe Alex is right.
In addition to restricting the palette you need to validate the flow topology. BPEL is block oriented and BPMN allows gateways to loop back and do all kinds of twists and turns that business users like. Back in the day, most BPMN tools that offered BPEL export would give “interleaving” errors unless you created BPMN in a very un-businesslike block structure. But eventually the eClarus guys figured out how to solve that problem, leaving only a few pathological topologies to worry about. Besides, many BPMN 1.x-based BPM Suites have had no problem with executing BPMN flow constructs directly. So flow topology should not be the stopper.
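To illustrate the kind of topology check involved, here is a toy sketch of finding loop-back edges in a flow graph, the construct that a block-structured export like BPEL has to restructure or reject. All names here are illustrative, not taken from any real BPMN tool:

```python
# Toy sketch: detect back-edges (loop-backs) in a BPMN-like flow graph.
# A block-structured export must restructure or reject these; a direct
# BPMN executor can simply follow them.

def find_back_edges(edges, start):
    """Return edges that loop back to a node already on the current DFS path."""
    adj = {}
    for src, dst in edges:
        adj.setdefault(src, []).append(dst)
    back, on_path, visited = [], set(), set()

    def dfs(node):
        visited.add(node)
        on_path.add(node)
        for nxt in adj.get(node, []):
            if nxt in on_path:
                back.append((node, nxt))   # loop-back: breaks block structure
            elif nxt not in visited:
                dfs(nxt)
        on_path.discard(node)

    dfs(start)
    return back

# A businesslike rework loop: task -> review -> back to task
edges = [("start", "task"), ("task", "review"),
         ("review", "task"),               # loop back for rework
         ("review", "end")]
print(find_back_edges(edges, "start"))     # [('review', 'task')]
```

A real validator would of course work on the parsed BPMN graph and classify which of these cycles are pathological, but the core of the check is this traversal.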
But let’s say you could define a subset of flow elements – the Analytic subclass, for instance – and flow topologies that are reliably executable. Without runtime portability, is there really any benefit in serializing the design according to the BPMN 2.0 spec? Alex makes a good argument that interoperability of executable designs between tools or engines is never going to happen.
It was easier with BPEL – the only process actions allowed were sending and receiving SOAP messages, and all the good stuff happened inside external services. But with BPMN, every BPMS has its own notion of a human task and accompanying design tool for it. Except for those that support the WS-HumanTask standard (originally designed for BPEL), human tasks in BPMN 2.0 are never going to be portable between tools. And the same goes for most automated tasks, which are implemented by the set of service adapters that come with the suite. Each tool has its own set, and its own SDK for building your own, so this isn’t going to be portable either. That means that the only part of executable BPMN 2.0 that is conceivably portable between tools is the flow logic, not the task implementation.
But even without runtime portability, I think there is still a good reason for tool vendors to support BPMN 2.0 XML export for executable designs: it makes the design more transparent. The key reason is that it exposes the process data model: all the business objects, variables, task inputs and outputs, and gateway conditions, and shows how they are received, produced, or manipulated by the process. In most tools these definitions are locked up inside tool-specific binary formats, visible to others sharing the same IDE workspace but hidden to everyone else. BPMN 2.0 makes them visible in the same XML file that holds the process logic.
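To make that concrete, here is a minimal sketch of pulling process data definitions out of a BPMN 2.0 file with nothing but the Python standard library. The XML fragment and its ids are hypothetical; the element names (dataObject, property) and the namespace come from the BPMN 2.0 schema:

```python
# Minimal sketch: reading process data definitions straight out of BPMN 2.0 XML.
# The fragment below is a made-up example; dataObject and property are
# standard BPMN 2.0 elements.
import xml.etree.ElementTree as ET

BPMN_NS = "http://www.omg.org/spec/BPMN/20100524/MODEL"
xml = f"""
<definitions xmlns="{BPMN_NS}">
  <process id="orderProcess">
    <dataObject id="order" name="Order"/>
    <dataObject id="invoice" name="Invoice"/>
    <property id="approved" name="Approved"/>
  </process>
</definitions>
"""

root = ET.fromstring(xml)
ns = {"bpmn": BPMN_NS}
data_objects = [el.get("name") for el in root.iterfind(".//bpmn:dataObject", ns)]
properties = [el.get("name") for el in root.iterfind(".//bpmn:property", ns)]
print(data_objects)   # ['Order', 'Invoice']
print(properties)     # ['Approved']
```

The point is not the ten lines of parsing, but that this information is sitting in an open, standard file rather than in a proprietary workspace format.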
This should provide as much value to business analysts and architects as it does to developers. Just as the BPMN notation provides a graphical language for the activity flow that can be shared between business and IT, executable BPMN 2.0 provides an XML language for process data – and its interactions with the flow logic – that is shareable between business analysts and developers. Yes, it is more technical than a process diagram, but adopting this standard serialization of the execution-related details has a similar benefit: it communicates the process definition more broadly. Remember, the XML is just an interchange format. Tools will provide user-friendly ways of navigating and displaying the information. It doesn’t matter if each tool does that differently, as long as the meaning of the information is defined by the standard, not the tool.
Yes, with all the teasers in place I expected to get at least some feedback from readers of this blog. I am neither an educator nor a BPMN tools vendor, just a software developer with first-hand experience implementing BPM systems based on the BPMN specifications. My viewpoint is very practical and, judging by the mood of mainstream discussions and media buzz, somewhat controversial.
However, there is absolutely no need to defend the adoption of standards. As a practicing developer, I constantly suffer from incomplete and noncompliant implementations. Browsing through and editing fully standard-compliant XML produced by a third-party tool on each and every update, just to ensure jBPM can parse it, is a pain. Converting every task you need to attach a boundary event to into a process (just because the framework allows boundary events on processes only) simply negates all the benefits of BPMN: while it is perfectly OK for framework developers to suggest such workarounds, I can’t imagine sending my customer a diagram that was completely rehashed just to cope with framework shortcomings.
No tool developer should ever muse on whether adopting a standard is a good idea. There is nothing to discuss here.
The question that matters from a practical viewpoint is whether the full BPMN 2.0 specification can be implemented in a portable way, and if not, where the boundary should go. Which part of a process implementation must be managed by the BPMN framework, and which is better left to the system developer? Or, in other words, what level of detail should the BPMN specification provide?
When deciding which BPMN elements the framework should handle itself and which it should delegate to the application (in our case, the BPMN–Java demarcation line), we relied on a simple principle: if an element has no proper representation on the diagram, it goes to the Java domain. If an element cannot be shown on the diagram, it is not critical for describing the workflow, and the execution engine should not impose any requirements on its implementation; it should just provide an API to access the element if the implementation relies on it. Data inputs, outputs, associations, transformations, and so on should not be a concern for the framework at all (but all of these elements must remain accessible through the API if the application needs them).
In other words (I can’t help quoting you) “the only part of executable BPMN 2.0 that is conceivably portable between tools is the flow logic, not the task implementation”.
So, the ideal executable BPMN framework should have two layers: the portable flow-logic layer and an (actually optional) task, data, and user-interface layer.
The flow-logic layer should accept any standard-compliant BPMN XML and interpret only those elements that are relevant to the execution flow; it should keep track of process state (as a set of instantiated nodes and tokens) while delegating actual task execution and gateway logic predicates to the application (or to the second layer).
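The two-layer split can be sketched in a few lines. This is a toy illustration of the idea, not any real BPMN framework: the engine owns the tokens and the flow, while the application supplies task behavior as plain callables.

```python
# Toy sketch of the two-layer idea: the engine manages tokens and sequence
# flows; actual task execution is delegated to application-supplied handlers.
# All names are illustrative.

class FlowEngine:
    def __init__(self, flows, handlers):
        self.flows = flows          # node -> list of successor nodes
        self.handlers = handlers    # node -> callable(context), from the app layer

    def run(self, start, context):
        tokens = [start]            # process state = set of live tokens
        while tokens:
            node = tokens.pop()
            handler = self.handlers.get(node)
            if handler:
                handler(context)    # task execution delegated to the application
            tokens.extend(self.flows.get(node, []))
        return context

engine = FlowEngine(
    flows={"start": ["collect"], "collect": ["end"]},
    handlers={"collect": lambda ctx: ctx.update(order="received")},
)
print(engine.run("start", {}))      # {'order': 'received'}
```

A real flow-logic layer would also evaluate gateway predicates (again delegated to the application) and persist the token set, but the division of responsibility is the same.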
Again, sorry for so many paragraphs :o)
Alex
Again, good comments. I guess I am saying the justification for supporting the execution part of BPMN 2.0 is not portability — we agree it is not portable — but visibility, particularly of the process data associated with activities, gateways, events, messages, etc. They make the process logic clearer to the observer, even if execution is tool-specific.
If someone asks me what’s the deal with executable BPMN, I guess my first reply would be: “round-trip engineering”.
I’ve just found your blog – you shed light on some very nice aspects of the BPM world here.
Thanks,
Yonathan.