Haven’t we beaten this to death yet? Apparently not, if Keith Swenson and Boris Lublinsky have anything to say about it. The discussion is leading nowhere. Boris inadvertently sums up the pointlessness of it in his conclusion:
The confusion about BPEL and BPM in general seems to keep growing in the industry. There is still no agreement on the most basic BPM issues:
- Is BPM a business discipline or a software engineering one?
- Whose responsibility is it to implement (automate) a business process?
- Should we aim to move from design to deployment with no programming?
- Whose responsibility is it to maintain a business process?
Only by getting agreement on these issues will we be able to frame the BPEL discussion, and the whole BPMN/BPEL relationship discussion, correctly.
No, Boris. We will never get agreement on these issues. It is impossible. If standards like BPMN and BPEL are to have value, that value must assume a BPM community where there is no agreement on these things.
So let’s reframe the discussion. Imagine a new version of BPMN. Let’s call it, for argument’s sake, BPMN 2.0. It adds new elements that allow, for example, fault and compensation handling more compatible with BPEL. It elevates data to first-class status and provides BPEL-like assignment and manipulation. In fact anything you would need to execute the model would be provided in BPMN-standard attributes. It would even provide an optional “compatibility mode” where one or two flow topologies incompatible with BPEL, such as arbitrary loopbacks, could be excluded from the diagrams. If you looked at the XML representation you would call it BPEL-like, except that the constructs are BPMN elements and they match the diagram. And let’s say further that you wouldn’t have to populate all this implementation detail, but you could if you wanted to.
Now, the question is, would you want this? The hypothetical BPMN 2.0 team imagines that you would. If I were a BPEL-based BPMS vendor, I certainly would, if only to make BPEL relevant to the BPM community. But if I were a non-BPEL, BPMN-based BPMS vendor – like Appian, Adobe, Cordys, Fujitsu, Lombardi, Savvion, Singularity, Software AG, Tibco, or Vitria – would I want this? I have my suspicions but I don’t really know. It addresses Keith’s basic objection to BPEL, since now the picture is indeed the process. But in a way it commoditizes the runtime, which only favors the “stackers” and open source guys.
It’s time to stop beating the dead horse, when there is a new one very much alive and kicking.
If you look at this from the perspective of the people actually modelling and implementing “process”, isn’t the commoditisation of the engine a good thing? From a Business perspective, I just want to model the process visually, I don’t care how it gets implemented, that’s for the nerds. From an IT perspective, I want to execute a process that is the same as what the business guy wrote. And when he wants to change it (this is the point of BPM), it’s dead easy. That way, IT delivers the value it should.
Good post, interesting subject. I have a response at:
http://kswenson.wordpress.com/2009/02/04/is-the-bpmnbpel-debate-a-dead-horse/
-Keith
Andy,
You are really saying that for you, commoditizing the runtime is a don’t-care more than a good thing. If you were going to do process implementations, a commoditized runtime lowers your cost but reduces the rich diversity of available tools, and in practice is biased to the developer-oriented tools rather than the business-oriented ones. So it’s a mixed blessing. I wasn’t saying it was a good thing or a bad thing, just something that a lot of BPMS vendors might not want to support.
Keith,
Your response is interesting but I still think slightly off target. BPMN 2.0 could certainly be viewed as BPMNEL, as you say, but it could also be viewed as BPMNujitsu. It’s basically about standardizing an executable process design language. The BPMN XML is not assumed to be the native language for either design or execution, but just an interchange format between tools… like, say, XPDL, but with a bit more formal semantics.
That is right. I call it BPMNEL to distinguish it from the BPMN 2.0 effort which, as you point out, is more oriented toward an XPDL-like interchange format. That format will be criticized for all the same things that XPDL is criticized today. It is not that XPDL lacks formal execution semantics, it is that the styles of execution of the different engines are incompatible. If you want to specify a single clear, unambiguous execution style (like BPEL) you will lose the ability to have a common notation across several execution styles.
Steve White was very clear early on that BPMN is designed to be flexible enough to be used in a variety of different execution styles. (He called this different methodologies.) There is, then, an advantage to NOT specifying the execution semantics precisely.
If you arbitrarily choose the BPEL execution style, which has incorporated in its design tradeoff decisions on what is possible and not possible, you will be prevented from doing things like on-the-fly process modification. You can’t have your cake and eat it too.
OK… but the bulk of the BPMN 2.0 effort actually is to create BPMNEL, not in the sense of requiring BPEL as the execution environment but allowing it via unambiguous bidirectional mapping, as one possible subset. Because it is linked to MOF, you could map BPMN to any execution language you want, with your own extensions and deletions. That’s the MDA side of it, as opposed to the diagramming standard side of it.
The diagram portability part they don’t care about so much. Without my continual ranting about it, I doubt they would do anything about that at all. Even there the best we will have is a way to create non-executable diagrams that are schema-valid. To achieve model portability across tools, I think we will have to map the BPMN to XPDL 2.1 (or maybe 3.0 at that point) and do the portability work that way. Certainly WfMC understands the need for this far better than the BPMN 2.0 team in OMG.
With regard to your larger point on BPEL vs in-flight changes, you may be right, but since this seems a special case to me, it just proves my larger point: there is no single style of BPM, no single way to do process design. Trying to gain consensus on these things is pointless.
Hi Bruce,
Let me add some fire here:
1) BPMN will be neither the first nor the last modeling standard without semantics. The OMG wants to make us believe that by defining a “meta-model” (a sort of class diagram of the language constructs with additional verbal explanations) you will get a semantics. False. A meta-model defines an abstract syntax, not a semantics. I bet you a beer (or two for that matter) that when vendors finish implementing BPMN 2++ in their tools, we will see how vendor X managed to interpret the standard differently from vendor Y. To achieve consensus on the semantics side, vendors will have to carefully watch one another and cooperate beyond the initial standardization process. Achieving consensus on a semantics is usually an iterative process, not a one-off. The only way to achieve some level of upfront consensus on the semantics would be if someone imposed a ‘reference implementation’ that everybody agreed to follow (e.g. something akin to the Sun Java Virtual Machine), or a super-detailed formal semantics, but this won’t happen.
2) Very often, standardized languages (or protocols for that matter) do not give tool interoperability. The truth is that standards are not enough to ensure tool interoperability. It also requires willingness to cooperate among companies (pushed by user demands). Take SOAP, WS-*, etc. In recent times, WS platforms from different vendors have finally achieved reasonable levels of interoperability, but standards were not the only reason for this, perhaps not even the main one. It required willingness by the vendors to cooperate with one another, either through collective efforts (e.g. WS-I) or bilateral ones (e.g. Project Tango). I don’t think that BPM vendors at this stage are sufficiently willing to cooperate to achieve interoperability. And my take is that only their customers, not standardization bodies, can change their minds. When customers start paying to migrate from tool X to tool Y, vendors (especially vendors of Y) will jump into it and do it in whatever way it should be done, including using XPDL 3++ if needed.
3) BPMN 2.0 has to come out soon or not at all. There is a recurrent pattern of standards at the OMG taking forever to come out. I remember being at the UML conference in 2001 and hearing that UML 2.0 would come out soon. Soon meant something like three and a half years later.
Re: the BPMN vs. BPEL debate, can someone put this debate to rest once and for all? It is not helpful at all to the BPM/BPMS community. The BPMS community has no more than two years to make a significant impact in industry, to the extent that it becomes mainstream. Past this window of opportunity, disillusionment will simply kill the momentum. The broader BPM community, on the other hand, might still make an impact, because BPM is much more than BPMS. And BPMN might very well survive, not because of the OMG effort, but because analysts (and not developers) will have made it their language.
Marlon,
I agree with almost everything you say. I have tried to call attention to what is going on in BPMN 2.0, but I am trying to work within the team to make it as good as it can be. It is up to other influencers such as you, Keith, Ismael, and others to urge OMG to wake up and smell the coffee. I can’t be the only voice out there on this.
The reference implementation idea makes sense. I think that is what Ismael was trying to do with BPMLab, where BPEL is the target environment. Since Oracle and IBM are the biggest BPEL vendors it would make sense for the BPMN 2.0 team in OMG to do that as well, but BPMN is still target environment-independent – a good thing – and I don’t see that effort happening.
I agree the goal of standardizing execution semantics across tools is probably far-fetched and is not clearly something many vendors want. That is why I am focusing on portability of abstract models – diagrams – across tools. I can tell you for certain that is something users desperately want. That is not a major focus of the BPMN 2.0 team, although they seem to be moving toward allowing it.
I’m not sure about Marlon’s background, but I’m also not sure about his idea of what else would or could represent the semantics of an arbitrary spec (like BPMN 2.0) in a formal way!
The metamodel to a great extent provides formality and solid ground to represent semantics, in terms of defining concepts, attributes, relationships among concepts, restrictions on those relationships, data types and enumerations, etc.
It’s the way vendors implement the metamodel that makes it valuable or not. The bad news here is that most vendors use either XML-based or DBMS technologies for such an implementation. The former is handicapped when it comes to capturing semantics, since XML is only a syntax for data exchange. A DBMS does somewhat better, but cannot manage the complex relationships in a metamodel such as the one that comes with the BPMN 2.0 spec, which has 16 packages (not yet finished) with many relationships (200+) among concepts; and what makes it worse is that many of those relationships cross packages.
Does that mean we’re out of options? Of course not. If we use the right technology to capture the semantics in the BPMN 2.0 metamodel, we might see light at the end of the tunnel. Ontology and semantic technology come to the rescue here. An ontology is a knowledge representation that inherently and natively captures semantics in a formal, standard, explicit way. As a standard, it’s backed by the W3C, with a standard query language (SPARQL) and many mature open source APIs.
With BPMN 1.x, that metamodel was not available, and vendors did it their own way, mainly based on their own best interests.
The reluctance of, and opposition by, some vendors toward the OMG BPMN 2.0 metamodel and its complexity comes either because they are stuck on how to implement it, or because they want to stay with proprietary toy XML models (such as the one Intalio donated to Eclipse) in their products’ best interest.
It’s just a matter of time until these products begin to be seen as proprietary and outdated.
Scott
Scott,
Mmmm… I’m not really buying it. Marlon is one of the most respected academics in BPM, and he’s published a lot about BPMN from the computer science perspective. He is a big boy and can defend himself, but I’ll take issue with your bit about vendors doing their own thing with BPMN 1.1 because it did not have a MOF metamodel. I don’t see much semantic variability in the part of BPMN 1.1 that BPMS vendors have actually adopted – task, subprocess, XOR gateway, parallel split and join. They didn’t adopt the execution attributes, because they were cumbersome and there was no benefit in doing so… not because of vague semantics. OMG did not insist that vendors support (in any fashion) all the BPMN shapes and symbols in v1.1, and they won’t in v2.0 either, MOF or not. The BPMN 2.0 spec does not even enumerate the rules of BPMN, and you can’t validate an instance against them using the metamodel, either… just conformance with the syntax. The only tool conformance standard in BPMN 2.0 is the ability to display all the shapes and symbols and view/enter the other elements and attributes in property sheets. So what is going to be so magically different about it now that it is based on a UML metamodel?
I am a supporter of BPMN 2.0, and I want it to be as good as it can be. That’s why I get up early 3 days a week for 7am teleconferences. It is adding a few valuable things to BPMN 1.1 – non-interrupting events and an official interchange format most important among them. But I would guess that 99% of BPMN users do not – and should not – care that it will be based on a formal metamodel.
Bruce,
My main point was that a formal standard metamodel is a big step forward in capturing semantics in an explicit way, no matter whether a given company adopts it or not. From a user’s perspective, the metamodel does not matter, because users don’t care what’s behind the scenes; but from a vendor’s perspective, it provides (for the first time) a great opportunity to consistently handle the semantics that drive BPMN diagrams.
Once you handle semantics, you can do a lot more than check syntax conformance. A quick example would be validating your diagram (instance) against the metamodel. For example, you can easily check that a Choreography or Collaboration diagram must have 2+ Pools, which is stated in the metamodel.
It does not make sense that, just because you won’t use the metamodel or don’t know how to use it, you undermine what the OMG is doing. Someone else, or some vendor, will make the right use of it. So until you (or Marlon) can provide a formal alternative way/format/mechanism, different from the metamodel, as a base for handling semantics, stop trashing it.
Scott
Scott,
I actually agree with you that having a syntax (e.g. a meta-model or a schema) is better than not having one. It allows vendors to implement tool functionality in order to validate BPMN models from a syntactic perspective.
I also partly agree with you that: “the metamodel to great extent provides a formality and solid ground to represent semantics in terms of defining concepts, attributes, relationships among concepts, restrictions on those relationships, data types and enumerations, etc.”
But my point was that the meta-model does not define a semantics, and particularly not an execution semantics. This is a myth that the OMG has been spreading for many years, and I’m afraid they’ll do it again with BPMN. By the way, David Harel (author of the statecharts notation) has written a nice article about this topic:
http://www.wisdom.weizmann.ac.il/~harel/papers/ModSemantics.pdf
In particular, I like this quote from Harel’s paper: “The metamodel is a way to describe the language’s syntax; it is a crucial precursor, but it is not the semantics itself. Knowing what a language looks like does not equate with understanding what it means”
For example, one can say that “a BPMN gateway may be an inclusive merge gateway”, but that tells me absolutely nothing about the meaning of an inclusive merge gateway (when should it fire, when should it not fire).
My second point is that standardizing on the semantics is not easy and it requires a lot of will from vendors. Part of this effort can be done within a standardization body, but part of this effort needs to be done in addition to the standardization effort.
As Bruce points out, one way to start building consensus around a semantics is by defining strict conformance criteria (what does it mean to be BPMN-compliant?). Secondly, it helps a lot to define conformance test suites to complement the standard. This approach has been used in many other standardization efforts, e.g.:
http://www.w3.org/XML/Test/
http://www.thefreelibrary.com/W3C+and+NIST+release+DOM+Conformance+Test+Suite.+(News+Briefs)-a090984484
Another way to build consensus on semantics is to have a reference implementation. Or yet another way is to define translations from BPMN to other languages that have a semantics, such as translations from BPMN to Petri nets or translations from BPMN to YAWL:
http://eprints.qut.edu.au/7115/1/7115.pdf
http://bpt.hpi.uni-potsdam.de/pub/Public/GeroDecker/bpmdemo2008-bpmn2yawl.pdf
There are many approaches to achieve consensus on the semantics of BPMN. But at the end, what matters is that there is a will from vendors to reach this consensus, as well as to achieve tool interoperability.
It’s great to see people like Bruce taking leadership on these issues.
Again me…
I forgot one small thing in my previous post. Scott mentions that
“Once you handle semantics, you can do a lot more than check syntax conformance. A quick example would be validating your diagram (instance) against the metamodel. You can easily check that a Choreography or Collaboration diagram must have 2+ Pools, which is stated in the metamodel.”
This is a typical example of syntax checking. In the same way, checking that a BPD with a start event also has at least one end event is syntax checking.
Stating exactly when an inclusive merge gateway should fire, and when it should not fire: that’s semantics.
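To make the distinction concrete, here is a minimal Python sketch (the model and policy names are illustrative, not taken from any spec) of two plausible firing policies for an inclusive merge. Both are consistent with the same abstract syntax, a gateway with n incoming flows, yet they behave differently at runtime; choosing between them is exactly the execution semantics that a metamodel alone does not pin down.

```python
# Two plausible, mutually incompatible firing policies for an inclusive
# merge (OR-join). Both satisfy the same abstract syntax: "a gateway
# with n incoming sequence flows."

def fires_eager(tokens_waiting):
    """Policy A: fire as soon as any incoming flow delivers a token."""
    return any(tokens_waiting)

def fires_synchronizing(tokens_waiting, tokens_upstream):
    """Policy B: fire only when at least one token has arrived AND no
    more tokens can still arrive on any incoming flow (the classic
    OR-join synchronization rule)."""
    return any(tokens_waiting) and not any(tokens_upstream)

# Same state, different answers: one token has arrived on flow 0,
# another token is still travelling toward flow 1.
waiting = [True, False]    # token present on incoming flow 0
upstream = [False, True]   # token still en route to incoming flow 1

print(fires_eager(waiting))                    # True: fires immediately
print(fires_synchronizing(waiting, upstream))  # False: waits for flow 1
```

Two engines that each pass syntax validation against the metamodel could implement either policy and produce different process behavior from the identical diagram.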
Is it correct to equate a schema with a formal metamodel? Any XML format has to have a schema to be usable in practice. XPDL, so often mocked by the metamodel people as having no metamodel, has a schema. Some of BPMN’s rules, such as Scott’s correct assertion that going forward you must have at least 2 pools in a diagram, can be validated using XSD. But other rules, just as basic, such as the rule that a sequence flow cannot connect two pools, cannot be validated by XSD. I don’t think it can be validated even by the UML metamodel.
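As a rough illustration, here is a hedged Python sketch of checking such a cross-reference rule programmatically; the element names are made up for this example, not taken from the actual BPMN schema. The rule relates two parts of the model only reachable by following references, which is the kind of constraint that falls outside plain XSD validation.

```python
# Hypothetical, simplified model: which pool each flow node belongs to,
# plus the sequence flows connecting nodes. Names are illustrative only.
pool_of = {"taskA": "pool1", "taskB": "pool1", "taskC": "pool2"}
sequence_flows = [("taskA", "taskB"), ("taskB", "taskC")]

def illegal_flows(pool_of, flows):
    """Return sequence flows whose source and target sit in different
    pools -- a BPMN rule that cannot be expressed in XSD, because it
    compares two attributes reachable only by dereferencing IDs."""
    return [(src, tgt) for src, tgt in flows if pool_of[src] != pool_of[tgt]]

# taskB -> taskC crosses from pool1 to pool2, so it is flagged.
print(illegal_flows(pool_of, sequence_flows))  # [('taskB', 'taskC')]
```

A validator built on the model (rather than on the serialized XML alone) can enforce rules like this; the schema by itself cannot.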
It seems that we’re hitting a technical issue rather than a business one.
First of all, there is a debate here about syntax and semantics, as they’re being used interchangeably. In my opinion (I could be wrong), relationships, and the restrictions on them, represent the core semantics of any data model/model (in addition to some business rules).
Any valuable information (semantics) you know about the metamodel should be added up front so that it becomes formalized. The inclusive merge gateway case (Marlon’s example), of when it should fire and when it should not, is a good example of adding that extra information/restriction to the metamodel to get more robust behavior from that gateway at runtime.
The myth here is not what the OMG is delivering; rather, it’s the way vendors implement the metamodel to capture semantics and build, for example, an executable BPMN engine, and the OMG does not enforce any specific implementation. What matters most is choosing the right implementation, one that can accommodate such complexity in a consistent way.
Here comes the question of what the right implementation for complex metamodels would be. You will hear different opinions based on people’s backgrounds. The XML camp would say XML/XSD can do everything (as Bruce pointed out). Personally, and based on my technical background, I wouldn’t buy this.
As I pointed out before, all XML-based technologies are handicapped when it comes to handling semantics (for reasons that can be discussed later). XSD might work fine for small models/metamodels, but it would certainly blow up and become unmanageable for complex ones such as BPMN 2.0.
A DBMS can do a little better, but it is very weak at handling complex relationships (especially across packages) such as the ones in BPMN 2.0.
One straight shot here would be using ontology and semantic technology to solve this semantic issue inherently and natively, in addition to the benefit of their being formal W3C standards. The power of using an ontology to handle semantic complexity in a consistent and robust way is a natural capability that puts ontology in a superior position over other technologies. In addition, you get inferencing capability, to deduce implicit information not stated in the metamodel. This capability is not provided by the other technologies; it is unique to ontologies.
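As a toy illustration of that inferencing claim, here is a Python sketch of forward-chaining over triples in the spirit of RDFS subClassOf reasoning. Real ontology stacks (OWL reasoners, SPARQL engines) do far more, and the class names below are illustrative, not from the BPMN metamodel itself.

```python
# Explicitly stated facts, as subject/predicate/object triples.
triples = {
    ("ExclusiveGateway", "subClassOf", "Gateway"),
    ("Gateway", "subClassOf", "FlowNode"),
}

def infer_transitive(triples, predicate="subClassOf"):
    """Forward-chain the transitivity rule for one predicate until no
    new facts appear: if (a,p,b) and (b,p,c), then infer (a,p,c)."""
    facts = set(triples)
    changed = True
    while changed:
        changed = False
        for a, p, b in list(facts):
            if p != predicate:
                continue
            for c, q, d in list(facts):
                if q == predicate and b == c and (a, predicate, d) not in facts:
                    facts.add((a, predicate, d))
                    changed = True
    return facts

inferred = infer_transitive(triples)
# This fact was never stated; it follows from transitivity.
print(("ExclusiveGateway", "subClassOf", "FlowNode") in inferred)  # True
```

The point is only the principle: a reasoner can surface facts implied by, but not written into, the model, which plain XML or relational storage does not do for you.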
So, is there a catch to solving that semantic problem so easily? Yes, a small one. To do something useful with ontology and semantic technology, the learning curve is steep. So it will either take a long time, or you hire an expert to get the job done (don’t let the carpenter design your house!).
The value of the OMG metamodel in XMI (not plain XML) format, as the blueprint for all kinds of implementations, is a tipping point (as Bruce pointed out before). The XMI format fits perfectly with ontology representations (RDF/OWL), since both have object-oriented roots. It also establishes a solid foundation for automating the transformation between the two formats, which is a huge advantage not visible to many people.
From now on, it does not matter who likes the metamodel or not; it only matters who can take it all the way and leverage it the most, and everyone else is going to stay behind in the dark.
Scott
Scott, I agree with you that there’s an awful lot of misunderstanding and arbitrary use of the terms “syntax” and “semantics”. However, I don’t think your post cleared any of this up. So let’s maybe figure out what you and Marlon meant when using those terms…
Without wanting to elaborate on the entire ontology approach once more (history repeats itself anyway), I believe you are very much arguing toward “semantics” from an object-related point of view. Meaning that you would like to put a clearer understanding to questions like “What exactly is an activity?” or, more concretely, “What does my activity ‘approve order’ do with my business object ‘order’?” (I didn’t quite get what meta-level of annotation you’re on; it doesn’t matter anyway, because the matter discussed here is a completely different one.)
This certainly is a correct use of the term “semantics”, since it captures the meaning of semantically related concepts. However, the kind of semantics discussed by Bruce and Marlon can more precisely be described as “execution semantics”…
That is, Marlon’s and Bruce’s (and my) understanding of “semantics” does not involve annotating anything in order to explain “what something is”, but is rather about “how a syntactically correct model is to be interpreted during execution”. (To put it in your gateway example: it’s about how a gateway with n incoming and m outgoing arcs is actually interpreted, rather than what certain condition expressions mean in a particular context.)
And unless there is some sort of reference implementation, or at least a semi-formal specification of how to execute a BPMNEL/BPMN 2.0 model, this will make reuse of process models impractical (so, no different from today).
More precisely: a BPMNEL/BPMN 2.0 model which is syntactically correct according to whatever kind of metamodel (and maybe even annotated with some fancy kind of ontological knowledge) will result in two different execution behaviours in two different vendors’ execution engines…
That’s nothing about liking or disliking any of the ongoing standardization efforts. Actually, it’s just about the _completeness_ of those efforts! The consequences are pretty clear: if a sufficiently detailed execution semantics definition is included, there might be a chance of interoperability within the next 2-3 years. If not: history repeats itself (as I already put it). 😉
Cheers,
Torben
Bruce,
Just my opinion, but commoditization of the runtime isn’t really of concern to BPMS vendors – the Process Manager is only one important piece of the BPMS puzzle and there’s plenty of room for differentiation.
We’ll know when BPM has “really” reached the mainstream when we begin to see software companies making money selling “add-ons” to the major BPMS suites.
-John
John,
Your comment is a breath of fresh air. Thank you for reacting to the substance of the post.
You may be right, I don’t know. It is clear that the BPEL-based BPMS vendors will adopt executable BPMN 2.0, since they are writing the spec. The question is whether the non-BPEL ones will, like the ones I listed, who in my view are the ones responsible for the current popularity of BPMN in the marketplace. I’m saying I doubt they will adopt more than the abstract flow portion (i.e. what displays in the diagram), because their incentive to do so is purely defensive, i.e. if enough others do they might be forced to go along or be considered proprietary. This would take a few years to play out. But your view, which is they have nothing to fear from a commoditized BPMN “engine”, could in the end be correct.
I suspect the issues around metamodels and semantics, etc., are more important to new tools, coming from a SOA or MDA direction, thinking about how to support BPMN in the future rather than to the BPM Suites based on BPMN 1.1 today.
–Bruce
[…] previous post on Reframing the BPEL vs BPMN Debate triggered a lively comment thread that somehow got wrapped up in the distinction between semantics […]