The coolest thing I saw at Process World was definitely ARIS Process Performance Manager (PPM), specifically its ability to autogenerate the as-is model by instrumenting the backend systems that perform its activities.  IDS CTO Wolfram Jost mentioned this in his keynote, and there were a number of comments about it in my Almost Live… post on Thursday.  If you missed the thread, Marlon Dumas pointed me to an excellent academic paper on this technology, called “process mining,” by Wil van der Aalst and colleagues.  Others commented that it couldn’t do magic, and Kiran Garimella did a strange riff on it as well.  I obviously didn’t explain it very well, because at that time I hadn’t seen it.  But now I have.  It’s not magic at all, but still very cool, I think.  And something Kiran might actually want to take a second look at for webMethods.

Let’s strip away the automagical part.  First of all, PPM is just about the as-is process, not the improved to-be model, which would be modeled in ARIS Business Architect.  In that regard, PPM does not attempt to capture every bit of process logic in the flow, but just a diagram that provides context for KPIs.  One of the essential differences between BAM from BI vendors and BAM from BPM vendors is that BPM provides a process context for the KPIs. You can drill down to see the root cause of problems: which subset of instances, which step.  PPM focuses on exactly that.

Second, it requires the systems that it monitors to provide instance data in some structured form.  That structure includes some instance identifier (for correlation), a timestamp, and various attributes about the activity and state.  The magical middleware is an “extractor” (I would call it an event adapter) that looks at events or logged instance data that the backend system already creates.  ARIS provides extractors for SAP, databases, and files (which must be structured, e.g. delimited, not free text… those would require custom parsing extractors).  So if an application in your process is hardcoded in COBOL and doesn’t generate events or log data, PPM isn’t going to mine it successfully.  But did you really think it would?
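To make the extractor idea concrete, here’s a minimal sketch in Python of what such an adapter might do with a delimited log file.  All the column names (`order_id`, `ts`, `activity`) and the log format are my own assumptions for illustration, not the actual ARIS extractor interface:

```python
import csv
from dataclasses import dataclass, field

@dataclass
class Event:
    instance_id: str           # correlation key tying events to one process instance
    timestamp: str             # ISO-8601 string, good enough for ordering
    activity: str              # name of the milestone/function observed
    attributes: dict = field(default_factory=dict)  # extra state the system logged

def extract_events(lines):
    """Parse delimited log lines into structured event records."""
    events = []
    for row in csv.DictReader(lines):
        events.append(Event(
            instance_id=row["order_id"],
            timestamp=row["ts"],
            activity=row["activity"],
            attributes={k: v for k, v in row.items()
                        if k not in ("order_id", "ts", "activity")},
        ))
    return events

# Hypothetical log fragment from a backend system
log = [
    "order_id,ts,activity,widget",
    "1001,2007-06-01T09:00,Order Created,blue",
    "1001,2007-06-02T14:30,QC Passed,blue",
]
for e in extract_events(log):
    print(e.instance_id, e.activity, e.attributes)
```

The point is simply that the extractor translates whatever the source system already writes into a uniform event record; it adds no intelligence of its own.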

PPM can work either with or without a predefined ARIS model for the process.  Here’s how.

You start by instrumenting key events in your process, represented for example by document creation events in your SAP system.  Instrumenting the event means watching it with an extractor.  You don’t do this for every activity in the process, just the key milestones you are using to monitor performance, typically measured by cycle time.  I suppose you could just put one at the beginning and another at the end, if you wanted to, but the idea is to be able to drill down to identify sources of problems in the KPIs.  Kiran, I hope this is starting to sound familiar to you.

The extractors, which could be distributed across multiple systems, generate process instance fragments in the EPC form of event-function-event and funnel them to a “process warehouse.”  If you have an ARIS process model already, I believe the names of events and functions are taken from that.  If you don’t (the autogenerate case), the designer has to provide those.  In the process warehouse, PPM then takes those fragments and merges them into a process chain for each single instance.  This doesn’t show the logic at decision nodes because it’s just a trace of a single instance, along with the instance attributes, KPIs, and activity timestamps at each node.  So far, nothing magic.
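The merge step is straightforward once you see it: group the fragments by instance identifier, then order them by timestamp.  A toy sketch in Python (event tuples and activity names are invented for illustration):

```python
from collections import defaultdict

def merge_fragments(events):
    """Group event fragments by instance and order them by timestamp,
    yielding one process chain (trace) per instance."""
    traces = defaultdict(list)
    for instance_id, ts, activity in events:
        traces[instance_id].append((ts, activity))
    # Sort each instance's steps chronologically, keep only activity names
    return {inst: [a for _, a in sorted(steps)]
            for inst, steps in traces.items()}

# Fragments can arrive out of order and from different extractors
events = [
    ("1001", "2007-06-02T14:30", "QC"),
    ("1002", "2007-06-01T10:00", "Order Created"),
    ("1001", "2007-06-01T09:00", "Order Created"),
    ("1001", "2007-06-03T08:00", "Shipped"),
]
print(merge_fragments(events))
# {'1001': ['Order Created', 'QC', 'Shipped'], '1002': ['Order Created']}
```

Each resulting trace is exactly what the post describes: one linear chain per instance, with no decision logic, because a single instance only ever took one path.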

Then PPM can aggregate those instance models for any selected subset of instances – the ones with the fastest cycle times, or slowest, or the orders for blue widgets vs orders for red widgets, whatever instance data is available as a dimension for the KPI.  This seems trickier, but again not magic.  The aggregated models use some heuristic logic to generate the decision nodes based on the variety of paths taken in the aggregation.
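One common heuristic for this kind of aggregation (I don’t know the exact algorithm PPM uses, so this is a generic process-mining sketch) is to count directly-follows pairs across the traces; any activity with more than one observed successor becomes a decision node in the aggregated diagram:

```python
from collections import Counter

def aggregate(traces):
    """Count directly-follows pairs across a set of instance traces.
    An activity with more than one distinct successor implies a
    decision (branch) node in the aggregated model."""
    follows = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            follows[(a, b)] += 1
    return follows

# Invented traces: two clean runs, one with a rework loop
traces = [
    ["Order", "QC", "Ship"],
    ["Order", "QC", "Rework", "QC", "Ship"],
    ["Order", "QC", "Ship"],
]
model = aggregate(traces)
# QC was followed by both Ship and Rework, so the aggregated
# diagram would show a decision node after QC.
successors_of_qc = {b for (a, b) in model if a == "QC"}
print(successors_of_qc)
```

This is why the result is a heuristic approximation rather than magic: the decision nodes are inferred from the variety of paths actually observed, not from any knowledge of the real business rules.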

So in operation it works like this.  You instrument your current process with extractors.  PPM creates a management dashboard of KPIs.  It tells you which attributes of the instances correlate with the biggest variation in the KPI.  You can then slice and dice the KPIs by one or two dimensions – say cycle time by activity and type of widget.  Here, by “activity” I suppose I mean the time between two of the instrumentation points in your process – that’s as granular as you can get.  This activity diagram is autogenerated by PPM.  Aha! You see the overall cycle time problem stems from the QC step for the blue widgets, and you can drill down from there to the list of actual instances.
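The slice-and-dice step is essentially a two-dimensional group-by over the instance data.  A sketch, with made-up numbers where “segment” stands for the span between two instrumentation points:

```python
from collections import defaultdict
from statistics import mean

# One row per measured span: (widget_type, segment, elapsed_hours).
# All values here are invented to illustrate the drill-down.
rows = [
    ("blue", "QC",   30.0), ("blue", "QC",   34.0),
    ("red",  "QC",    6.0), ("red",  "QC",    7.0),
    ("blue", "Ship",  5.0), ("red",  "Ship",  5.5),
]

# Group cycle time by (widget type, segment) - the two dimensions
cells = defaultdict(list)
for widget, segment, hours in rows:
    cells[(widget, segment)].append(hours)

for key in sorted(cells):
    print(key, round(mean(cells[key]), 1))
```

In this toy data, the average for `("blue", "QC")` dwarfs everything else, which is exactly the kind of “aha” the dashboard is supposed to surface before you drill down to the individual instances.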

To me this is quite similar to what webMethods and Lombardi (possibly others) are trying to do with as-is performance monitoring prior to modeling, except that ARIS is taking the extra step of autogenerating a process context for the data.  Not magic at all, but how long does it take?  Molson-Coors supposedly got their first PPM process done in 10 days, although they are clearly one of IDS’s top customers and it sounded like the ARIS guys may have worked round the clock.  But it’s not months.

PPM has been around for 3 years, but even IDS’s best customers are just beginning to try it out.  Probably not the best marketing around it.  I think this technology has great potential and I would expect it to take off once people understand it better, and not just from ARIS.  They have some patent protection (supposedly), but Dumas and van der Aalst suggest that others will offer similar things soon.  Kiran, forget about those lizards!  This stuff is great.