Thursday, March 27, 2008

The Limits of On-Demand Business Intelligence

I had an email yesterday from Blink Logic, which offers on-demand business intelligence. That could mean quite a few things, but the most likely definition is indeed what Blink Logic provides: remote access to business intelligence software loaded with your own data. I looked a bit further and it appears Blink Logic does this with conventional technologies, primarily Microsoft SQL Server Analysis Services and Cognos Series 7.

At that point I pretty much lost interest because (a) there’s no exotic technology, (b) quite a few vendors offer similar services*, and (c) the real issue with business intelligence is the work required to prepare the data for analysis, which doesn’t change just because the system is hosted.

Now, this might be unfair to Blink Logic, which could have some technology of its own for data loading or the user interface. It does claim that at least one collaboration feature, direct annotation of data in reports, is unique. But the major point remains: Blink Logic and other “on-demand business intelligence” vendors are simply offering a hosted version of standard business intelligence systems. Does anyone truly think the location of the data center is the chief reason that business intelligence has so few users?

As I see it, the real obstacle is that most source data must be integrated and restructured before business intelligence systems can use it. It may be literally true that hosted business intelligence systems can be deployed in days and users can build dashboards in minutes, but only under the heroic assumption that the proper data is already available. Under those conditions, on-premise systems can be deployed and used just as quickly. Hosting per se has little benefit when it comes to speed of deployment. (Well, maybe some: it can take days or even a week or two to set up a new server in some corporate data centers. Still, that is a tiny fraction of the typical project schedule.)

If hosting isn't the answer, what can make true “business intelligence on demand” a reality? Since the major obstacle is data preparation, anything that reduces the need for preparation will help. This brings us back to the analytical databases and appliances I’ve been writing about recently: Alterian, Vertica, ParAccel, QlikView, Netezza and so on. At least some of them do reduce the need for preparation because they let users query raw data without restructuring it or aggregating it. This isn’t because they avoid SQL queries, but because they offer a great enough performance boost over conventional databases that neither aggregation nor denormalization is needed to return results quickly.
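To make that concrete, here is a toy example in Python. The table and column names are invented, and sqlite3 merely stands in for a columnar engine like Vertica or ParAccel; the point is that the summary query runs directly against raw transaction detail, so nobody has to design and load an aggregate table first.

    # sqlite3 stands in for a columnar analytical database; the schema
    # and data are invented for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE transactions (
            customer_id INTEGER,
            product     TEXT,
            sale_date   TEXT,
            amount      REAL
        );
        INSERT INTO transactions VALUES
            (1, 'widget', '2008-03-01', 19.95),
            (1, 'gadget', '2008-03-02', 24.50),
            (2, 'widget', '2008-03-02',  9.95);
    """)

    # The same SQL a user might aim at a pre-built summary table, run
    # directly on the detail rows; a fast analytical engine keeps this
    # quick even at very large volumes, which is the whole point.
    for row in conn.execute("""
            SELECT product, COUNT(*) AS orders, SUM(amount) AS revenue
            FROM transactions
            GROUP BY product
            ORDER BY revenue DESC"""):
        print(row)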

Of course, performance alone can’t solve all data preparation problems. The really knotty challenges like customer data integration and data quality still remain. Perhaps some of those will be addressed by making data accessible as a service (see last week’s post). But services themselves do not appear automatically, so a business intelligence application that requires a new service will still need advance planning. Where services will help is when business intelligence users can take advantage of services created for operational purposes.
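For what it's worth, here is a rough sketch of what consuming such a service might look like from the business intelligence side. The endpoint URL and field names are pure invention on my part, standing in for whatever service IT already exposes for operational reasons.

    # A hypothetical operational data service, consumed for BI purposes.
    import json
    from urllib.request import urlopen

    def fetch_customer(customer_id: int) -> dict:
        """Pull one customer record from an (assumed) operational service."""
        url = f"https://example.com/services/customers/{customer_id}"  # invented
        with urlopen(url) as response:
            return json.load(response)

    # A report or dashboard could then use the service directly, with no
    # new data preparation project in between, e.g.:
    #   profile = fetch_customer(42)
    #   print(profile.get("lifetime_value"))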

“On demand business intelligence” also requires that end-users be able to do more for themselves. I actually feel this is one area where conventional technology is largely adequate: although systems could always be easier, end-users willing to invest a bit of time can already create useful dashboards, reports and analyses without deep technical skills. There are still substantial limits to what can be done – this is where QlikView’s scripting and macro capabilities really add value by giving still more power to non-technical users (or, more precisely, to power users outside the IT department). Still, I’d say that when the necessary data is available, existing business intelligence tools let users accomplish most of what they want.

If there is an issue in this area, it’s that SQL-based analytical databases don’t usually include an end-user access tool. (Non-SQL systems do provide such tools, since users have no alternatives.) This is a reasonable business decision on their part, both because many firms have already selected a standard access tool and because the vendors don’t want to invest in a peripheral technology. But not having an integrated access tool means clients must take time to connect the database to another product, which does slow things down. Apparently I'm not the only person to notice this: some of the analytical vendors are now developing partnerships with access tool vendors. If they can automate the relationship so that data sources become visible in the access tool as soon as they are added to the analytical system, this will move “on demand business intelligence” one step closer to reality.
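If I were sketching that automation myself, it might look something like the Python below. Everything about the access tool is invented, and sqlite3 again stands in for the analytical database's catalog; the idea is simply to poll the catalog and register any table the access tool hasn't seen yet.

    import sqlite3  # stand-in for the analytical database

    def list_tables(conn) -> set:
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'")
        return {name for (name,) in rows}

    def sync(conn, known: set, register) -> set:
        """Register any table that appeared since the last poll."""
        current = list_tables(conn)
        for table in sorted(current - known):
            register(table)  # hypothetical access-tool API call
        return current

    conn = sqlite3.connect(":memory:")
    known = sync(conn, set(), register=lambda t: print("registered", t))
    conn.execute("CREATE TABLE q1_sales (region TEXT, revenue REAL)")
    known = sync(conn, known, register=lambda t: print("registered", t))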

* results of a quick Google search: OnDemandIQ, LucidEra, PivotLink (an in-memory columnar database), oco, VisualSmart, GoodData and Autometrics.

Thursday, March 20, 2008

Service Oriented Architectures Might Really Change Everything

I put in a brief but productive appearance at the DAMA International Symposium and Wilshire Meta-Data Conference running this week in San Diego. This is THE event for people who care passionately about topics like “A Semantic-Driven Application for Master Data Management” and “Dimensional-Modeling – Alternative Designs for Slowly Changing Dimensions”. As you might imagine, there aren't that many of them, and it’s always clear that the attendees revel in spending time with others in their field. I’m sure there are some hilarious data modeling jokes making the rounds at the show, but I wasn’t able to stick around long enough to hear any.

One of the few sessions I did catch was a keynote by Gartner Vice President Michael Blechar. His specific topic was the impact of a services-driven architecture on data management, with the general point being that services depend on data being easily available for many different purposes, rather than tied to individual applications as in the past. This means that the data feeding into those services must be redesigned to fit this broader set of uses.

In any case, what struck me was Blechar’s statement that the fundamental way I’ve always thought about systems is now obsolete. To me, systems have always done three basic things: accept inputs, process them, and create outputs. This doesn't apply in the new world, where services are strung together to handle specific processes. The services themselves handle the data gathering, processing and outputs, so these occur repeatedly as the process moves from one service to another. (Of course, a system can still have its own processing logic that exists outside a service.) But what’s really new is that a single service may be used in several different processes. This means that services are not simply components within a particular process or system: they have an independent existence of their own.

Exactly how you create and manage these process-independent services is a bit of a mystery to me. After all, you still have to know they will meet the requirements of whatever processes will use them. Presumably this means those requirements must be developed the old-fashioned way: by defining the process flow in detail. Any subtle differences in what separate processes need from the same service must be accommodated either by making the service more flexible (for example, adding parameters that specify how it will function in a particular case) or by adding specialized processing outside the service. I'll assume that the people who worry about these things for a living recognized this long ago and worked out their answers.
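A toy example may make the distinction clearer. In the Python sketch below, where every name is invented, a single credit-check "service" is reused by two different processes: one relies on a parameter to adjust the service's behavior, the other adds its own specialized logic outside the service.

    def credit_check(customer: dict, threshold: int = 600) -> bool:
        """One service; the threshold parameter adapts it to each process."""
        return customer["credit_score"] >= threshold

    def open_account(customer: dict) -> str:
        # Process A uses the service with its default behavior.
        return "approved" if credit_check(customer) else "declined"

    def extend_credit_line(customer: dict) -> str:
        # Process B reuses the same service with a stricter parameter,
        # plus specialized logic that lives outside the service.
        ok = credit_check(customer, threshold=700) and customer["tenure_years"] >= 2
        return "approved" if ok else "declined"

    customer = {"credit_score": 650, "tenure_years": 3}
    print(open_account(customer))        # approved
    print(extend_credit_line(customer))  # declined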

What matters to me is what an end-user can do once these services exist. Blechar argued that users now view their desktop as a “composition platform” that combines many different services and uses the results to orchestrate business processes. He saw executive dashboards in particular as evolving from business intelligence systems (based on a data warehouse or data mart) to business activity monitoring based on the production systems themselves. This closer connection to actual activity would in turn allow the systems to be more “context aware”—for example, issuing alerts and taking actions based on current workloads and performance levels.

Come to think of it, my last post discussed eglue and others doing exactly this to manage customer interactions. What a more comprehensive set of services should do is make it easier to set up this type of context-aware decision making.

Somewhat along these same lines, Computerworld this morning describes another Gartner paper arguing that IT’s role in business intelligence will be “marginalized” by end-users creating their own business intelligence systems using tools like enterprise search, visualization and in-memory analytics (hello, QlikView!). The four reader comments posted so far have been not-so-politely skeptical of this notion, basically because they feel IT will still do all the heavy lifting of building the databases that provide information for these user-built systems. This is correct as far as it goes, but it misses the point that IT will be exposing this data as services for operational reasons anyway. In that case, no additional IT work is needed to make it available for business intelligence. Once end-users have self-service tools to access and analyze the data provided by these operational services, business intelligence systems would emerge without IT involvement. I'd say that counts as "marginalized".

Just to bring these thoughts full circle: this means that designing business intelligence systems with the old “define inputs, define processes, define outputs” model would indeed be obsolete. The inputs would already be available as services, while the processes and outputs would be created simultaneously in end-user tools. I’m not quite sure I really believe this is going to happen, but it’s definitely food for thought.

Tuesday, March 11, 2008

eglue Links Data to Improve Customer Interactions

Let me tell you a story.

For years, United Parcel Service refused to invest in the tracking systems and other technologies that made Federal Express a preferred carrier for many small package shippers. It wasn’t that the people at UPS were stupid: to the contrary, they had built such incredibly efficient manual systems that they could never see how automated systems would generate enough added value to cover their cost. Then, finally, some studies came out the other way. Overnight, UPS expanded its IT department from 300 people to 3,000 people (these may not be the exact numbers). Today, UPS technology is every bit as good as its rivals’ and the company is more dominant in its industry than ever.

The point of this story—well, one point anyway—is that innovations which look sensible to outsiders often don’t get adopted because they don’t add enough value to an already well-run organization. (I could take this a step further to suggest that once the added value does exceed costs, a “tipping point” is reached and adoption rates will soar. Unfortunately, I haven’t seen this in reality. Nice theory, though.)

This brings us to eglue, which offers “real time interaction management” (my term, not theirs): that is, it helps call center agents, Web sites and other customer-facing systems to offer the right treatment at each point during an interaction. The concept has been around for years and has consistently demonstrated substantial value. But adoption rates have been perplexingly low.

We’ll get back to adoption rates later, although I should probably note right here that eglue has doubled its business for each of the past three years. First, let’s take a closer look at the product itself.

To repeat, the basic function of eglue is making recommendations during real time interactions. Specifically, the system adds this capability to existing applications with a minimum of integration effort. Indeed, being “minimally invasive” (their term) is a major selling point, and does address one of the significant barriers to adopting interaction management systems. Eglue can use standard database queries or Web services calls to capture interaction data. But its special approach is what it calls “GUI monitoring”—reading data from the user interface without any deeper connection to the underlying systems. Back in the day, we used to call this “screen scraping”, although I assume eglue’s approach is much more sophisticated.
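I have not seen inside eglue's technology, so I can only illustrate the ancestor: classic screen scraping reads fields from known positions on a fixed-layout screen, without touching the system underneath. The screen layout and field positions below are entirely invented.

    # A toy fixed-layout "screen" and the positions of its fields.
    SCREEN = (
        "ACCT: 00123456   NAME: JANE DOE        \n"
        "PLAN: GOLD       BAL : 00042.17        \n"
    )

    FIELDS = {            # (line, start, end) position of each field
        "account": (0, 6, 14),
        "name":    (0, 23, 40),
        "plan":    (1, 6, 14),
        "balance": (1, 23, 40),
    }

    def scrape(screen: str) -> dict:
        lines = screen.splitlines()
        return {key: lines[ln][start:end].strip()
                for key, (ln, start, end) in FIELDS.items()}

    print(scrape(SCREEN))
    # {'account': '00123456', 'name': 'JANE DOE', 'plan': 'GOLD', 'balance': '00042.17'}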

As eglue captures information about an ongoing interaction, it applies rules and scoring models to decide what to recommend. These rules are set up by business users, taking advantage of data connections prepared in advance by technical staff. This is as it should be: business users should not need IT assistance to make day-to-day rule changes.

On the other hand, a sophisticated business environment involves lots of possible business rules, and business users only have so much time or capacity to understand all the interconnections. Eglue is a bit limited here: unlike some other interaction management systems, it does not automatically generate recommended rules or update its scoring models as data is received.

This may or may not be a weakness. How much automation makes sense is a topic of heated debate among people who care about such things. User-generated rules are more reliable than unsupervised automation, but they also take more effort and can’t react immediately to changes in customer behavior. I personally feel eglue’s heavy reliance on rules is a disadvantage, though a minor one. eglue does provide a number of prebuilt applications for specific tasks, so clients need not build their own rules from scratch.

What impressed me more about eglue was that its rules can take into account not only the customer’s own behavior, both during and before the interaction, but also the local context (e.g., the current workload and wait times at the call center) and the individual agent on the other end of the phone. Thus, agents with a history of selling a particular product well receive more recommendations for that product. Or the system could restrict recommendations for complex products to experienced agents who are capable of handling them. eglue rules can also alert supervisors about relationships between agents and interactions. For example, it might tell a supervisor when a high-value customer is talking to an inexperienced agent, so she can listen in and intervene if necessary.
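eglue has not published its rule language, but the flavor of such rules is easy to sketch. In the Python toy below, the field names and thresholds are my own invention; it simply combines customer value, agent track record and call center workload, as described above.

    def recommend(customer: dict, agent: dict, context: dict) -> list:
        offers = []
        # Customer behavior: high-value callers qualify for the premium offer...
        if customer["lifetime_value"] > 10_000:
            # ...but only through agents with a record of selling it well.
            if agent["premium_close_rate"] > 0.15:
                offers.append("premium upgrade")
        # Local context: suppress cross-sells when the queue is backed up.
        if context["avg_wait_seconds"] < 120:
            offers.append("free shipping cross-sell")
        return offers

    print(recommend(
        customer={"lifetime_value": 25_000},
        agent={"premium_close_rate": 0.22},
        context={"avg_wait_seconds": 90},
    ))  # ['premium upgrade', 'free shipping cross-sell']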

The interface for presenting the recommendations is also quite appealing. Recommendations appear as pop-ups on the user’s screen, which makes them stand out nicely. More important, they provide a useful range of information: the recommendation itself, selling points (which can be tailored to the customer and agent), a mechanism to capture feedback (was the recommended offer presented to the customer? Did she accept or reject it?), and links to additional information such as product features. There is an option to copy information into another application—for example, saving the effort to type a customer’s name or account information into an order processing system. As anyone who has had to repeat their phone number three times during the course of a simple transaction can attest—and that would be all of us—that feature alone is worth the price of admission. The pop-up can also show the business rule that triggered a recommendation and the data that rule is using.

As each interaction progresses, eglue automatically captures information about the offers it has recommended and customer response. This information can be used in reports, applied to model development, and added to customer profiles to guide future recommendations. It can also be fed back into other corporate systems.

The price for all this is not insignificant. Eglue costs about $1,000 to $1,200 per seat, depending on the details; at the 250-to-300-seat call centers eglue targets, that works out to roughly $250,000 to $360,000 per deployment. However, this is probably in line with competitors like Chordiant, Pegasystems, and Portrait Software. Indeed, eglue’s 30+ clients are all Fortune 1000 firms, and its largest installation has 20,000 seats. The company’s “GUI monitoring” approach to integration and its prebuilt applications allow it to complete an implementation in a relatively speedy 8 to 12 weeks.

This brings us back, in admittedly roundabout fashion, to the original question: why don’t more companies use this technology? The benefits are well documented—one eglue case study showed a 27% increase in revenue per call at Key Bank, and every other vendor has similar stories. The cost is reasonable and implementation gets easier all the time. But although eglue and its competitors have survived and grown on a small scale, this class of software is still far from ubiquitous.

My usual theory is lack of interest by call center managers: they don’t see revenue generation as their first priority, even though they may be expected to do some of it. But eglue and its competitors can be used for training, compliance and other applications that are closer to a call center manager’s heart. There is always the issue of data integration, but that keeps getting easier as newer technologies are deployed, and it doesn’t take much data to generate effective recommendations. Another theory, echoing the UPS story, is that existing call center applications have enough built-in capabilities to make investment in a specialized recommendation system uneconomic. I’m guessing that answer may be the right one.

But I’ll leave the last word to Hovav Lapidot, eglue’s Vice President of Product Marketing and a six-year company veteran. His explanation was that the move to overseas outsourced call centers over the past decade reflected a narrow focus on cost reduction. Neither the corporate managers doing the outsourcing nor the vendors competing on price were willing to pay more for the revenue-generation capabilities of interaction management systems. But Lapidot says this has been changing: some companies are bringing work back from overseas in the face of customer unhappiness, and managers are showing more interest in the potential value of inbound interactions.

The ultimate impact of interaction management software is to create a better experience for customers like you and me. So let’s all hope he’s right.