Tuesday, May 22, 2018

Adobe's Magento Deal Makes Great Sense

Adobe yesterday announced its purchase of the Magento Commerce platform, a widely used ecommerce system, for a cool $1.68 billion.

That Adobe would purchase an ecommerce system was the least surprising thing about the deal: it fills an obvious gap in the Adobe product line compared with Oracle, Salesforce, IBM, and SAP, which all have their own ecommerce systems. Owler estimates that Magento had $125 million in revenue, which would mean that Adobe paid 13x revenue. That seems crazy, but Salesforce paid $2.8 billion for Demandware in 2016 on $240 million revenue, giving a similar ratio of 12x. It’s just what these things cost these days.
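For anyone who wants to check the arithmetic, the multiples work out roughly as stated, using the deal values and revenue estimates cited above (a quick Python sketch):

```python
# Revenue multiples implied by the two deals, per the figures in this post.
magento_multiple = 1.68e9 / 125e6      # Adobe paid $1.68B; Owler pegs revenue at ~$125M
demandware_multiple = 2.8e9 / 240e6    # Salesforce paid $2.8B on ~$240M revenue

print(f"Magento: {magento_multiple:.1f}x revenue")       # about 13.4x
print(f"Demandware: {demandware_multiple:.1f}x revenue")  # about 11.7x
```

Both land in the low teens, which is the point: these prices are typical for the category, not an Adobe-specific splurge.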

More surprising was the mismatch between the two businesses’ client bases. Magento sells primarily to small and mid-size firms, while Adobe’s Experience Cloud products are sold mostly to enterprises. The obvious question is whether Adobe will try to use Magento as an entry point to sell Experience Cloud products to smaller firms, or use Experience Cloud as an entry point for selling Magento to big enterprises. The easy answer is “both”, and that’s more or less what the company said when asked that question on an analyst conference call about the deal. But my impression was they were more focused on adding Experience Cloud capabilities like Sensei AI to Magento. References during the call to cloud-based micro-services also suggested they saw the main opportunity as enhancing the product Magento offers in the mid-market, not selling Magento to big enterprises.

This could be very clever. Selling enterprise software packages to mid-market firms doesn’t work very well, but embedding enterprise-class micro-services would let Adobe add advanced features without asking mid-market IT managers or business users to do more than they can handle. It would also nicely skirt the pricing problems that come from trying to make enterprise software affordable to smaller firms without cutting prices to large enterprises.

The approach is also consistent with the Adobe Experience Cloud Profile announced last month, which uses an open source customer data model co-developed with Microsoft and is hosted on Microsoft Azure. This is also at least potentially suitable for mid-size firms, a market where Microsoft’s CRM products are already very strong. So we now see two recent moves by Adobe that could be interpreted as aimed at penetrating the mid-market with its Experience Cloud systems. Given the crowded, competitive, and ultimately limited nature of the enterprise market, moving downstream makes a lot of sense. Historically, it’s been very hard to do that with enterprise software but it looks like Adobe has found a viable path.

(As an aside: it would make total sense for Microsoft to buy Adobe, a possibility that has been mentioned for years. There’s no reason to think Adobe wants to be bought and the stock already sells at over 16x revenue compared with 8x revenue for Microsoft. So it would be hard to make the numbers work. But still.)

Perhaps the most intriguing aspect of the deal is that Magento is based on open source. This isn’t something that most enterprise software vendors like to buy, since an open source option keeps prices down. Like other open-source-based commercial products, Magento includes proprietary enhancements to justify paying for something that would otherwise be free. Apparently Adobe feels these offer enough protection, especially among mid-size and larger clients, for Magento to be a viable business. And Adobe’s comments show it’s very impressed by the size of the open source community supporting Magento, which it pegs at more than 300,000 developers. That does seem like a large workforce to get for more-or-less free. Again, there’s a parallel with the open source data model underlying Experience Cloud Profile. So Adobe seems to have embraced open source much more than its main competitors.

Finally, I was struck by Adobe’s comments in a couple of places that it sees Magento as the key to making “every experience shoppable”, an extension of its promise to make every experience personal. The notion is that commerce will be embedded everywhere, not just isolated in retail stores or Web sites. I’m not sure I really want to live in a world where everything I see is for sale, but that does seem to be where we’re headed. So, at least from a business viewpoint, let’s give Adobe credit for leading the way.




Tuesday, May 08, 2018

Will GDPR Burst the Martech Bubble?

Some people have feared (or hoped) that the European Union’s General Data Protection Regulation would force major change in the marketing and advertising ecosystems by shutting off vital data flows. I’ve generally been more sanguine, suspecting that some practices would change and some marginal players would vanish but most businesses would continue pretty much as they are. The most experienced people I’ve spoken with in recent days have had a similar view, pointing to previous EU privacy regulations that turned out to be mostly toothless.

But even though I respect those experienced opinions, I’m beginning to wonder whether GDPR might have a much greater impact than most of us think. The reason isn’t that GDPR requires major changes in how data is collected or used: by and large, consumers can be expected to grant consent without giving it much thought and most accepted industry practices actually fall within the new rules. Nor will the limited geographic reach of GDPR blunt its impact: it looks like most U.S. firms are planning to apply GDPR standards worldwide, if only because that’s so much easier than applying different rules to EU vs non-EU persons.

What GDPR does seem to be doing is creating a shake-out in the data supply chain as big companies reduce their risks by limiting the number of partners they’ll work with. The best example is Google’s proposed consent tool for publishers, which limits consent to no more than twelve data partners. This would inevitably lead to smaller firms being excluded from data acquisition. Some see this as a ploy by Google to hobble its competitors, and maybe they're right. But the real point is that asking people to consent to even a dozen data sharing options is probably not going to work. So even though publishers are free to use other consent tools, there’s a practical limit on the number of data partners who can succeed under the new rules.

A similar example of market-imposed discipline is the contract terms proposed by media buying giant GroupM, which require publishers to grant rights they might prefer to keep. GroupM may have the market power to force agreement to its terms, but many smaller businesses will not. With less legal protection, those smaller firms will need to be more careful about the publishers they work with. Conversely, advertisers need to worry about using data that wasn’t acquired properly or has been mistreated somewhere along the supply chain before it reached them. Since they can’t verify every vendor, many are considering cutting off smaller suppliers. Again, the result is many fewer viable firms as a handful of big companies survive and everyone else is shut out of the ecosystem. (Addendum: see this Marketing Week article about data supplies being reduced, published the day after I wrote this post.)

There’s nothing surprising about this: regulation often results in industry consolidation as compliance costs make it impossible for small firms to survive. The question I find more intriguing is slightly different: will a GDPR-triggered reduction in data processing ramify through the entire adtech and martech ecosystem, causing the long-expected collapse of industry growth?

So far, as uber-guru Scott Brinker recently pointed out, every prediction of consolidation has been wrong.  Brinker argues that fundamental structural features – including low barriers to entry, low operating costs of SaaS, ever-changing needs, micro-services architectures, and many more – favor continued growth (but carefully avoids making any prediction).  My simplistic counter-argument is that nothing grows forever and sometimes one small jolt can cause a complex system to collapse. So something as seemingly trivial as a reluctance of core platforms to share data with other vendors could not only hurt those vendors, but vendors that connect with them in turn. The resulting domino effect could be devastating to the current crop of small firms while the need to prove compliance could impose a major barrier to entry for new companies.

I can’t say how likely this is. There’s a case to be made that GDPR will have a more direct impact on adtech than martech and adtech is particularly ripe for simplification.  You could even note that all my examples were from the adtech world. But it’s always dangerous to assume trends will continue indefinitely and it’s surely worth remembering that every bubble is accompanied by claims that “this time is different”. So maybe GDPR won’t have much of an impact. But I suspect its chances of triggering a slow-motion martech consolidation are greater than most people think.



Monday, May 07, 2018

The Black Mirror Episode You'll Never See

I’m no fan of the TV show Black Mirror – the plots are obvious and the pace is excruciatingly slow. But nevertheless, here’s a story for consideration.

Our tale begins in a world where all data is stored in the cloud. This means people don’t have their own computers but can instead log into whatever machine is handy wherever they go.

All is lovely until our hero one day notices a slight error in some data. This is supposed to be impossible because the system breaks every file into pieces that are replicated millions of times and stored separately, blockchain-style. Any corruption is noted and outvoted until it’s repaired.

As he investigates, our hero finds that changes are in fact happening constantly. The system is infected with worms – we’ll call them snakes, which has nice Biblical overtones about corruption and knowledge – that move from node to node, selectively changing particular items until a new version becomes dominant. Of course, no one believes him and he is increasingly ignored because the system uses a reputation score to depreciate people who post information that varies from the accepted truth. Another security mechanism hides “disputed” items when they have conflicting values, making it harder to notice any changes.

I’m not sure how this all ends. Maybe the snakes are controlled by a master authority that is altering reality for its own purposes, which might be benevolent or not. The most likely result for our hero is that he’s increasingly shunned and ultimately institutionalized as a madman. Intuitively, I feel the better ending is that he ends up in a dreary-but-reality-based society of people who live outside the cloud-data bubble. Or perhaps he himself has been sharded and small bits begin to change as the snakes revise his own history. I can see a sequence of split-second images that illustrate alternate versions of his story co-existing. Perhaps the best ending is one that implies the controllers have decided the episode itself reveals a truth they want to keep hidden, so they cut it off in mid

Tuesday, May 01, 2018

Facebook, Privacy, and the Future of Personalization

Readers of this blog and the CDP Institute newsletter know that I’ve been fussing for years about privacy-related issues with Facebook, Google, and others. With the issue now attracting much broader public attention, I’ve backed off my own coverage. It’s partly because people can now get the information without my help and partly because there’s so much news that covering it would consume too much precious reader attention. But, ironically, the high level of noise around the topic also means that some of the smaller but significant stories get lost.

I’ll get to covering those in a minute. But first a general observation: the entirely coincidental convergence of the Facebook/Cambridge Analytica story and implementation of the European Union’s General Data Protection Regulation (GDPR) seems to have created a real possibility of changes to privacy policies everywhere, and most particularly in the United States. In a nutshell, the Facebook news has made people aware of how broadly their data is shared and GDPR has shown them it doesn’t have to be this way. Until now, few people in the U.S. really seemed to care about privacy and it seemed unlikely that they would overcome the resistance of commercial interests who largely determine what happens in the government. (Does that make me sound horribly cynical? So be it.) It’s still very much uncertain whether any significant change will take place in U.S. laws or regulatory agencies. But that there is any significant chance at all is brand new.

So much for that. Just wanted to get it on the record so I can point to it in case something actually happens. Here are some developments on the Facebook / Walled Garden / Privacy fronts that you might have missed.

More Bad News

One result of the heightened interest in these issues is that public agencies, academics, and especially the media are now looking for stories on the topic. This in turn means they find things that were probably always there but went unreported. So we have:

- CNN discovers that ads from big brands are still running on YouTube channels of extremist groups.
This has been a known problem forever, so the fact that it gets reported simply means that journalists chose to look for it and decided people would be interested in the results.

- Washington Post finds paid reviews are common on Amazon, despite being officially banned.  Again, this comes under the heading of “things you could always find if you bothered to try”.

- Journalism professor Young Mie Kim found that fully half the groups running political advertising on Facebook during the 2016 election couldn’t be traced.  Kim started her research before the current news cycle and it was probably accepted for publication before then too. But would Wired have picked it up?

- PricewaterhouseCoopers’ FTC-mandated privacy review of Facebook in 2017 failed to uncover the Cambridge Analytica breach.  It’s more evidence for the already-obvious fact that current privacy safeguards don’t work. But it never would have seen the light of day if this hadn’t been a hot issue.

Attacks from All Sides

Politicians, government agencies, and business rivals are all trying to gain advantage from the new interest in privacy.

- Immediately after the Zuckerberg hearings in Congress, two Senators introduced a bill to give consumers more rights over their data.  The language was highly reminiscent of GDPR.

- A group of 31 state attorneys general opposed a bill to create a Federal law with standards for reporting about data breaches, fearing that Federal standards would override more stringent state regulations. Of course, this is exactly what the sponsors intend. But now the state AGs are more motivated to resist.

- The Securities and Exchange Commission (SEC) fined Yahoo $35 million for failing to disclose a 2014 data breach involving over 500 million accounts.  Data protection isn’t usually an SEC concern, so it’s equally interesting that they chose to make it an issue (arguing the breach was news that should have been shared with investors, which seems a bit of a stretch) and the Republican-majority Federal Trade Commission is steadfastly unengaged.

- Four major publisher trade groups have attacked Google’s proposed approach to gathering advertising consent, which places the burden on the publishers but requires them to share user data.  This would have been an issue under any circumstances, but I suspect that publishers are emboldened to resist by the expanded interest in privacy and greater hostility to Google, Facebook, et al.

Scrambling by Facebook

Facebook has been scrambling to redeem itself, although it has so far avoided changes that would seriously (or even slightly) impact its business.

- It has ended a program to target ads using data from external compilers, such as Acxiom.  How this helps privacy isn’t clear but it sounds good and conveniently makes Facebook’s own data even more valuable.

- It announced major API changes that limit the amount of data shared with developers.  Note carefully: they’re not limiting data collected by Facebook, but only how much of that is shared with others. Similar changes applied to Facebook-owned Instagram. Again, the actual effect is to add value to ads sold by Facebook itself.

- It announced just today that it will let members block it from collecting data about their visits to non-Facebook Web sites.  By now you see a pattern: less data from outside of Facebook makes Facebook data more important. This reflects perhaps the most disturbing revelation from the Zuckerberg hearings: that Facebook collects such data even on non-members. But the change doesn’t address that issue, since only members can tell Facebook to stop the data collection. If you find this confusing, that’s probably no accident.

- It promised to add an “unsend” feature to Messenger.  Nice, but it only happened after reports that Facebook executives themselves already had this capability.

- It rolled out a new centralized privacy center that made settings easier to manage but apparently didn’t change what users can control.

- More substantively, it promised to apply GDPR consent rules globally.  Signals were a bit mixed on that one but maybe it will happen. Who wants to start a betting pool?

- It dropped opposition to a proposed consumer privacy law in California.  Good but it would have been a public relations disaster to continue opposing it. And who knows what they’re doing in private?

- On the Google front: Google-owned YouTube has touted its efforts to flag objectionable videos.  That’s not exactly a privacy issue but probably overlaps the public perception of how online tech giants impact society. Remember they’re also motivated by tough laws in Germany and France enacted early this year, which require them to remove illegal content within 24 hours.

Business as Usual for Everyone Else

How much of this is unique to Facebook and how much reflects a fundamental change in attitudes towards data privacy? Certainly Google, Amazon, and others are tip-toeing quietly in the background hoping not to be noticed. Per the above, YouTube has occasionally wandered into the spotlight, especially when extremist videos on YouTube intersect with extremist content on Facebook. Overall, I’d say it’s very much business as usual for most firms that gather, sell, and employ consumer data.

- Amazon continues to offer amazingly intrusive concepts with little evidence of pushback. For example, they’re expanding their Amazon Key in-home delivery program to also leave packages in your car.  And they continue to expand the capabilities of Alexa ‘smart speaker’ (a.k.a. ‘always listening’) systems, most recently by making it easier for people to build their own custom capabilities into the system.

- Similarly, Waze has been merrily promoting its ability to share data about traffic conditions, setting up any number of integrations such as deals with Carto and Waycare to help traffic planning and, in Waycare’s case, warn drivers about current road conditions. Waze’s data is truly anonymized, at least so far as we know. But they certainly don’t seem to be worried about a general privacy backlash.

- Another announcement that raised at least my own eyebrows was this one from Equifax, which headlined the blending of consumer and commercial data to predict small business credit risks. Anything that suggests personal data is being used for business purposes could worry people – but apparently that doesn’t worry Equifax marketers.

What Do Consumers Think?

The big question in all this is whether consumers (should we just call them “people”?) remain concerned about privacy or quickly fall back into their old, carefree data sharing ways. It’s probably worth noting that Facebook was already uniquely distrusted compared with Google and Amazon, both by consumers and small business.

We do know that most have been following the Cambridge Analytica story in particular. But, to their credit, they also recognize that what they post on Facebook is public even if they don’t necessarily understand just how much tracking really takes place.

Sure enough, it seems that few Facebook users actually plan to close their account and, more broadly, there’s little support for government regulation of social media.

Indeed, most consumers are generally comfortable with sharing personal information so long as they know how it will be used. 

Surveys do show that EU consumers say they’ll exercise their privacy rights under GDPR, but it’s reasonable to wonder how many will follow through. After all, they’re notably lax on other cybersecurity issues such as changing default passwords on home networks.

But this doesn’t mean that Facebook and similar firms are home free. Consumers are smart enough to distrust recommendations from smart speakers, as indeed they should be.
They’re also not terribly enthusiastic about ads on smart speakers or, indeed, about personalization in general.
On the other hand, of course, many studies do show that consumers expect personalized experience, although there’s some reason to suspect marketers overestimate its importance compared with other aspects of the customer experience.

This matters because personalized experiences are the main public justification that marketers give for gathering personal data – so consumers who increase the value they place on privacy could quickly reach a tipping point where privacy outweighs the benefits of personalization. That could radically shift how much data marketers collect and what they can do with it. Given the dire consequences that would have for today’s marketing ecosystem, everyone involved must do as much as possible to make sharing data genuinely safe and worthwhile.

Friday, April 27, 2018

What I Learned at the Customer Data Platform Workshop

I ran a four-hour workshop on Customer Data Platforms this week at the MarTech Conference in San Jose.* The attendees were a mix of marketers and technologists from brands, agencies, and vendor companies. We had surveyed them in advance and found, not surprisingly, goals ranging from understanding CDP market trends to optimizing data loads for technical performance. The agenda was correspondingly varied and I like to think that everyone learned something useful.  Based on attendee comments and my own observations, here’s what I myself learned.

- CDP is a vague category. This was voiced with some frustration at the end of the workshop, when several people said they had hoped to come away with a clear picture of what is and isn’t a CDP, but found instead that CDP systems differ widely. In the context of the workshop, I actually considered this to be a positive result: one of the main points I tried to get across was that CDPs have very different features and picking the right one requires you to first understand your own needs and then look carefully at which systems have the features needed to meet them.  Complaining about it is like going to a workshop on car buying and discovering that automobiles differ widely: if you didn't understand that before, you couldn’t possibly have made a sound choice. The variety may seem overwhelming but once you recognize it exists, you’re ready to take the next step of figuring out how to find the capabilities that match your needs.

- People want CDP-specific use cases. I knew in advance that people want to understand CDP use cases. This has become a very common question in the past year and the CDP Institute Library includes many papers on the topic. My personal problem has been that CDPs are like bacon: they make everything better. This made it seem silly to list use cases, because the list would include pretty much any marketing project that involves data. What I learned from the workshop is people are really looking for use cases that only become possible with a CDP. That’s a much different and more specific question: What can I do with a CDP that I can’t do without one?

We discussed the answers as a group at the end of the workshop and the main conclusion was that a CDP makes possible many cross-channel activities that are otherwise impossible because cross-channel data isn't unified.  This isn’t exactly news – unified customer data is the whole point of a CDP – but it’s still good to focus specifically on the use cases that unification makes possible.

On reflection, I’d add that a CDP also exposes data that’s otherwise trapped in source systems or not collected at all. This could be information from a single channel, so it’s distinct from the cross-channel use case. Our workshop group didn’t mention this one, so I’ll have to stress it more in the future.

The group also didn’t list the operational efficiencies of a CDP as unique benefits. That’s interesting because so much of our discussion stressed the lower cost, faster deployment, and lower risk of CDP compared with other solutions. Apparently that’s either not credible or not important. I’ll speculate that the technicians didn’t believe it and the marketing people didn’t really care. But of course that’s utterly unsupported stereotyping. (Speaking of stereotyping, I’m pretty sure the technical people sat in the back rows and the marketers talked a lot more during the small group discussions.  Next time I'll make them wear labels so I know for sure.)

- Marketers don’t care about technical details. Ok, that's really unfair stereotyping so let's change it to “some marketers”.  But it’s definitely fact-based: one of the marketers complained as we started to drill into the technical parts and several others agreed. I pushed back a bit, arguing that you can’t make a sound system selection without looking at technical differences. I think I was polite about it, but I have strong feelings on the subject: lack of research into specific product capabilities is by far the biggest reason people end up unhappy with a software purchase. (Yes, I have research to back that up.)

I suppose the counter-argument is that what really matters are the functional differences, not the technical methods used to accomplish them. My counter-counter-argument would be that the technical methods matter because they determine how efficiently a system executes those functions and how easily it can extend them. Architecture is destiny, as it were.  In my mind, the argument ends there and I win, but maybe there’s more to be said for the other position. (In case you’re wondering, I did speed through the technical parts after that objection, and talked more about use cases instead. Squeaky wheels get the grease. And there was a later part of the agenda that circled back to technical questions anyway.)

So, that’s what I learned during the workshop. As you might imagine, preparing it forced me to think through several topics that I’ve been addressing casually. I’m most pleased with having clarified the relationships among strategy, marketing programs, use cases, resources, and requirements. The image below summarizes these: as you see, strategy drives marketing programs which drive resource needs**, while marketing programs drive use cases which drive system requirements. Those are two sets of objects that I usually discuss separately, so I’m happy to finally connect them. Plus, I think it’s a cute picture. Enjoy.



_______________________________________________________________________________________
* I'll likely be repeating it elsewhere over the next few months.  Let me know if you're interested in attending.

** The flow can also run the other way: available resources determine what marketing programs you can run which determine what strategy makes the most sense.




Friday, April 13, 2018

Building Trust Requires Innovation

Trust has been chasing me like a hungry mosquito. It seems that everyone has suddenly decided that creating trust is the key to success, whether it’s in the context of data sharing, artificial intelligence, or customer retention. Of course, I reached that conclusion quite some time ago (see this blog from late 2015) so I’m pleased to have the company.   But I’m also trying to figure out where we all need to go next.

I picked up a book on trust the other day (haven’t gotten past the introduction, so can’t say yet whether I’d recommend it) that seems to argue the problem of trust is different today because trust was traditionally based on central authority but authority itself has largely collapsed. The author sees a new, distributed trust model built on transparent, peer-based reputation (think Uber and Airbnb)* that lets people confidently interact with strangers. The chapter headings suggest she ends up proposing blockchain as the ultimate solution. This seems like more weight than any technology can bear and may just be evidence of silver bullet syndrome.   But it does hint at why blockchain has such great appeal: it’s precisely in tune with the anti-authority tenor of our times.

From a marketer’s perspective, what’s important here is not that blockchain might provide trust but that conventional authority certainly cannot. This means that most trust-building methods marketers naturally rely on, which are based in traditional authority, probably won’t work. Things like celebrity endorsements, solemn personal promises from the CEO, and references to company size or history carry little weight in a hyper-skeptical environment. Even consumer reviews and peer recommendations are suspect in a world where people don’t trust that they’re genuine. What’s needed are methods that let people see things for themselves: a sort of radical transparency that doesn’t require relying on anyone else’s word, including (or perhaps especially) the word of “people just like me”.

One familiar example is comparison shopping engines that don’t recommend a particular product but  make it easy for users to compare alternatives and pick the option they like best. A less obvious instance would be a navigation app that shows traffic conditions and estimated times for alternate routes: it might present what it considers the best choice but also makes it easy for the user to see what’s happening and, implicitly, why the system’s recommendation makes sense. Other examples include package tracking apps that remove uncertainty (and thus reduce anxiety) by showing the movement of a shipment towards the customer, customer service apps that track the location of a repair person as he approaches for a service call, or phone queue systems that estimate waiting time and state how many customers are ahead of the caller.  A determined skeptic could argue that such solutions can't be trusted because the systems themselves could be dishonest.  But any falsehoods would quickly become apparent when a package or repair person didn’t arrive as expected, so they are ultimately self-validating.

Of course, many activities are not so easily verified. Claims related to data sharing are high on that list: it’s pretty much impossible for a customer to know how their data has been used or whether it has been shared without their permission. This is why the European Union’s approach to privacy in the General Data Protection Regulation (GDPR) makes so much sense: the rules include a requirement to track each use of personal data, documentation of authority for that use, and a right of individuals to see the history of use. That’s a very different attitude from the U.S. approach, which has much looser consent requirements and no individual rights to review companies' actual behaviors.  In other words, the EU approach creates a forced transparency that builds trust, especially since false information would be a legally-punishable offense.

There’s a slender chance that the GDPR approach will be adopted in the U.S. in the wake of Facebook’s Cambridge Analytica scandal, although the odds are greatly against it. More likely, companies that are not Facebook will unite to oppose any legislation, even if Facebook itself sits on the sidelines. (That’s exactly what’s happening right now in California.)  The more intriguing possibility is that Facebook alone will adopt GDPR policies in the U.S. – as it has not very convincingly promised – and that this will pressure other companies to do the same.  Color me skeptical on that scenario: Facebook will probably renege once public attention turns elsewhere and few consumers will stop using services they enjoy due to privacy considerations.  In fact, if you look closely at studies of consumer attitudes, what you ultimately see is that consumers don’t really put a very high value on their personal data or privacy in general.

What does scare them is identity theft, so it’s just possible that regulations addressing that issue might provide privacy protections as a bonus. That’s especially true if consumers decide they don’t trust the government to enforce data protection standards but, following the distributed authority model, instead demand transparency so they can verify compliance to “self-enforce” the rules for themselves.  Yet this too is a long shot: few current political leaders or privacy activists are likely to adopt so subtle a strategy.

In short, the government won't solve the trust problem for marketers, so they'll need to find their own solutions.  This means they have to devise trust building measures, convince their companies to adopt them, and then educate customers about how they work.  This is an especially hard challenge because the traditional, authority-based methods of gaining trust are no longer effective. Finding effective new methods is an opportunity for innovation and competitive advantage, which are fun, and for long hours and failed experiments, which are less fun but part of the package. Either way, you really have no choice: as that mosquito keeps telling me, trust is essential for success in today’s environment.

________________________________________________________________________
* both firms with corporate trust issues of their own, ironically.

Wednesday, March 28, 2018

Adobe Adds Experience Cloud Profile: Why It's Good News for Customer Data Platforms


"A CDP by any other name still stores unified customer data."
Adobe on Tuesday announced the Experience Cloud Profile, which it described as a “complete, real-time view of customers” including data from outside of Adobe Cloud systems. The announcement was frustratingly vague but some ferreting around* uncovered this blog post by Adobe VP of Product Engineering Anjul Bhambhri, who clarified that (a) the new product will persistently store data ingested from all sources and (b) perform the identity stitching needed to build a meaningfully unified customer view. Adobe doesn’t use the term Customer Data Platform but that’s exactly what they’ve described here. So, unlike last week's news that Salesforce is buying MuleSoft, this does have the potential to offer a viable alternative to stand-alone CDP products.

Of course, the devil is in the details but this is still a significant development. Adobe’s offering is well thought out, including not just an Azure database to provide storage but also an open source Experience Data Model to simplify sharing of ingested data and compatible connectors from SnapLogic, Informatica, TMMData, and Microsoft Dynamics to make dozens of sources immediately available. Adobe even said they’ve built in GDPR-required controls over data sharing, which is a substantial corporate pain point and key CDP use case.

The specter of competition from the big marketing clouds has always haunted the CDP market. Salesforce’s MuleSoft deal was a dodged bullet but the Adobe announcement seems like a more palpable hit.** Yet the blow is far from fatal – and could actually make the market stronger over time. Let me explain.

First the bad news: Adobe now has a reasonable product to offer clients who might otherwise be frustrated by the lack of integration of its existing Experience Cloud products. This has been a substantial and widely recognized pain point. Tony Byrne of the Real Story Group has been particularly vocal on the topic. The Experience Cloud Profile doesn’t fully integrate Adobe’s separate products, but it does seem to let them share a rich set of customer data. That’s exactly the degree of integration offered by a CDP. So any Adobe client interested in a CDP will surely take a close look at the new offering.

The good news is that not everyone is an Adobe client. It’s true that the Cloud Profile could in theory be used on its own but Adobe would need to price it very aggressively to attract companies that don’t already own other Adobe components. That could of course be an excellent acquisition strategy but we don’t know if it’s what Adobe has in mind. (I haven’t seen anything about Cloud Profile pricing but it’s a core service of the Adobe Experience Platform, which isn’t cheap.) What this means is that Adobe is now educating the market about the value of a persistent, unified, comprehensive, open customer database – that is, about the value of CDPs. This should make it much easier for CDP vendors to sell their products to non-Adobe clients and even to compete with Adobe to deliver CDP functions to Adobe’s own clients.

I’ll admit I have a vested interest in the success of the CDP market, as inventor of the term and founder of the CDP Institute. So I’m not entirely objective here. But as CDP has climbed to the peak of the hype cycle, I’ve been exquisitely aware that it has no place to go but down – and that this is inevitable. The best CDP vendors can hope for is to exchange being a “hot product” for being an established category – something that people recognize as a standard component of a complete marketing architecture, alongside other components such as CRM, marketing automation, and Web content management. I’ve long felt that the function provided by CDP – a unified, persistent, sharable customer database – fills a need that won’t go away, regardless of whether the need is filled by stand-alone CDPs or components of larger suites like Adobe Experience Cloud. In other words, the standard diagram will almost surely include a box with that database; the question is whether the label on that box will be CDP. Adobe’s move makes it more likely the diagram will have that box. It’s up to the CDP industry to promote their preferred label.



________________________________________________________________________
*okay, the first page of a Google search. No Pulitzer Prize for this one.
** yes, I’ve just combined references to Karl Marx and William Shakespeare in the same paragraph, garnished with a freshly mixed metaphor. You’re welcome.








Tuesday, March 20, 2018

Salesforce Buys MuleSoft and Offers It as a Data Unification Solution

The Customer Data Platform industry is doing very well, thank you, with new reports out recently from both Gartner and Forrester, and with the CDP Institute launching its European branch. But the great question hovering over the industry has been why the giant marketing cloud vendors haven’t brought out their own products and what will happen when they do. Oracle sometimes acts as if their BlueKai Data Management Platform fills the CDP role, while Adobe has made clear they don’t intend to do more than create a shared ID that can link data stored in its separate marketing applications. Salesforce has generally claimed its Marketing Cloud product (formerly ExactTarget) is a CDP, a claim that anyone with experience using the Marketing Cloud finds laughable.

The flaws in all these approaches have been so obvious that the question among people who understand the issues has been why the companies haven’t addressed them: after all, the problems must be as obvious to their product strategists as to everyone else, and the attention gained by CDP makes the gaps in their product offerings even more glaring. My general conclusion has been that the effort needed to rework the existing components of their clouds is too great for the vendors to consider. Remember that the big cloud vendors built their suites by purchasing separate products. The effort to rebuild those products would be massive and would discard technology those companies spent many billions of dollars to acquire. So rationalization of their existing architectures, along with some strategic obfuscation of their weaknesses, seems the lesser evil.

We got a slightly clearer answer to the question on Tuesday when Salesforce announced a $6.5 billion purchase of MuleSoft, a data integration vendor that provides connectors between separate systems. In essence, Salesforce has adopted the Adobe approach of not pulling all customer data into a single repository, but rather connecting data that sits in its system of origin. In Salesforce’s own words, “MuleSoft will power the new Salesforce Integration Cloud, which will enable all enterprises to surface any data—regardless of where it resides—to drive deep and intelligent customer experiences throughout a personalized 1:1 journey.”

This is a distinct contrast with the CDP approach, which is to load data from source systems into a separate, unified, persistent database. The separate database has some disadvantages – in particular, it can involve replicating a lot of data – but it also has major benefits. These include storing history that may be lost in source systems, using that history to build derived data elements such as trends and aggregates, and formatting the data in ways optimized for quick access by marketing systems and other customer-focused applications.

Although the difference between these two approaches is clear, some practical compromises can narrow the distance between them. Most CDPs can access external data in place, reducing the amount of data to be moved and allowing the system to use current versions of highly volatile information such as weather, stock prices, or product inventories. Conversely, a system like MuleSoft can push data into a persistent database as easily as it can push it to any other destination, so it can build some version of a persistent database. In fact, many CDPs that started out as tag managers have taken this approach.

But pushing data into a persistent database isn’t enough. MuleSoft and similar products work with well-defined inputs and outputs, while CDPs often can accept and store data that hasn’t been mapped to a specific schema. Even more important, I’m unaware of any meaningful ability in MuleSoft to unify customer identities, even using relatively basic approaches such as identity stitching. It’s possible to build workarounds, such as calls to external identity management systems or custom-built matching processes. Again, these are solutions employed by some CDP vendors that also lack advanced identity management. But such solutions can be costly, complex, and incomplete. From a buyer’s perspective, they are compromises at best. No one – except a salesperson – would argue they’re the ideal approach.
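For readers unfamiliar with identity stitching, here’s a minimal Python sketch of its deterministic form: identifiers (email, cookie, phone) that appear together on the same record are merged into a single customer, using a union-find structure. This is an illustration of the general technique only, not any vendor’s actual implementation, and the identifiers are invented.

```python
class IdentityGraph:
    """Deterministic identity stitching: identifiers seen on the same record
    are merged into one customer (union-find over identifier strings)."""
    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, id_a, id_b):
        """A record containing both identifiers ties them to the same person."""
        self.parent[self._find(id_a)] = self._find(id_b)

    def same_person(self, id_a, id_b):
        return self._find(id_a) == self._find(id_b)

g = IdentityGraph()
g.link("email:pat@example.com", "cookie:abc123")  # e.g. a web form submission
g.link("cookie:abc123", "phone:+15550100")        # e.g. a call-center lookup
print(g.same_person("email:pat@example.com", "phone:+15550100"))  # True
```

Real systems add probabilistic matching, fuzzy name comparison, and conflict resolution on top of this, which is exactly the capability that’s hard to bolt on afterward.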

In short, Salesforce’s purchase of MuleSoft offers a partial solution to the needs that have driven the growth of CDPs. It’s probably the best that Salesforce could do without making the impractical investment needed to rebuild its existing marketing cloud components. Get ready for a lot more confusion about the best way to build unified customer data. To avoid getting distracted, focus on what marketers really need and let that, not theory or vendor hype, drive your evaluation of the alternatives.


Monday, March 19, 2018

Picking the Right First Project for Your Customer Data Platform

For the past year, the most common question about Customer Data Platforms has been how they differ from Data Management Platforms. Recently that seems to have changed.  Today, the question everyone seems to be asking is what project they should pick as their first CDP use case.

That’s certainly progress but it’s a much harder question to answer than the one about DMPs. Like any good consultant, I can only answer that question with “it depends” and by then asking more questions. Here are some of the factors that go into a final answer.
  • What resources do you have available? The goal for your initial use case is to get something done quickly that returns substantial value. Getting something done quickly means you want a project that uses existing resources to the greatest degree possible. Ideally, the only new element would be the CDP itself, and even the CDP deployment would use a small number of data sources. So, an ideal first project would use data in existing systems that is well understood, known to be of high quality, and can easily be extracted to feed the CDP. Alternately, the first project might involve new data collected by the CDP itself, such as Web site behaviors captured by the CDP's own page tag. If the first project is purely analytical, such as customer profiling or journey analysis, then you don’t need to worry about connecting to execution systems, although you do need staff resources to properly interpret the data and possibly some analytical or reporting systems. But if you happen to have good execution systems in place, it may make sense for the first project to connect with them. Or, you may pick a CDP that provides its own execution capabilities or can feed lists or offer recommendations to external delivery systems.
  • What use case will provide value? This is where good delivery resources can be helpful: it’s much easier to deliver value with a use case that involves direct customer interaction and, thus, the opportunity to increase revenue or reduce costs. Often this can still be quite simple, such as a change in Web site personalization (involving just one channel for both source and delivery), an event-triggered email, or a prioritized contact list for sales teams. If execution isn’t an option, an analytical project can still be valuable if it presents information that wasn’t previously available. This may mean combining data that was previously kept separate, reformatting data that couldn’t be analyzed in its original form, or simply pulling data from an inaccessible system into an accessible database. The trick here is for the analysis to generate insights that themselves can be the basis for action, even if the CDP isn’t part of the execution process.
  • How much organizational change will be needed? Technical obstacles are often less significant barriers than organizational resistance. In particular, it can be difficult to start with projects that cross lines of authority either within marketing (say, separate Web and email teams) or between marketing and other departments (such as operations or customer support). When considering such changes, take into account the needs to revise business processes, to provide adequate training, to align compensation systems with the new goals, to provide reporting systems that track execution, and to measure the value of results. As a practical matter, the fewer parts of the organization affected by the initial project, the easier it will be to deploy and the higher the likelihood of success.
  • Where’s the pain? It’s tempting to search for an initial project that is primarily easy to deploy. But even an easy project is competing with other demands on company resources in general and on staff and managers’ time in particular. So it’s important to pick a first project that solves a problem that’s recognized as important. If the problem is big enough – and it’s clear the CDP can solve it – then you have a good chance of convincing the company to make a substantial investment from the start. Ultimately, this is the right approach: after all, the CDP isn’t an end in itself, it’s a tool for improving your business. You may see a broad range of applications for your CDP but for those who don’t share that vision, you’ll need to show its value at every step of the way.

Tuesday, March 13, 2018

Eager to Sell Your Personal Data? You'll Have to Wait

Should marketers pay consumers directly to access their personal data? The idea isn’t new but it’s become more popular as people see the huge profits that Google, Facebook, and others make from using that data, as consumers become more aware of the data trade, and as blockchain technology makes low cost micro-payments a possibility.

One result is that a crop of new ventures based on the concept has popped up like mushrooms – which, like mushrooms, can be hard to tell apart. I’ve been mentioning these in the CDP Institute newsletter as I spot them but only recently found time to take a closer look. It turns out that these things I’ve been lumping together actually belong to several different species. None seem to be poisonous but it’s worth sharing a field guide to help you tell them apart.

Before we get into the distinguishing features, let’s look at what these all have in common. They’re all positioned as a way for consumers to get value from their data. I’ve also bumped into a number of data marketplaces that serve traditional data owners, such as Web site publishers and compilers. They can often use some of the same technologies, including micro-payments, blockchain, and crypto-currency tokens. Some even sell personal data, especially if they’re selling ads targeted with such data. Some sell other things, such as streams from Internet of Things devices. Examples of such marketplaces include Sonobi, Kochava, Narrative I/O, Datonics, Rublix and IOTA. Again, the big difference here is the sellers in the traditional marketplaces are data aggregators, not private individuals.

Here’s a look at a half-dozen ventures I’ve lumped into the personal data marketplace category (which I suppose needs a three letter acronym of its own).

Dabbl turns out to be a new version of an old idea, which is to pay people for taking surveys. There are dozens of these: here's a list. Dabbl confused me with a headline that said “Everyone’s profiting from your time online but you.” The payment mechanism is old-school gift cards. On the plus side: unlike most products in this list, Dabbl is up and running.

Thrive pays users for sharing their data, but only in the broad sense that they are paid to fill out profiles which are exposed to advertisers when the users visit participating Web sites. The advertisers are paying Thrive; individual users aren’t deciding who sees their data or being paid to grant access on a buyer-by-buyer basis. Payments are made via a crypto-token which is on sale as I write this. The ad marketplace is scheduled for launch at the end of 2018. That sequence suggests there’s at least a little cryptocurrency speculation in the mix. (Another hint: they’re based in Malta. Yet another hint: the U.S. Securities and Exchange Commission won’t let you buy the tokens.)

Nucleus Vision is also in the midst of its token sale. But they’re much more interested in discussing a proprietary technology that detects mobile phones as they enter a store and shares the owner’s data using blockchain as an exchange, storage, and authorization mechanism. Store owners can then serve appropriate offers to visitors. This sounds like a lot of other products except that Nucleus’ technology does it without a mobile app. (It does apparently need some cooperation from the mobile carrier.) Rewards are paid in tokens which can be earned for store visits, by using coupons or discounts, by making purchases, or by selling data. Each retailer runs its own program, so this isn’t a marketplace where different buyers bid for each consumer’s data. Sensors are currently running in a handful of stores and the loyalty and couponing systems are under development.

Momentum is an outgrowth of the existing MobileBridge loyalty system.  It rewards customers with yet another crypto-token (on sale in late April) for marketer-selected behaviors. Brands can play as well as retailers but it’s still the same idea: each company defines its own program and each consumer decides which programs to join. The shared token makes it easy to exchange or pool rewards across programs. The published roadmap is ambiguous but it looks like they’re at least a year away from delivering a complete system.

YourBlock gets closer to what I originally had in mind: it stores personal data (in blockchain, of course), uses the data to target offers from different companies, and lets consumers decide which offers to accept. Yep, there’s a crypto-token that will be used to give discounts. Sales started yesterday (March 12) and are set to close by April 23. Development work on the rest of the platform will start after the sale is over, with a live product due this August.

Wibson calls itself a “consumer-controlled personal data marketplace” and, indeed, they fit the archetype: users install a mobile app, grant access to their data, and then entertain offers from potential buyers to read it. Storage and sharing are based on blockchain but payments are made via points rather than a crypto-token. At least that’s how it works at the moment: in fact, Wibson has just completed its initial mobile app and you can’t download it quite yet. During the initial stage, only Wibson will be able to buy users’ data and they’ll just use it for testing. If they’ve published a schedule for further development, I can’t find it.

So, that’s our little stroll through the personal data marketplace. Less here than meets the eye, perhaps – most players offer more or less conventional loyalty programs, although they use blockchain and crypto-tokens to deliver them.  True marketplaces are still in development. But it’s still an interesting field and well worth watching. As with mushrooms, look carefully before you bite.

Sunday, March 04, 2018

State of Customer Data Platforms in Europe


The Customer Data Platform Institute will be launching its European branch later this month with a series of presentations in London, Amsterdam and Hamburg. We’ve seen considerable CDP activity in Europe – nearly one quarter of the CDPs in the Institute's latest industry update are Europe-based, several others with European roots have added a U.S. headquarters, and some U.S.-based CDPs have significant European business. A recent analysis of CDP Institute membership also found that one quarter of our individual members are in Europe. So what, exactly, is the state of CDP in Europe?

It’s long been an article of faith on both sides of the Atlantic that the U.S. market is ahead of Europe on marketing technology in general and customer data management in particular. That (plus the larger size of the U.S. market) is why so many European vendors have relocated to the U.S. This study from Econsultancy suggests the difference is overstated if it exists at all: 9% of European companies reported a highly integrated tech stack, barely under the 10% figure for North American companies. North American firms were actually more likely to report a fragmented approach (48% vs 42%), although that was only because European companies were more concentrated in the least advanced category (“little or no cloud based technology”), by 20% vs 13%.


 The assumption that cloud-based technology is synonymous with advanced martech is debatable but, then again, the survey was sponsored by Adobe.  What is clear is that European firms have generally lagged the U.S. in cloud adoption -- see, for example, this report from BARC Research.


Lower cloud use probably hasn’t directly impeded CDP deployment: although nearly all CDPs are cloud-based, a substantial number offer an on-premises option. (The ratio was seven out of 24 in the CDP Institute’s recent vendor comparison report, including nearly all of the Europe-based CDPs.) But the slower cloud adoption may be a hint of the generally slower pace of change among European IT departments, which could itself reduce deployment of CDPs.

A Salesforce survey of IT professionals supports this view. Answers to questions about leading digital transformation, being driven by customer expectations, and working closely with business units all found that U.S. IT workers are slightly but distinctly more business-oriented than their European counterparts. Interestingly, there’s a split within the European respondents: UK and Netherlands are more similar to the U.S. answers than France and Germany. I should also point out that I’ve highlighted questions where the U.S. and European answers were significantly different – there were quite a few other questions where the answers were pretty much the same.



Organizational silos outside of IT are another barrier to CDP adoption. A different Salesforce survey, this one of advertising managers, also found that North American firms are generally more integrated than their European counterparts. The critical result from a martech perspective is North American marketing and advertising departments were much more likely to collaborate on buying technology.



Then again, a Marketo survey found that European respondents (from a mix of IT, marketing, sales, and service departments) were generally more satisfied with their tools and performance, even though they lagged North Americans slightly in innovation and more clearly in strategic alignment with corporate objectives. This isn’t necessarily inconsistent with the previous results: being less integrated with other departments may free the Europeans to pursue their departmental goals more effectively, even if they’re less fully aligned with corporate objectives. Other surveys have given similar results: people are generally happier with technology when they buy it for themselves.



Not surprisingly, one area where the Europeans are clearly ahead is preparation for GDPR: a Spiceworks survey at the start of this year found that 56% of European companies had allocated funds for compliance compared with just 31% of U.S. companies. (Almost half the U.S. respondents believed GDPR wouldn’t affect them, even though GDPR applies globally.) While the result clearly relates to the fact that GDPR is a European Union regulation, it may also reflect a generally higher interest in privacy among European consumers: to take one example, ad blocking is much more common in Europe than the U.S. That’s good news for CDP vendors, since GDPR has emerged as one of their primary use cases.



On the other hand, a survey from Aspect found that U.S. consumers are generally more demanding than Europeans about customer service: they care more about having a choice of service channels, are more willing to pay extra for good service and are quicker to stop buying after a poor experience. This is probably bad news for European CDP vendors, since unified customer data is a foundation for modern customer service.



In sum, things really are a bit different in Europe. Integration, the primary CDP use case, is lagging compared to the U.S., so it makes sense that CDP adoption is also lagging. But GDPR may be changing the equation and consumer attitudes are certainly adding external pressure. The need for CDP is growing and we hope the CDP Institute’s European operations will make it a little easier for European companies to find the right solutions.


Friday, February 23, 2018

Will CDP Buyers Consider Private Clouds as On-Premise Deployment?

Most Customer Data Platforms are Software as a Service products, meaning they run on servers managed by the vendor. But some clients prefer to keep their data in-house. So before releasing the CDP Vendor Comparison report – now available here – I added a line for on-premises deployment.

This seemed like a perfect fit: a clear yes/no item that some buyers consider essential. But it turned out to raise several issues:

- on-premises vs on-premise. I originally used “on-premise”, which is how the term is typically rendered. One of the commenters noted this is a common error. A bit of research showed it’s been a topic of discussion: “on-premise” is now the more widely used form when referring to computer systems, even though “on-premises” is grammatically correct. On-premises actually sounds a bit pedantic to me, but I’m using it to avoid annoying people who care. (Interestingly, no one seems too concerned about whether to use the hyphen. I guess even grammar geeks pick their battles.)

- private clouds. Several vendors argued that on-premises is an old-fashioned concept that’s largely been replaced by private clouds as a solution for companies that want to retain direct control over their systems and data. This resonated: I recalled seeing this survey from 451 Research showing that conventional on-premises [they actually used “on-premise”] deployments now account for just one-quarter of enterprise applications and the share is shrinking.


Percentage of Applications by Venue:
- 24% conventional (on-premise, non-cloud)
- 18% on-premise private cloud
- 15% hosted private cloud
- 14% public cloud
- 13% off-premise non-cloud
Source: 451 Research, Strategy Briefing: Success Factors for Managing Hybrid IT, 2017

My initial interpretation of this was that on-premises private clouds meet the same goals as conventional on-premises deployments, in the sense of giving the company’s IT department complete control. But in discussions with CDP vendors, it turned out that they weren’t necessarily differentiating between on-premises private clouds and off-premises private clouds, which might be running on private servers (think: Rackspace) or as “virtual private clouds” on public clouds (think: Amazon Web Services). Clearly there are different degrees of control involved in each of these, and companies that want an on-premises solution probably have their limits on how far they’ll go in the private cloud direction.

- public clouds. One vendor speculated that most remaining conventional deployments are old systems that can’t be migrated to the cloud. The implication was that buyers who could run a CDP in the cloud would gladly do this instead of insisting on an on-premises configuration. This survey from Denodo suggested otherwise: while it found that 77% of respondents were using a public cloud and 50% were using a virtual private cloud, it also found that 68% are NOT storing “sensitive data” in the public cloud. Presumably the customer data in a CDP qualifies as sensitive. I don't know whether the respondents would consider a “virtual private cloud” as part of the public cloud.  But I think it’s reasonable to assume that a considerable number of buyers reject external servers of any sort as an option for CDP deployment, and that “on-premises” (including on-premises private clouds) is a reasonable term to describe their preferred configuration.




Monday, February 19, 2018

How Customer Data Platforms Help with Marketing Performance Measurement

John Wanamaker, patron saint of marketing measurement.
If you’ve been following my slow progress towards a set of screening questions for Customer Data Platforms, you may recall that “incremental attribution” was on the list. The original reason was that some of the systems I first identified as CDPs offered incremental attribution as their primary focus. Attribution also seemed like a specific enough feature that it could be meaningfully distinguished from marketing measurement in general, which nearly any CDP could support to some degree.

But as I gathered answers from the two dozen vendors who will be included in the CDP Institute’s comparison report, I found that at best one or two provide the type of attribution I had in mind. This wasn't enough to include in the screening list. But there was an impressive variety of alternative answers to the question. Those are worth a look.

- Marketing mix models.  This is the attribution approach I originally intended to cover. It gathers all the marketing touches that reach a customer, including email messages, Web site views, display ad impressions, search marketing headlines, and whatever else can be captured and tied to an individual. Statistical algorithms then look at customers who had a similar set of contacts except for one item and attribute any difference in performance to that.  In practice, this is much more complicated than it sounds because the system needs to deal with different levels of detail and intelligently combine cases that lack enough data to treat separately.  The result is an estimate of the average value generated by incremental spending in each channel. These results are sometimes combined with estimates created using different techniques to cover channels that can’t be tied to individuals, such as broadcast TV. The estimates are used to find the optimal budget allocation across all channels, a.k.a. the marketing mix.
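The core of the method – comparing customers whose sets of contacts are identical except for one item and attributing the performance difference to that item – can be sketched in a few lines of Python. This toy version ignores all the real-world complications mentioned above (sparse data, varying levels of detail, combining similar cases), and the data is invented:

```python
from collections import defaultdict

def incremental_lift(customers, channel):
    """Estimate a channel's incremental conversion lift by comparing customers
    whose sets of touches are identical except for that one channel."""
    outcomes = defaultdict(list)          # touch set -> list of 0/1 conversions
    for touches, converted in customers:
        outcomes[frozenset(touches)].append(converted)

    lifts = []
    for touch_set, results in outcomes.items():
        if channel not in touch_set:
            continue
        baseline = outcomes.get(touch_set - {channel})
        if baseline:                      # a matching group lacking only this channel
            rate_with = sum(results) / len(results)
            rate_without = sum(baseline) / len(baseline)
            lifts.append(rate_with - rate_without)
    return sum(lifts) / len(lifts) if lifts else None

# invented data: (channels touched, converted?)
data = [
    ({"email", "display"}, 1), ({"email", "display"}, 0),
    ({"email"}, 0), ({"email"}, 0),
    ({"display"}, 0),
]
print(incremental_lift(data, "display"))  # 0.5
```

A production model would need far more data per combination and statistical controls, but the comparison logic is the same.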

- Next best action and bidding models. These also estimate the impact of a specific marketing message on results, but work at the individual rather than the channel level. The system uses a history of marketing messages and results to predict the change in revenue (or other target behavior) that will result from sending a particular message to a particular individual. One typical use is deciding how much to bid for a display ad impression; another is choosing products or offers to make during an interaction. They differ from incremental attribution because they create separate predictions for each individual based on that person's history and the current context. Several CDP systems offer this type of analysis. But it's ultimately not different enough from other predictive analytics to treat it as a distinct specialty.

- First/last/fractional touch.  These methods use the individual-level data about marketing contacts and results, but apply fixed rules to allocate credit.  They are usually limited to online advertising channels.  The simplest rules are to attribute all results to either the first or last interaction with a buyer.  Fractional methods divide the credit among several touches but use predefined rules to do the allocation rather than weights derived from actual data.  These methods are widely regarded as inadequate but are by far the most commonly used because alternatives are so much more difficult.  Several CDPs offer these methods. 

- Campaign analysis. This looks at the impact of a particular marketing campaign on results. Again, the fundamental method is to compare performance of individuals who received a particular treatment with those who didn't. But there's usually more of an effort to ensure the treated and non-treated groups are comparable, either by setting up A/B test splits in advance or by analyzing results for different segments after the fact. The primary unit of analysis here is the campaign audience, not the specific individuals. The goal is usually to compare results for campaigns in the same channel, not to compare efforts across channels. This is a relatively simple type of analysis to deliver since it doesn't require advanced statistics or predictive techniques. As a result, it's fairly common, and many systems could deliver it even without the vendor building special features for it.

- Content performance analysis. This is very similar to campaign analysis except that audiences are defined as people who received a particular piece of content, which could be used across several campaigns. Again, there might be formal split tests or more casual comparison of results. Some implementations draw broader conclusions from the data by grouping content with similar characteristics such as product, message, or offer. But unless the groups are identified using artificial intelligence, even this doesn't add much technical complexity.

- Journey analysis. Truth be told, no vendor in my survey described journey analysis as a type of incremental attribution. But it does come up in some discussions of marketing measurement and optimization. Like marketing mix and next best action methods, journey analysis examines individual-level interactions to find larger patterns and to identify optimal choices for reaching specified goals. But it looks much more closely at the sequence of events, which requires different technical approaches to deal with the higher resulting complexity.
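The rule-based methods in this list are simple enough to sketch in a few lines. Here's an illustrative Python version of first-, last-, and even-split fractional touch attribution; the data shapes are hypothetical, not any vendor's API:

```python
# Rule-based attribution sketch: allocate a conversion's revenue across
# the ordered list of channels that touched the customer.
def attribute(touches, revenue, rule="last"):
    """touches: ordered channel names for one converting customer."""
    credit = {channel: 0.0 for channel in touches}
    if rule == "first":
        credit[touches[0]] += revenue          # all credit to first touch
    elif rule == "last":
        credit[touches[-1]] += revenue         # all credit to last touch
    elif rule == "even":
        share = revenue / len(touches)         # simple fractional rule
        for channel in touches:
            credit[channel] += share
    return credit

journey = ["display", "email", "search"]
print(attribute(journey, 90.0, rule="first"))  # all $90 to display
print(attribute(journey, 90.0, rule="even"))   # $30 to each channel
```

The weakness described above is visible even in this toy version: the allocation rules are fixed in advance, so nothing in the actual data can change them.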

Marketing measurement is one of the primary uses of Customer Data Platforms. Dropping attribution from the list of CDP screening questions shouldn't be interpreted to suggest it's unimportant. It just means that measurement is too complicated to embed in a simple screening question. As with other important CDP features, buyers who want their CDP to support marketing measurement will need to define their specific needs in detail and then closely examine individual CDP vendors to see who can meet them.

Sunday, February 18, 2018

Will GDPR Hurt Customer Data Platforms and the Marketers Who Use Them?

Like an imminent hanging, the looming execution of the European Union’s General Data Protection Regulation (GDPR) has concentrated business leaders’ minds on their customer data. This has been a boon for Customer Data Platform vendors, who have been able to offer their systems as solutions to many GDPR requirements. But it raises some issues as well.

First the good news: CDPs are genuinely well suited to help with GDPR. They’re built to solve two of GDPR’s toughest technical challenges: connecting all internal sources of customer data and linking all data related to the same person. In particular, CDPs focus on first party (i.e., company-owned) personally identifiable information and use deterministic matching to ensure accurate linkages. Those are exactly what GDPR needs. Some CDP vendors have added GDPR-specific features such as consent gathering, usage tracking, and data review portals. But those are relatively easy once you’ve assembled and linked the underlying data.

GDPR is also good for CDPs in broader ways. Most obviously, it raises companies' awareness of customer data management, which is the core CDP use case. It will also raise consumers' awareness of their data and their rights, which should lead to better quality customer information as consumers feel more confident that data they provide will be handled properly. (See this Accenture report finding that 75% of consumers are willing to share personal data if they can control how it's used, or this PegaSystems survey in which 45% of EU consumers said they would erase their data from a company that sold or shared it with outsiders.) Conversely, GDPR-induced constraints on acquiring external data should make a company's own data that much more valuable.

Collection requirements for GDPR should also make it easier for companies to tailor the degree of personalization to individual preferences.  This Adobe study found that 28% of consumers are not comfortable sharing any information with brands and 26% say that too-creepy personalization is their biggest annoyance with brand content. These results suggest there’s a segment of privacy-focused consumers who would value a privacy-centric marketing approach. (That this approach would itself require sophisticated personalization technology is an irony we marketers can quietly keep to ourselves.)

So, what's not to like? The downside to GDPR is that greater corporate interest in customer data means that marketers will not be left to manage it on their own. Marketing departments have been the primary buyers of Customer Data Platforms because corporate IT often lacks the interest and skills needed to meet marketing needs. GDPR and digital transformation don't give IT new resources but they do mean it will be more involved. Indeed, this report from data governance vendor Erwin found that responsibility for meeting data regulations is held by IT alone at 36% of companies and is shared between IT and all business units (not just marketing) at another 55%. I've personally heard many recent stories about corporate IT buying CDPs.

Selling to IT departments isn’t a problem for CDP vendors. Their existing technology should work with little change.  At most, they'll need to retool their sales and marketing. But marketers may suffer more. Corporate IT will have its own priorities and marketing won’t be at the top of the list. For example, this report from master data management vendor Semarchy found that customer experience, service and loyalty applications take priority over sales and marketing applications. More broadly, studies like this one from ComputerWorld consistently show that IT departments prioritize productivity, security and compliance over customer experience and analytics. Putting IT and legal departments in charge of customer data is likely to mean a more conservative approach to how it's used than marketers would apply on their own.  This may prevent some problems but it's also likely to make marketers' jobs harder.

A greater IT role may also reverse the current trend of adding analytical and marketing applications to CDP data management functions. Marketers generally like those applications because it saves them the trouble of buying and integrating separate analytical and marketing systems. IT departments won’t use those features themselves and will probably be more interested in making sure CDP data can be shared by external applications from all departments. Similarly, IT buyers may favor CDP designs that are less tuned specifically to marketing needs and more open to multiple uses. This will favor some technical approaches over others.

The final result is likely to be a clearer division of the CDP market into systems that focus on enterprise-wide customer data management and systems that give marketers integrated data, analytics, and customer engagement. If both types of vendors find enough buyers to survive, the expanded choice means that everyone wins. But the combined data, analytics, and execution CDPs could be squeezed between data-only CDPs and the integrated applications of the big marketing clouds. If there's not enough room left for them, marketers' choices will be reduced. Should that happen, GDPR will have done CDP vendors and marketers more harm than good.



Friday, February 02, 2018

Celebrus CDP Offers In-Memory Profiles

It’s almost ten years to the day since I first wrote about Celebrus, which then called itself speed-trap (a term that presumably has fewer negative connotations in the U.K. than the U.S.). Back then, they were an easy-to-deploy Web site script that captured detailed visitor behaviors. Today, they gather data from all sources, map it to a client-tailored version of a 100+ table data model, and expose the results to analytics and customer engagement systems as in-memory profiles.

Does that make them a Customer Data Platform? Well, Celebrus calls itself one – in fact, they were an early and enthusiastic adopter of the label. More important, they do what CDPs do: gather, unify, and share customer data. But Celebrus does differ in several ways from most CDP products:

- in-memory data. When Celebrus described their product to me, it sounded like they don't keep a persistent copy of the detailed data they ingest. But after further discussion, I found they really meant they don't keep it within those in-memory profiles. They can actually store as much detail as the client chooses and query it to extract information that hasn't been kept in memory. The queries can run in real time if needed. That's no different from most other CDPs, which nearly always need to extract and reformat the detailed data to make it available. I'm not sure why Celebrus presents themselves this way; it might be that they have traditionally partnered with companies like Teradata and SAS that themselves provided the data store, or that they partnered with firms like Pega, Salesforce, and Adobe that positioned themselves as the primary repository, or simply that they wanted to avoid ruffling feathers in IT departments that didn't want another data warehouse or data lake. In any case, don't let this confuse you: Celebrus can indeed store all your detailed customer data and will expose whatever parts you need.

- standard data model. Many CDPs load source data without mapping it to a specific schema. This helps to reduce the time and cost of implementation. But mapping is needed later to extract the data in a usable form. In particular, any CDP needs to identify core bits of customer information such as name, address, and the identifiers that connect records related to the same person. Some CDPs do have elaborate data models, especially if they're loading data from specific source systems or are tailored to a specific industry. Celebrus takes this second approach, mapping everything to a client-tailored version of its 100+ table model. It does let users add custom fields and tables, so its standard data model doesn't ultimately restrict what the system can store.

- real-time access. The in-memory profiles allow external systems to call Celebrus for real-time tasks such as Web site personalization or bidding on ad impressions. Celebrus also loads, transforms, and exposes its inputs in real time. It isn't the only CDP to do this, but it's one of just a few.


Celebrus is also a bit outside the CDP mainstream in other ways. Their clients have been largely concentrated in financial services, while most CDPs have sold primarily to online and offline retailers. While most CDPs run as a cloud-based service, Celebrus supports cloud and on-premise deployments, which are preferred by many financial services companies.  Most CDPs are bought by marketing departments, but Celebrus is often purchased by customer experience, IT, analytics, and digital transformation teams and used for non-marketing applications such as fraud detection and system performance monitoring.

Other Celebrus features are found in some but not most CDPs, so they’re worth noting if they happen to be on your wish list. These include ability to scan for events and issue alerts; handling of offline as well as online identity data; and specialized functions to comply with the European Union’s GDPR privacy rules.
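The event-scan-and-alert capability follows a common pattern that can be sketched as named rules applied to a stream of events. The rules and event shapes below are hypothetical, purely to illustrate the idea, not Celebrus's actual API:

```python
# Event-scan sketch: apply named rules to a stream of customer events
# and collect an alert for every match.
def scan(events, rules):
    """events: iterable of dicts; rules: name -> predicate over one event."""
    alerts = []
    for event in events:
        for name, matches in rules.items():
            if matches(event):
                alerts.append((name, event.get("customer_id")))
    return alerts

rules = {
    "high_value_cart": lambda e: e.get("type") == "cart" and e.get("value", 0) > 500,
    "repeated_login_failure": lambda e: e.get("type") == "login_failure" and e.get("count", 0) >= 3,
}
stream = [
    {"customer_id": "c1", "type": "cart", "value": 720},
    {"customer_id": "c2", "type": "login_failure", "count": 4},
    {"customer_id": "c3", "type": "page_view"},
]
print(scan(stream, rules))  # alerts for c1 and c2 only
```

Note how naturally non-marketing uses such as fraud detection fit the same mechanism, which matches the non-marketing applications of Celebrus mentioned above.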

And Celebrus is fairly typical in limiting its focus to data assembly functions, without adding extensive analytics or customer engagement capabilities. That's particularly common in CDPs that sell to large enterprises, which is Celebrus' main market. Similarly, Celebrus is typical in providing only deterministic matching functions to assemble customer data.



So, yes, Celebrus is a Customer Data Platform.  But, like all CDPs, it has its own particular combination of capabilities that should be understood by buyers who hope to find a system that fits their needs.

As I already mentioned, Celebrus is sold mostly to large enterprises with complex needs. Pricing reflects this, tending to be "in the six or seven figures" according to the company and being based on input volume, types of connected systems, and license model (term or perpetual; SaaS, on-premise, or hybrid). The company hasn't released its number of clients but says it gathers data from "tens of thousands" of Web sites, apps, and other digital sources. Celebrus has been owned since 2011 by D4T4 Solutions (which looks like the word "data" if you use the right typeface), a firm that provides data management services and analytics.