
Data-Driven Decision Making - Part 2: Systems of Record, Engagement, and Intelligence

By Geoffrey Moore, Nov 14, 2017

Data-driven decision-making: who doesn’t think it is a good idea? But it typically has a rough go in the real world of enterprise management, in part because the data itself often proves unreliable. For much of my business life, IT has been tasked with building systems that could represent a single source of the truth. Unfortunately, that quest proved to be right up there with the holy grail and the fountain of youth—at best, aspirational, at worst, delusional.

Today we have an opportunity to make a great leap forward, however, because for the first time in history we have broad access to high-volume data from a variety of sources that, when matched against each other, dramatically increase the probability of something like truth, and do so in a time window that is actionable. Not everyone, of course, has access to all the sources, so to kick things off let me present a framework of the possible, within which each organization can determine what its actual will be.

Read Part 1: A Single Source of the Truth

Data-driven decision-making with Systems of Record

This has been the baseline for enterprise IT ever since we stopped calling it data processing and started calling it management information systems (roughly 1980), and it really came into its own with the rise of business intelligence in the 1990s. The goal is to generate insights from the transaction data by transferring it to a data warehouse and applying a variety of tools to generate reports and analyses. Once these are made available to management, various teams do deeper dives to better understand and control their operations, generating insights that are input to a qualitative decision-making process conducted through staff meetings and quarterly business reviews.
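To make that flow concrete, here is a minimal sketch in Python of the record-to-report pattern described above: transactions are loaded into a warehouse-style table and rolled up into a quarterly view. The file, table, and column names are hypothetical, and a SQLite file stands in for a real data warehouse.

    # Minimal systems-of-record sketch: load transaction data into a
    # warehouse-style table, then roll it up for a quarterly business review.
    # File, table, and column names are hypothetical.
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("warehouse.db")  # stand-in for a real warehouse

    # Extract and load: copy raw transactions into the warehouse.
    orders = pd.read_csv("orders_export.csv", parse_dates=["order_date"])
    orders.to_sql("fact_orders", conn, if_exists="replace", index=False)

    # Report: revenue by region and quarter, the kind of rollup that feeds
    # staff meetings and quarterly business reviews.
    report = pd.read_sql_query(
        """
        SELECT region,
               strftime('%Y', order_date) AS year,
               (CAST(strftime('%m', order_date) AS INTEGER) + 2) / 3 AS quarter,
               SUM(amount) AS revenue
        FROM fact_orders
        GROUP BY region, year, quarter
        ORDER BY region, year, quarter
        """,
        conn,
    )
    print(report)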

One of the key challenges in this data-driven approach is a lack of a “single source of the truth.” The data is often fragmented, coming from disconnected internal systems, and typically supplemented with private data sources via spreadsheets that are not replicated elsewhere, all of which leads to conflicting representations of what is supposedly the same situation. Things are not helped by the fact that different factions within the enterprise often have different axes to grind, calling into question the integrity of the entire process. All this creates enormous frustration among decision-makers, and the most common way to resolve it is to default to the HiPPO (Highest Paid Person’s Opinion). This isn’t crazy—presumably that person is being paid a lot for a reason—but the likelihood that the data has been machined to manipulate their decision is high, and so also is the potential to drift toward increasingly bad decisions.

That said, in a prior era when demand exceeded supply for most goods and services, companies had time to recover from bad decisions since customers had few options to take their business elsewhere. With the turn of the century, however, the demand/supply equation has flipped, and now it is the customer who has the power to defect and the supplier who is left holding the bag. That makes it imperative to get closer to the customer, which in turn has been driving a massive and ongoing investment in systems of engagement. And that, as we shall see, creates a new kind of decision-making.

Data-driven decision-making with Systems of Engagement

The goal here is to generate insights from digitally enabled contacts with the customer supplemented with information collected in the customer-facing systems of record. The data is by definition external and for the most part explicit, coming from call center logs, chat rooms, A/B testing, customer satisfaction surveys, or marketing automation responses, for example, supplemented by some tacit data from things like website clickstreams and abandoned shopping carts. As with systems of record, this data is fed into a qualitative decision process, but this time the decision-makers are more likely to be part of a cross-functional team since the interests of the customer quickly transcend the boundaries of any one function within the enterprise. These teams are normally staffed by middle managers with no HiPPOs in the room, meaning that the decision process is more collaborative, and the outcomes tend to be more tentative and provisional.
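To ground one of the explicit signals above, here is a minimal sketch of how an A/B test readout might be summarized before it reaches such a team. It uses a standard two-proportion z-test with made-up conversion counts; it illustrates the kind of evidence in play, not any particular vendor's tooling.

    # Minimal A/B test readout: a two-proportion z-test (normal approximation)
    # on hypothetical conversion counts for a control page and a variant.
    from statistics import NormalDist

    def ab_readout(conv_a, n_a, conv_b, n_b):
        p_a, p_b = conv_a / n_a, conv_b / n_b
        p_pool = (conv_a + conv_b) / (n_a + n_b)
        se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
        return p_a, p_b, z, p_value

    # Hypothetical counts: 10,000 visitors per arm.
    p_a, p_b, z, p_value = ab_readout(conv_a=420, n_a=10_000, conv_b=488, n_b=10_000)
    print(f"control {p_a:.2%}, variant {p_b:.2%}, z = {z:.2f}, p = {p_value:.3f}")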

The biggest technical challenge with this type of data-driven decision-making is integrating the customer-facing systems of engagement, which are typically SaaS applications running in public clouds, with data from the systems of record, which are typically licensed software applications running in on-premise data centers. At the same time, at the business relationship level, there is a similar disconnect: customer issues rarely align with organizational boundaries, so there is an inherent complexity entailed in bringing any call to action to fruition. Cross-functional teams are empowered to look, see, and recommend, but they can’t then just pull the lever of change. Securing anything like a timely response in a hierarchical management system is nigh on impossible—hence the growing interest in a variety of more collaborative operating models: customer-centric design, agile development, and ongoing course correction—all designed with a bias toward action.

Overall, this is a data-driven approach, but more often than not it is anecdotes and dialogs that do the heavy lifting. It yields significantly better results than navigating by one’s systems of record alone, but it is highly prone to error, so much so that it builds error-correcting cycles into its core methodology. Thus, if one could find a more productive approach, there is plenty of payback to be had. This observation has not been lost on a host of disruptive innovators, led by the likes of Amazon, Uber, Airbnb, and Netflix, who are wreaking havoc in industries that never thought of themselves as high tech before. Now they must, and that’s what’s driving all the interest and investment in systems of intelligence.

Data-driven decision-making with Systems of Intelligence

When we move to systems of intelligence as a platform for externally driven, customer-centric decision-making, the biggest difference is the shift from explicit data to tacit data. That is, rather than counting on people to report on what matters, and then counting on other people to interpret those reports correctly, systems of intelligence interrogate log files that record the actual behavior of assets and people, and then use machine learning to extract signals of economic consequence. These signals are typically augmented with human expert input at the beginning to train algorithms to recognize actionable states and prescribe the appropriate responses. Over time, however, the algorithms become better than the humans, and at that point, you want them to be making the decisions, as they do today in algorithmic trading of equities, online fraud detection and security systems, and dynamic digital advertising. At the same time, you can’t allow them to go rogue, so these efforts must be constrained through policy, oversight, and human review.
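A minimal sketch of that pattern, under assumed data, might look like the following: behavioral features derived from log files, initially labeled by human experts, train a model that flags actionable states (here, churn risk). The data file, feature names, and threshold are hypothetical, and, per the caveat above, the model only recommends; a policy threshold and human review still gate the action.

    # Minimal systems-of-intelligence sketch: expert-labeled, log-derived
    # behavioral features train a classifier that flags actionable states.
    # The data file, feature names, and threshold are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import classification_report
    from sklearn.model_selection import train_test_split

    events = pd.read_csv("session_features.csv")  # features extracted from log files
    features = ["sessions_last_30d", "avg_session_seconds",
                "support_tickets", "days_since_last_login"]
    X, y = events[features], events["expert_label_churn_risk"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    model = GradientBoostingClassifier().fit(X_train, y_train)
    print(classification_report(y_test, model.predict(X_test)))

    # Policy constraint: the model only flags accounts; the prescribed response
    # is routed to human review rather than executed automatically.
    flagged = X_test[model.predict_proba(X_test)[:, 1] > 0.8]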

Because it is early days, scarce data science expertise makes it challenging for many enterprises to invest in systems of intelligence, but that pressure is easing as the major cloud computing vendors—Amazon, Microsoft, Google, and IBM—are all making machine learning and artificial intelligence resources available online, supplemented with consulting help. In addition, the sheer data volumes involved are mind-boggling to anyone who grew up in a prior era, entailing wholly different approaches to where and how data is stored and analyzed. Again, however, the cloud vendors are more than eager to help. A more persistent challenge, on the other hand, is posed by data privacy. Where tacit data is being used to infer personal preferences, a boundary is crossed, and the issue is, what is the appropriate social contract to govern this domain? This is all new ground, and we can expect a broad range of responses. The immediate workaround is regulation through legislation, which, as we know from our experience working out from under the financial crises of 2001 and 2008, will initially be awkward, painful, and expensive. And finally, as if that were not enough to occupy our regulatory bodies, we have a fourth class of system looming on the horizon: systems of autonomy. These promise to bring to the physical world the astounding productivity improvements that algorithmic computing is bringing to digital processes.

Read Part 3: Systems of Autonomy

Originally published on LinkedIn Pulse.

About the author


Geoffrey Moore is an author, speaker, and advisor who splits his consulting time between start-up companies in the Mohr Davidow portfolio and established high-tech enterprises, most recently including Salesforce, Microsoft, Intel, Box, Aruba, Cognizant, and Rackspace.

Moore’s life’s work has focused on the market dynamics surrounding disruptive innovations. His first book, Crossing the Chasm, focuses on the challenges start-up companies face in transitioning from early adopters to mainstream customers. It has sold more than a million copies, and its third edition has been revised such that the majority of its examples and case studies reference companies that have come to prominence within the past decade. Moore’s most recent work, Escape Velocity, addresses the challenge large enterprises face when they seek to add a new line of business to their established portfolio. It has been the basis of much of his recent consulting. Irish by heritage, Moore has yet to meet a microphone he didn’t like and gives between 50 and 80 speeches a year. One theme that has received a lot of attention recently is the transition in enterprise IT investment focus from Systems of Record to Systems of Engagement. This is driving the deployment of a new cloud infrastructure to complement the legacy client-server stack, creating massive markets for a next generation of tech industry leaders.

Moore has a bachelor’s degree in American literature from Stanford University and a PhD in English literature from the University of Washington. After teaching English for four years at Olivet College, he came back to the Bay Area with his wife and family and began a career in high tech as a training specialist. Over time he transitioned first into sales and then into marketing, finally finding his niche in marketing consulting, working first at Regis McKenna Inc., then with the three firms he helped found: The Chasm Group, Chasm Institute, and TCG Advisors. Today he is chairman emeritus of all three.

