
What’s Your Deployment Score?

Analytics, we have often assumed, is all about modeling, statistics, and algorithms. The more generous explainers of the field would admit that it is also about data—the integration, cleaning, and massaging of it. Few, however, think first of “deployment” when the subject of analytics is raised.

And that is a problem. Great data and fantastic models have no impact on the world unless they are deployed into operational systems and processes. If they’re not used to improve marketing campaigns, decide on optimal inventory levels, or influence hiring processes, they might as well not exist. The same goes, by the way, for AI models and systems. If they’re not deployed, they’re just expensive and interesting baubles.

For the last year or so I’ve been asking analytical and AI leaders in big companies what percentage of their models actually get deployed. Granted, it’s a tricky question, because most analysts or data scientists play around with lots of different models before they choose the best one. But ideally the period of exploration stops at some point, and the model most likely to predict or discriminate well is designated as the one to use. Then the question becomes: how many of those actually make it into production?

The numbers I’ve heard for deployment percentages are not generally impressive. One company said the number was zero; at least with a score that low, the analytics group knew they had a problem and began to address it. Most who guess at it say 50% or less. One survey, sponsored by Alteryx, said that the average deployment rate was 13%.

It is possible to have a high score, however. Kimberly Holmes, formerly head of “Strategic Analytics” at insurance company AXA XL (and recently moved to be Senior Vice President, Chief Actuary and Strategic Analytics Officer at Kemper Insurance), carefully measured her deployment rate at AXA XL and ensured that it was 100%. However, she notes that there are really two different deployment rates: the percentage of analytical models that are officially put into production, and the percentage that are actually used in management decision-making. She knows the first number was 100%, but the second is difficult to measure precisely. She noted in an email, “Different users adopt [an analytical model] in different ways—some relying on it for almost all decision making and some using it for planning account strategies. I typically use ‘nearly 100%’ to describe adoption.”

Improving Your Deployment Rate

How do you get better deployment rates? One approach is to define success in terms of deployment rather than the creation of great models. At Procter & Gamble a few years ago, the analysts in the central analytics group were themselves “deployed” to business units, and a primary criterion for their compensation and promotion was developing at least one breakthrough business result each year (as judged by the business leader to whom they were assigned). That approach had the desired effect of improving deployment, and it worked well enough that P&G no longer feels the need to use it.

Another approach I’ve heard of, at Farmers Insurance, is to take a “pipeline” view of implementation for AI projects. They classify each project as “discovery,” “pilot,” or “production,” and try to move it toward production. Excessive piloting is a problem for AI that I have noted elsewhere, in part because companies are still experimenting with the technology. But that can’t be our excuse for analytics. Creating a pipeline means keeping track of all your analytics or AI projects, noting their current status, and discussing with project leaders what it will take to move each one to the next step, and the one after that.
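To make the pipeline idea concrete, here is a minimal Python sketch of the kind of record-keeping it implies; the Project class, the stage names, and the deployment_rate function are my own illustrative assumptions rather than Farmers’ actual tooling. The same tally is what produces the deployment score discussed above.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative stages, mirroring the "discovery" / "pilot" / "production"
# classification described above (names are assumptions for this sketch).
class Stage(Enum):
    DISCOVERY = "discovery"
    PILOT = "pilot"
    PRODUCTION = "production"

@dataclass
class Project:
    name: str
    stage: Stage

def deployment_rate(projects: list[Project]) -> float:
    """Fraction of tracked projects that have reached production."""
    if not projects:
        return 0.0
    deployed = sum(p.stage is Stage.PRODUCTION for p in projects)
    return deployed / len(projects)

# Hypothetical portfolio: one deployed model out of three tracked projects.
portfolio = [
    Project("churn model", Stage.PRODUCTION),
    Project("pricing model", Stage.PILOT),
    Project("lead-scoring model", Stage.DISCOVERY),
]
print(f"Deployment score: {deployment_rate(portfolio):.0%}")  # Deployment score: 33%
```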

An entirely different approach to shifting the analytical focus toward deployment is to make modeling and algorithm development so easy and automated that it takes up much less of an analyst or data scientist’s time. That is what is possible with automated machine learning, to name just one technology. If an AutoML system is creating models for you, that should make it possible to spend time on understanding the business problem, building relationships with decision-makers, and other activities that facilitate deployment.
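For concreteness, here is a minimal sketch of that hand-off, assuming the open-source FLAML library as the AutoML engine; the dataset, time budget, and other parameters are placeholders, and any AutoML tool with a similar fit/predict interface would serve the same purpose.

```python
# Minimal AutoML sketch, assuming the open-source FLAML library
# (pip install flaml); other AutoML tools follow a similar pattern.
from flaml import AutoML
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

automl = AutoML()
# Hand model and hyperparameter search to the tool under a fixed time
# budget (in seconds), freeing the analyst for deployment-oriented work.
automl.fit(X_train=X_train, y_train=y_train, task="classification", time_budget=60)

print(automl.best_estimator)       # name of the learner FLAML selected
print(automl.predict(X_test)[:5])  # predictions from the selected model
```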

Deployment generally involves information technology, and full-scale production deployments involve integration with the existing technology architecture and enterprise systems. That means that AI and analytics folks need to develop close and trusting relationships with corporate IT folks if they are to improve their deployment rates. Everybody seems to like doing end runs around IT these days to implement shadow IT, but that strategy will eventually bite you in the rear.

Kimberly Holmes has two other approaches to improving deployment. They are, in her words:

  1. Solve stakeholders' problems/opportunities that they want solved. In other words, we don't build models for people who don't want models. This is THE most important factor in choosing which models to build. Way more important than ROI. Without this, there is no "R" in ROI. A corollary to this is that change management starts with project selection, not when the model is done.

  2. Deploy in a way that stakeholders cannot opt out of using the model. This includes automation but the most important thing is measuring decision making. What gets measured gets done. Is the outcome you are trying to affect really changing? The transparency of reporting at the decision-maker level is very motivational.

The Traits of Successful Deployers

As is often the case in analytics, we may be hiring people for quantitative jobs who are not well suited to making deployment the primary objective. Instead of, or at least in addition to, hiring for mathematical or statistical brilliance or a shiny Ph.D. in some quantitative field, we should be looking for attributes like these:

  • The ability to build and maintain relationships with stakeholders;

  • An understanding of organizational politics, and willingness to play them when necessary;

  • An orientation to long-term planning and patience—deployment can require both;

  • The ability to translate business needs into analytics or AI requirements;

  • The ability to engineer and build bulletproof systems, or oversee that development;

  • A knowledge of how managers typically make decisions and how to improve decision processes;

  • The confidence and diplomacy to “ruffle feathers” when deployment is threatened;

  • Some evidence of successful deployment in the past.

Perhaps obviously, these skills won’t always coincide with the math capabilities that have traditionally distinguished analytics and AI types. All the more reason why analytics and AI groups should have teams of people with different characteristics. Someone who understands analytics and algorithms, but who also has the above deployment-oriented capabilities, would be an incredibly valuable member of the team.

It may be somewhat painful to calculate your deployment score, but it is perhaps the single most important indicator of how valuable your analytics or AI group is to your organization. Biting the bullet and measuring the score, and then working diligently to improve it, will help to significantly extend both the lifespan of analytics and AI within your company and your own career in that exciting field.