“A new company starts out making a lot of mistakes. Eager to improve, they collect a lot of data and build new models,” he says. “Over time, these models allow the new company to find the best answers and implement them with great precision. The new company is becoming a mature company that is great at analytics. Then one day the models stop working. The mistakes that fueled the models are now gone, and the analytical models are starving.”
The paradox is that the better the business gets at gathering insights from analytics – and acting on those insights – the more streamlined its operations become. This in turn makes the data resulting from these operations more homogeneous. But over time, homogeneity becomes a problem: variable data—and, yes, mistakes—allow the algorithms to keep learning and optimizing. As the volatility in the new data shrinks, the algorithms no longer have much to work with.
The paradox leads to a rather surprising recommendation: “occasionally you need to intentionally mess things up,” Florian Zettelmeyer says.
“You’re banking variation in your data so you can get the long-term picture,” explains Zettelmeyer, also a marketing professor at Kellogg.
Zettelmeyer and Anderson are academic directors of Kellogg’s Executive Education program on Leading with Big Data and Analytics; they are also writing a book on data science for leaders.
Here, they offer a look at how the best companies have found a way around the Analytics Paradox.
From Optimization to Stagnation
In some ways, the value of big data lies in its messiness—in the often unexpected variation in how events unfold, and in the myriad ways those events reveal connections between variables that can help people make better decisions.
“Theoretically, the best manager for analytics is the one who comes into the office every morning and flips a coin to make all the decisions,” Anderson says. “Because if you make all your decisions by flipping a coin, you’re going to generate the best possible data for your analytics engine.”
“The problem,” he adds, “is that in every company, the coin-flipping manager gets fired very quickly. The managers who survive are the ones who are really good at implementing decisions with great precision.”
To understand how even the best teams can end up with operations too optimized for their own good, Anderson offers a hypothetical example.
“Right now your company offers two-day delivery and someone says, ‘I’d like you to go back and analyze the historical data. Please let me know whether we should keep two-day delivery or move to one-day delivery.’ Could you answer this question with your data?”
If the delivery process is overseen by a high-performing team focused squarely on efficiency, then the answer is most likely no: the data cannot answer this question.
“If you’re really good at delivery—if you run the operations efficiently—how many days does it take? Two days,” says Anderson. “The guy who messed up and took four days to deliver a package got fired. The one who sometimes delivered in three days and other times in one day was fired too. You’re stuck with all the managers who deliver in two days—you’ve built an organization that is so good at delivery that it almost always happens in two days.”
A victim of your own success, you don’t have the data to know whether there is a better delivery strategy or how you could successfully transition to a new model.
“Unless I’m occasionally wrong, I’ll never know if what I think is the best is actually still the best,” says Zettelmeyer.
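Anderson’s delivery example can be made concrete with a minimal numerical sketch. The data and variable names below are hypothetical illustrations, not figures from the article: when every order in the history ships in exactly two days, the delivery-time column has zero variance, so no model fit to that history can estimate the effect of delivery time on any outcome.

```python
import numpy as np

rng = np.random.default_rng(0)
n_orders = 1000

# Hypothetical history from a well-optimized operation:
# every order shipped in exactly two days.
delivery_days = np.full(n_orders, 2.0)
satisfaction = 4.0 + rng.normal(0.0, 0.3, size=n_orders)

# Estimating the effect of delivery time on satisfaction requires
# variance in delivery_days; here there is none, so the least-squares
# slope, cov(x, y) / var(x), is undefined.
var_days = delivery_days.var()
print(var_days)  # 0.0 -- this history cannot say what one-day delivery would do
```

The zero variance is exactly the “starving” condition the professors describe: the optimization that removed the variation also removed the information.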
What the Best Companies Do
Of course, companies have many good reasons for not wanting to lavishly reward incompetence or promote a manager whose decision-making appears to be limited to currency flips.
Instead, leading companies have adopted a fundamentally different strategy for thinking about big data.
“The best companies are investing heavily now in data creation, data design,” Anderson says. “They intentionally inject variability into the data.”
Whether they’re experimenting with how many days it takes to deliver a package, how to set prices, or how best to maintain an aging fleet of vehicles, these elite companies understand that experimentation and variability must be built into the DNA of the organization.
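One standard way to inject variability deliberately is an epsilon-greedy policy from the experimentation literature: usually take the best-known action, but with a small probability try a random alternative. The sketch below is an illustrative assumption on our part—the article does not say which mechanism these companies use—with a hypothetical `choose_option` decision point.

```python
import random

random.seed(42)

def choose_option(best_known, options, epsilon=0.05):
    """Epsilon-greedy: exploit the best-known choice most of the time,
    but with probability `epsilon` explore a random option instead."""
    if random.random() < epsilon:
        return random.choice(options)  # the deliberate "mistake"
    return best_known

# Illustrative decision: how many delivery days to promise.
options = [1, 2, 3]
choices = [choose_option(2, options) for _ in range(10_000)]

# A small share of decisions deviate from the current best,
# generating the varied data the analytics engine needs.
explored = sum(c != 2 for c in choices) / len(choices)
print(round(explored, 3))
```

In practice, epsilon would be tuned to balance the cost of occasionally suboptimal decisions against the long-term value of the data they create.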
“It’s only a small fraction of companies” that do this, maybe five percent, Anderson says.
So what should most managers do differently?
“When you take a business action, you have to keep in mind what the impact is on the utility of the data that’s going to come out of it,” says Zettelmeyer.
This requires the foresight to anticipate the questions you might want to answer in the future, as well as the discipline to work backward from those questions to ensure the data you collect is rich and useful.
A company developing a national advertising campaign, for example, may decide to modify the campaign in significant ways only in select markets, or to stagger the rollout by region. While there may be short-term costs in efficiency and optimization, the resulting data has the potential to teach the company a great deal going forward.
Don’t Relegate Data Science to Data Scientists
Such foresight cannot be the purview of a single employee or group within an organization, the pair stress. This is because decisions about how to experiment must be made with specific problems in mind.
“It spans the entire organization, so it has to be a cultural shift in how we think about our day-to-day operations,” says Anderson.
The key, says Zettelmeyer, is to “get yourself into the situation you’re going to be in in the future.” What data would be useful for making the next decision, and the one after that? What relationship between variables do you want to establish? And how could you design an experiment to demonstrate that connection, given your existing capabilities and limitations?
And keep in mind that the infrastructure this requires may be quite different from what is needed to manage most of the big data flowing through an organization. The high-level dashboards that senior leaders are used to, for example, may not be able to distinguish between subtle but important differences in when a campaign was launched or how a delivery route was created.
“It’s a very different thought process in terms of how you would actually build an IT system to support experimentation,” says Anderson.
So instead of trying to outsource this task to a dedicated data science team—or worse, to a single piece of software—Anderson and Zettelmeyer recommend that companies train managers in how to think about and ask questions of data.
“It requires a working knowledge of data science,” says Zettelmeyer. “That’s a skill set that managers need to realize they have to take on themselves.”