It’s a common approach when we’re looking for something we want but can’t describe perfectly. There’s just one problem: it only works when we’re in familiar territory. When you’re looking for something genuinely unfamiliar—like a novice tennis player shopping for her first racquet or a pharmaceutical company determining the ideal dose of a new drug—it’s hard to recognize a good choice in the first place. In these cases, what is the best way to search, and how do we know when to stop?
“Many situations in everyday life look like these search problems,” says Suraj Malladi, an assistant professor of managerial economics and decision sciences at the Kellogg School. “What they have in common is that, to get a good result, you have to try things out, and what you learn will determine whether or not you keep looking and what kind of thing you look for next.”
For example, should the tennis player look at racquets that are very similar to the first one she tried, or very different? Should the pharmaceutical lab experiment with similar dosage levels for its new drug, or try much higher or lower ones? And when should either of them call off the search?
Understanding both how people search and how they should search goes a long way. An important first step is to pin down the best possible search strategy. Malladi derived such a strategy in a mathematical model that describes how to use the information learned from each trial to efficiently home in on a “sweet spot” of reasonably good choices. As a research tool, the model could shed light on how people make decisions with imperfect information. It could also be turned into an algorithm to help optimize these real-world decision-making processes.
“Search is expensive—we can’t do it forever, so the goal isn’t to find the best thing we possibly can,” Malladi explains. Instead, “the idea is that there are more and less painful ways” to figure out where the good options are — “and I’m trying to solve the problem in the least painful way.”
Avoiding pain, finding gain
Malladi’s model starts with the assumption that a researcher—say, the pharmaceutical company looking for an ideal dose for its new drug—wants to avoid two bad outcomes. The first is settling for a suboptimal dose too soon, when a more effective one may be just around the corner. The second is going on a wild goose chase: wasting time and resources trying doses that don’t work much better than the best one already found.
“These are both bad strategies,” Malladi explains. “If I give up too soon, then I’m missing out on something really good. But if I try to find the perfect dose, it could take many years and billions of dollars.”
To capture how people learn from past discoveries and trade off risks when searching unfamiliar territory, the model makes two more assumptions about search.
The first is that, all things being equal, similar picks are likely to yield similar results – meaning it’s not always necessary to do an exhaustive search to guess the payoffs. In the case of the pharmaceutical laboratory, this means that similar doses of a drug will probably have similar effects. If they try a dose that works well, it stands to reason that they are in (or at least close to) a sweet spot, and so similar options may also work well or slightly better. But if a dose works badly, there is little point in trying another dose in the same ballpark; searching elsewhere would probably yield better results.
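To see how this assumption does real work, here is a toy illustration in code (the numbers, and the “rate” at which payoffs can change between nearby doses, are invented for the example; this is not Malladi’s model itself). Every trial already run places a cap on how good any untried dose can be:

```python
# Toy illustration of the similarity assumption: if payoffs can change by
# at most `rate` per milligram of dose, every trial already run caps how
# good any untried dose can possibly be. All numbers are hypothetical.
def best_possible_payoff(untried_dose, trials, rate):
    """Upper bound on an untried dose's payoff, given observed
    (dose, payoff) pairs and a bound on how fast payoffs can change."""
    return min(payoff + rate * abs(untried_dose - dose)
               for dose, payoff in trials)

# A weak showing at 10 mg and a decent one at 40 mg.
trials = [(10.0, 2.0), (40.0, 6.5)]
print(best_possible_payoff(20.0, trials, rate=0.3))
# -> 5.0: the weak result at 10 mg caps what a dose near it can deliver,
#    so there is no point testing in that ballpark when 6.5 is in hand.
```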
The second assumption is that, when searching in unfamiliar territory, the goal is to find strategies that work reasonably well no matter what the searcher turns out to be dealing with. For example, imagine that the pharmaceutical laboratory conducts its first trial of a new drug and a high dose results in toxic side effects. If they stop there, the worst-case scenario is that they gave up too early and missed a low or intermediate dose that works well. On the other hand, if they plan to try five additional dosage levels, the worst-case scenario is that all of that additional effort still yields disappointing results. Which of these myriad strategies the searcher should pursue depends on whether the cost of missing out or the cost of additional searching is higher.
The key, however, is that the model iteratively updates what the worst-case scenario looks like based on the information gleaned from each trial, and it adjusts the search accordingly. Say the drug company runs several trials over a series of lower doses, all of which have equally good—but not great—results. If they stop looking now, a slightly better dose may remain undiscovered. But because a neighborhood of good doses has already been found, missing out on that slightly better dose costs less than running additional tests — so it makes sense to stop looking.
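In rough computational terms, the stop-or-continue decision amounts to comparing the most that any untried dose could still deliver against the price of one more trial. The sketch below uses the same kind of invented similarity bound as before, with hypothetical figures throughout; the paper’s actual machinery is more general.

```python
# Sketch of the stop-or-continue decision: quit when even the worst case
# of quitting (the best payoff an untried dose could still hide) is worth
# less than the cost of another trial. All figures are hypothetical.
def should_stop(trials, candidate_doses, rate, trial_cost):
    best_found = max(payoff for _, payoff in trials)
    # Highest payoff any candidate dose could still have, assuming
    # payoffs change by at most `rate` per milligram of dose.
    ceiling = max(min(payoff + rate * abs(c - dose) for dose, payoff in trials)
                  for c in candidate_doses)
    return ceiling - best_found <= trial_cost

# Several nearby doses all scored around 7 -- a decent neighborhood.
trials = [(20.0, 7.1), (25.0, 7.0), (30.0, 6.9)]
grid = [i * 0.25 for i in range(201)]   # candidate doses from 0 to 50 mg
print(should_stop(trials, grid, rate=0.1, trial_cost=2.5))
# -> True: nothing left to discover is worth the price of another trial.
```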
“You repeatedly ask yourself how things could go wrong, and then you guard against those outcomes,” says Malladi. “It’s not that you actually expect the worst-case scenario to happen. But when you follow a process like this, even in the worst case, you’re doing pretty well — which means that in all other cases, you’re also doing well.”
What smart searches look like
Malladi identified the best strategy and noticed that it exhibits certain patterns. As the difficulty of finding a neighborhood of good choices increases, it makes more sense to keep looking — but only up to a certain point. Beyond that point, the effort required to locate good results is so large that, Malladi’s model implies, the optimal decision may be not to search at all.
“You can get a sense of how difficult the problem is, and that will influence how you actually conduct the search,” Malladi explains. “If a pharmaceutical company knows that a particular compound goes from ineffective to toxic very quickly with small changes in dosage, finding a good dose will be like finding a needle in a haystack. You’d have to look so hard to figure out where it is that you just say, ‘No, I’m not even trying.’”
For searches worth starting—that is, ones where the good options, if they exist at all, span a fairly wide range—Malladi found that the optimal pattern always looks the same. As the searcher repeatedly tries options that guard against the worst-case scenario, those choices naturally funnel toward the sweet spot.
In the case of the pharmaceutical lab, for example, imagine that the lab’s first attempt to find a good dose of the drug returns bad news: the low dose they tried is ineffective. Since it’s clearly nowhere near the sweet spot, the lab chooses a much higher dose to test next. Bad news again: this higher dose turns out to be toxic. But these trials have now established a potential “floor” and “ceiling” that can help them avoid more bad outcomes. The lab tests a new dose near the middle, and this time, the drug works better, indicating it’s closer to the sweet spot than before. “In these cases, the optimal search process will involve bouncing back and forth” within a narrowing range of options, Malladi explains.
This funneling pattern can be translated into an algorithm that could, in theory, run on any computer. “If you’re really interested in computing a solution to this search problem in various cases, you can do it,” he says.
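For a rough sense of what such an algorithm might look like, here is a minimal sketch that strings the pieces above together into a funneling loop: always test the dose that could still pay off the most, and quit once the worst case of quitting is cheaper than another trial. The dose-response curve, similarity rate, and trial cost are invented for illustration, and the selection rule is in the spirit of classic Lipschitz search rather than a transcription of Malladi’s derivation.

```python
# Minimal sketch of a funneling search over doses in [low, high].
# Each round tests the dose that could still pay off the most; the loop
# stops when even the worst case of quitting now beats the cost of one
# more trial. Hypothetical numbers throughout.
def funnel_search(run_trial, low, high, rate, trial_cost, max_trials=40):
    grid = [low + i * (high - low) / 400 for i in range(401)]
    # The first two trials establish a rough "floor" and "ceiling".
    tried = [(low, run_trial(low)), (high, run_trial(high))]

    for _ in range(max_trials - 2):
        best_dose, best_payoff = max(tried, key=lambda t: t[1])

        def ceiling(x):
            # Most optimistic payoff dose x could still have, given every
            # trial so far and the similarity bound `rate`.
            return min(p + rate * abs(x - d) for d, p in tried)

        next_dose = max(grid, key=ceiling)
        if ceiling(next_dose) - best_payoff <= trial_cost:
            return best_dose, best_payoff   # quitting is the cheaper worst case
        tried.append((next_dose, run_trial(next_dose)))

    return max(tried, key=lambda t: t[1])


def response(mg):
    """Hypothetical dose-response curve: a sweet spot near 55 mg, weak
    low doses, increasingly harmful high doses."""
    return 10.0 - (mg - 55.0) ** 2 / 100.0

print(funnel_search(response, low=0.0, high=100.0, rate=1.2, trial_cost=2.0))
```

On this made-up curve, the trials start at the extremes, bounce back and forth inside a shrinking range, and settle near the sweet spot once the remaining upside no longer covers the cost of another test.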
Funneling to success
Does this mean Malladi has created a mechanical oracle that can tell tennis-racquet buyers and pharmaceutical companies how to find exactly what they’re looking for? Not exactly. “But it gives us a framework to think about how people solve these problems,” he says. “And now there are things we need to check. Do people behave in ways that even come close to this optimal approach?”
Malladi is currently running experiments to investigate this question. But in the meantime, his theoretical model could have practical applications. An online shopping platform, for example, could use it to offer more useful suggestions to customers looking for products they don’t know much about.
“This is a good model to use when you’re figuring out what your preferences are in a particular product category,” he explains. “If you’re trying to find the right digital camera, maybe earlier in your search I’ll show you related products that are very different from what you’re looking at now. And later, as you home in on what you want, the cameras I recommend may be more closely related.”
Companies chasing innovation—whether prototyping new products or developing new drugs—could also make use of this “funnel” model. “It says, ‘Here’s how you should actually set up your sequence of experiments to home in on a sweet spot,’” says Malladi. “You and I could come up with all kinds of approaches, but what’s the basis for choosing one over the other? With this process, whatever happens in your search, you can’t go too far wrong. And in that sense, it’s optimal.”