BT & DTS EXECUTIVE UPDATE VOL. 17, NO. 3
As business professionals we're trained (or born) to value data and analysis. Despite a plenitude of data and analysis, however, many decisions do not produce the desired outcome. To my mind, the most frustrating decision-making failures are those based on an apparently solid foundation of data and analysis.
Too often, the frustration of well-reasoned decisions that nonetheless lead to undesirable outcomes ends in executive concessions to reality (e.g., "guess we missed that nuance") rather than in introspection on the decision-making process. We've all felt the pang of doubt, wondering whether we had "enough" data or performed "enough" analysis. We fall back on hunting for the parts of reality that surprised us and that we didn't account for, and we vow to account for them next time.
The problem, however, as I'll describe in this Executive Update, is not the volume of data or the days of analysis; it is the data-and-analysis mindset itself. We are trained to think that more data and more analysis are the solution, when in fact they contribute to the problem.
Data and analysis are the mainstays of deterministic decision making. In conversational language, deterministic decision making relies on five elements: data, analysis, logic, reasoning, and judgment. What could be bad about that? It turns out, plenty. There are three specific strikes that deterministic decision-making methods have against them:
- Four of the five elements of deterministic methods rely on humans.
- The data that deterministic methods rely on is often limited, late, and old.
- We tend to rely on deterministic decision making to the exclusion of other methods, and we apply it to far too many decisions.
Let's expand on each of these strikes. Analysis, logic, reasoning, and judgment are all "human-in-the-loop" elements of deterministic decision making. The infinite fallibility of humans is the all-too-easy excuse for notching this strike against the methods, but that would make it sound as though computers would do better. I am not the first person to point out that computers are programmed by humans, so all a computer can do is what the humans (in their infinite fallibility) have programmed it to do, only a lot faster and at greater depth.
This particular strike applies not so much to the humans involved as to the reasoning humans use -- whether in wetware (our brains) or in software. That underlying reasoning is the culprit. Humans shape the data and its subsequent handling, and as soon as we do that, we are no longer dealing with the data so much as with what we think about the data. So, nearly instantly, we jump from data and facts to perception, and we all know the biases of perception. At that point, the data is nearly pointless.
Now let's look at the second strike: the problems with data. Is there ever enough data? Few people would argue that we have too much, although many lament the inability to make sense of it all. Yet an excess of data makes it difficult, if not impossible, to use. As a result, we resort to whatever reasoning we can muster to find definitive meaning in the data, even though such meaning is tenuous at best and, at worst, not there to be found. Making sense of it all would require an advanced understanding of the mechanisms that generated the data in the first place -- an understanding that, as I'll describe later in this Update, is sorely lacking.
The primary limitations of the data are just that: our inability to make sense of it, coupled with uncertainty about whether we're getting it from the best sources. We take it from wherever we can get it and accept that better data would be a gift. A further limitation is that the data's utility is questionable because the source is often unstable, unreliable, or inconsistent (though we are often not aware of this at all). Even when the source is solid, how quickly do we get the data? And once we have it, how long do we take to process it, analyze it, and make decisions based on it? By the time we make a decision, is the reality of the situation even close to the conditions the original data described? Too often, the answer is "no." The decisions we make typically apply to a reality that has long since passed.
So what of the methods themselves? If not deterministic, what else is there? This third strike is my launch pad for an alternative approach to decision making and an antidote to our overreliance on deterministic approaches. There are other methods, and I will divulge one shortly in this Update, but before I do, there's a point I would like to make about scope: deterministic decision making has its place; it just doesn't belong everywhere for everything.
Deterministic decision-making approaches are actually excellent. They are certainly capable of being much stronger than many other approaches: tossing a coin, throwing darts, "man-on-the-street" interviews, reading tea leaves, sacrificing to idols, and so on. I am not putting deterministic decision making into the same category as these laughable approaches; on the contrary, it evolved as the more rigorous alternative to them. Its strength is the very fact that it relies so heavily on human involvement. The problem arises when we use deterministic approaches for too many decisions.
Although human involvement in decision making can be a strength, it isn't necessary (or appropriate) for every decision. In the typical business environment, many decisions (too many, in my opinion) require the involvement of an executive -- often a senior executive -- whose time is limited and expensive. Is there a better way?
Many decisions really don't require executives at all. They involve executives only because the organization has no other method. Other methods would relieve executives of the burden of very common decisions and involve them only in uncommon situations -- the decisions where human thinking, with all its biases, emotions, and perceptions, adds the most value. Involving executives in decision making is expensive, disruptive, and slow, yet far too many operations rely on executives to decide what ought to be routine day-to-day matters. I've met many executives frustrated by this very situation.
For simplicity's sake, we can categorize decisions into one of two buckets:
- Decisions on ordinary shifts in work (aka "common causes of variation")
- Decisions on extraordinary shifts in work (aka "special causes of variation")
The point is that deterministic decision making has its greatest value when dealing with out-of-the-ordinary circumstances -- but is being used for much of the mundane grind.
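To make the two buckets concrete, here is a minimal sketch in Python of one conventional way to separate ordinary from extraordinary shifts: let the process's own historical behavior set the limits, and flag only the points that fall outside them. The weekly lead-time figures, the baseline period, and the 3-sigma threshold are invented assumptions for illustration, not a prescription.

```python
# Minimal sketch: classifying shifts in work by the kind of variation behind them.
# All lead-time numbers are invented purely for illustration.

from statistics import mean, stdev

# Historical weekly lead times (days) while the process was behaving normally.
baseline = [8.2, 7.9, 8.5, 8.1, 7.8, 8.4, 8.0, 8.3, 7.7, 8.1]

center = mean(baseline)
sigma = stdev(baseline)
upper, lower = center + 3 * sigma, center - 3 * sigma  # classic 3-sigma limits

# New observations to classify.
recent = {"week 11": 8.3, "week 12": 7.9, "week 13": 11.6, "week 14": 8.2}

for week, value in recent.items():
    if lower <= value <= upper:
        # Ordinary shift (common cause): routine handling, no escalation needed.
        print(f"{week}: {value} days -> common cause, handle routinely")
    else:
        # Extraordinary shift (special cause): where human judgment earns its keep.
        print(f"{week}: {value} days -> special cause, escalate and investigate")
```

The design point is not the particular threshold; it's that the classification comes from the process's measured behavior, so only the genuinely extraordinary shifts ever reach a decision maker.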
We've all been there. Whether in software and IT, consumer electronics, or medical manufacturing, something breaks "unexpectedly," and now we have to budget, plan, and delay for its replacement. Or we're faced with a choice: start a new project now at the expense of making current projects wait, or hold the new project until current projects free up enough capacity. How do we know the odds that the gamble will pay off? How confident are we that we can "have our cake and eat it too"? Another example is the all-too-common situation of trying to figure out why something went wrong (or right) when there are too many contributing factors to separate the relevant data from the irrelevant.
Want a dead giveaway that you're being held back by your own deterministic decision-making approaches? I have two words for you: analysis paralysis.
As I noted earlier, more data and more analysis do not bring us closer to a good answer; they merely pile more of the problem onto the existing problem. We have been conditioned by our culture and business school paradigms to believe this is the solution. Instead of bringing us more certainty, they take us further from it. Our very own trusted methods have been delivering greater uncertainty.
Instead of making decisions at a point where we know the most possible about a narrow set of illuminated options, we've unwittingly put ourselves in a space where we actually know very little about a small set of options drawn from a much wider range of possibilities -- possibilities we inadvertently hid from ourselves.
The problem is that we're entirely unaware of our predicament. Why are the possibilities hidden? Why are there so many, and why are we in this position? Because our deterministic approaches ignored the options along the way and then left them out of the final analysis.
Regardless of how much data we have, our deterministic analytic techniques deliberately narrow the data field. Whether or not we started with too much data that had to be winnowed down to be manageable, our successive iterations of analysis take ranges of data from a variety of sources and narrow them down, excluding the remainder of the data. We then treat these narrow, limited data sets as a single group that all behaves the same way, and we perform analysis after analysis on these points. Each successive analysis convinces us that we're moving closer to a good decision. But because the analysis has left out the rest of the range, and because it handles the data according to our perceptions and biases, we introduce errors just as we do when we round numbers -- only these errors reach much further than a simple "rounding error."
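To see how much that narrowing hides, here is a small sketch in Python. The task durations and the 20-task project are invented for illustration; the point is the contrast between the single summarized number and the range the same raw data actually supports.

```python
# Sketch: how collapsing a range of data to one summary number hides the spread.
# Task durations (days) and the 20-task project are invented for illustration.

import random
from statistics import mean, quantiles

random.seed(7)

observed_durations = [2, 3, 3, 4, 4, 5, 5, 6, 8, 14]  # historical single-task durations
tasks_in_project = 20

# Deterministic narrowing: collapse the history to one number and scale it up.
point_estimate = mean(observed_durations) * tasks_in_project
print(f"Point estimate: {point_estimate:.0f} days")

# Keeping the range: resample whole projects from the same history (a simple bootstrap).
totals = [
    sum(random.choice(observed_durations) for _ in range(tasks_in_project))
    for _ in range(10_000)
]
deciles = quantiles(totals, n=10)  # nine cut points: 10th through 90th percentile
print(f"Range the data supports: roughly {deciles[0]:.0f} to {deciles[8]:.0f} days "
      f"(median about {deciles[4]:.0f})")
```

The single estimate isn't wrong so much as it is silent about everything the raw data was still willing to tell us.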
Next, add in the delay incurred from the moment the work produces data to the moment any action is taken. Finally, notice that the lead time on the decision has chewed up all our wiggle room: we're left with no tolerance for errors because we have no time to go back and fix things. Worse, because we have left most of our options on the cutting room floor, we've reduced our tolerance even further. You can see why deterministic methods are so riddled with risks and induced uncertainty.
At issue is not the outright use of deterministic decision-making methods. Deterministic reasoning has its place, and it's an important place. Deterministic methods will always be needed. But let's limit our use of them only to where and when they're needed and not for every little decision.
Clearly, I'm driving toward methods that are a better fit than chewing up executive runtime for all these day-to-day decisions. These methods are probabilistic in nature.
I mentioned some examples of day-to-day decisions earlier in this Update. The broken equipment? That could have been prevented, or at least contained so that it wouldn't be a disruption. The new opportunity? How much would it have been worth to know with near absolute certainty that not only could you safely cut back on particular projects to fit it in, but that you would deliver the new work early and start earning immediate revenue? The mysterious broken process? What if the cause was so obvious that it was never a mystery and how to fix it was never in question?
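As a taste of what it could look like to let the measured behavior of the operation answer that question, here is a hedged sketch in Python: a Monte Carlo simulation that replays (invented) historical weekly throughput to estimate the odds of delivering a 60-item piece of new work within 12 weeks. The throughput history, backlog size, and deadline are all assumptions for illustration.

```python
# Sketch: estimating delivery odds directly from the process's observed behavior.
# All numbers are invented for illustration.

import random

random.seed(42)

# Items completed per week by the team in recent history.
weekly_throughput_history = [4, 6, 5, 3, 7, 5, 4, 6, 5, 2, 6, 5]

backlog_items = 60     # size of the proposed new work
deadline_weeks = 12    # when it needs to be delivered
trials = 20_000

on_time = 0
for _ in range(trials):
    completed = 0
    for _ in range(deadline_weeks):
        completed += random.choice(weekly_throughput_history)  # replay a plausible week
    if completed >= backlog_items:
        on_time += 1

print(f"Estimated probability of delivering {backlog_items} items "
      f"in {deadline_weeks} weeks: {on_time / trials:.0%}")
```

No lengthy analysis cycle is required: the process's own numbers say how good the gamble is, and they say it before the opportunity goes stale.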
One way to characterize probabilistic decision making is the ability to base decisions on the quantitative behavioral performance of the operation rather than on analysis and intuition. In other words, let the process tell you what to do. Probabilistic methods prepare us for a wider variety of options and circumstances. They tend to shape how things will go moving forward and allow us to absorb the unexpected, which in turn leaves us unaffected by a wider range of unexpected outcomes. For now, I'll give you a hint: there's more to Lean than reducing waste and empowering the workforce. Probabilistic decision making enables these and many other hallmarks of Lean. In my next Update, I'll show you what that looks like.