If it's stupid and it works, it isn't stupid
I understand. If you drive home snot-slinging drunk, but make it, it wasn't a stupid decision.
The argument in the article rests on three logical flaws that are common in analyses of military engagements.
The first flaw is attributing the outcome of an event with many factors to a single cause. While there are thousands (or more) of factors driving the outcome of a typical military engagement, there are usually a few prime drivers of the high-level outcome. We know this, at least intuitively. But we tend to overextend that simplification and settle on a single prime driver of the outcome. Usually there are a few prime drivers of roughly equal weight, often with clusters of secondary drivers that in combination can override a prime driver. And there is always the possibility of a large number of small influences having a major aggregate effect.
Given that many of those factors are uncontrollable, we get the second flaw – the idea that if it worked once, it will work again. We like to take scientific approaches to the analysis of military events (which is good for me, because I had a decent, paying career doing that). In doing so, we must remember that we are analyzing only one execution of an event. In science we generally analyze large numbers of events so that we can be confident we have accounted for uncontrollable and unknown factors. When we look at a single instance, we need different tools and must draw different kinds of conclusions.
I just mentioned the third flaw – what we know. Or, more accurately, what the people whose decisions we are judging knew. When we evaluate a military decision, we need to assess what the decision maker(s) knew, what they should have understood about the problem, and what they considered to be the major factors influencing the decision. Again, this leads to a truism that we have a hard time applying in analytics – you can do good things and have bad outcomes, and you can do bad things and have good outcomes.
Simply put, the one-time outcome of a decision isn't a good metric for the quality of that decision.
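To make the point concrete, here is a minimal Monte Carlo sketch in Python. The decision names and success probabilities are invented purely for illustration: a sound decision that usually works and a bad one that sometimes works anyway. Any single trial can flip the apparent ranking; only repeated trials reveal the underlying quality of each decision.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

# Hypothetical decisions: name -> probability of a good outcome.
# These numbers are invented for illustration only.
DECISIONS = {
    "sound plan": 0.80,        # a good decision that usually works
    "drunk drive home": 0.30,  # a bad decision that sometimes works anyway
}

def one_trial(p_success: float) -> bool:
    """Simulate a single execution of the decision."""
    return random.random() < p_success

def success_rate(p_success: float, n: int) -> float:
    """Estimate decision quality from n independent executions."""
    return sum(one_trial(p_success) for _ in range(n)) / n

for name, p in DECISIONS.items():
    single = one_trial(p)               # what history hands us: one outcome
    long_run = success_rate(p, 10_000)  # what repeated trials would show
    print(f"{name:18s} single outcome: {'success' if single else 'failure'}, "
          f"long-run success rate: {long_run:.2f}")

# A single outcome can easily rank the drunk drive above the sound plan;
# the long-run rates recover the actual quality of each decision.
```

The design point of the sketch is the gap between the two columns it prints: the single-outcome column is what the article is judging decisions by, while the long-run column is what the quality of the decision actually looks like once the uncontrollable factors average out.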