
McNamara's Error and Modern ML Systems

AI Strategy

CloudExplain Team

July 23, 2025 · 8 min read

McNamara's Rise and His Approach

During the Vietnam War, Secretary of Defense Robert McNamara was obsessed with numbers. The fixation traced back to Harvard Business School, where he developed his love for statistics first as a student and later as the school's youngest assistant professor at age 24, and to his role as one of Ford Motor Company's famous "Whiz Kids," where statistical analysis helped revolutionize American automotive manufacturing. But in the jungles of Vietnam, this quantitative worldview proved far more problematic: anything he could not quantify, he did not deem important. Desmond FitzGerald, the CIA's third-ranking official, who briefed McNamara weekly on Vietnam, was unsettled by his relentless drive to quantify everything and to view the war entirely through endless statistics. When FitzGerald told him bluntly that most of the statistics were meaningless and that the situation "just didn't smell right," McNamara's response was to demand even more data. He famously praised a general who switched mid-report from a qualitative briefing to translating everything into numbers and percentages. One of the most important KPIs became the body count: maximizing casualties on the enemy side while minimizing their own became the objective function, with military success reduced to what they called the "net body count" - total enemy casualties less US casualties.

Reinterpreting McNamara's Fallacy

The traditional interpretation of McNamara's fallacy focuses on his preference for quantitative data over qualitative insights, the idea that measurable factors are inherently more valuable than unmeasurable ones. But I believe this misses a deeper problem: McNamara wasn't just ignoring qualitative data, he was solving the wrong problem entirely. His real challenge wasn't measuring progress in the war, it was that "winning the war" itself was poorly defined and possibly unmeasurable. Faced with this impossibility, he did what many leaders do: he substituted a metric he could measure (body count) for the outcome he actually wanted (victory). This wasn't simply a case of choosing numbers over intuition; it was a case of metric-target misalignment, where the thing you can measure becomes a proxy for the thing you actually care about.

Metric-Target Misalignment in Modern ML Systems

This same pattern of metric-target misalignment pervades modern machine learning systems. Just as McNamara substituted body count for victory, we routinely build ML models that optimize for measurable proxies rather than our true objectives.

Predictive maintenance systems: What we really want is to eliminate unexpected equipment failures and minimize downtime costs. But "eliminating failures" is complex and multifaceted - it involves understanding root causes, optimizing maintenance schedules, and sometimes accepting that certain failures are more cost-effective to handle reactively. Instead, we build models that predict when equipment will fail, treating prediction accuracy as success. A model with 95% accuracy in predicting failures might seem excellent, but if it doesn't help maintenance teams understand why failures occur or how to prevent them most cost-effectively, we've optimized for the wrong thing.
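The trap behind a headline accuracy number is easy to reproduce. Here is a toy sketch (all numbers are made up for illustration): if only 5% of machines actually fail, a "model" that never predicts a failure still scores 95% accuracy while catching nothing.

```python
# Hypothetical illustration: with rare failures, a useless model can
# still score high accuracy. Labels: 1 = failure, 0 = healthy.
actual = [1] * 5 + [0] * 95           # 5% of machines actually fail

# "Model" that simply predicts no machine will ever fail
predicted = [0] * 100

correct = sum(a == p for a, p in zip(actual, predicted))
accuracy = correct / len(actual)
print(f"accuracy: {accuracy:.0%}")    # 95% - looks excellent on paper

caught = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
print(f"failures caught: {caught}")   # 0 - worthless for maintenance
```

The metric looks excellent precisely because it ignores the outcome the maintenance team actually cares about.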

Customer churn prediction: Our actual goal is to retain valuable customers and grow revenue, which requires understanding the complex drivers of customer satisfaction and loyalty. But we measure success by how accurately we can predict which customers will leave. A churn model might perfectly identify at-risk customers, but if the insights don't translate into actionable retention strategies, we've built an impressive predictor for a proxy metric while failing at our real objective.

Fraud detection systems: What we truly want is to minimize financial losses from fraud while maintaining customer trust and minimizing friction for legitimate users. Yet we typically measure success by detection rates and false positive rates. A fraud model might catch 99% of fraudulent transactions, but if it creates so much friction that legitimate customers abandon their purchases, or if it focuses on easily detectable low-value fraud while missing sophisticated high-value attacks, we've optimized for detection metrics rather than business outcomes.
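The fraud example can be made concrete with a back-of-the-envelope cost model. The function and all parameter values below are illustrative assumptions, not real figures: it prices missed fraud against legitimate purchases abandoned due to false alarms.

```python
# Hypothetical numbers: compare two fraud models by detection rate
# versus total business cost (fraud losses + abandoned purchases).
def business_cost(detection_rate, false_positive_rate,
                  fraud_txns=100, legit_txns=10_000,
                  fraud_loss=500.0, abandoned_sale=50.0):
    """Expected cost: missed fraud plus legitimate customers
    driven away by false alarms (illustrative assumptions)."""
    missed_fraud = (1 - detection_rate) * fraud_txns * fraud_loss
    lost_sales = false_positive_rate * legit_txns * abandoned_sale
    return missed_fraud + lost_sales

# Model A: catches 99% of fraud but blocks 5% of legitimate buyers
cost_a = business_cost(0.99, 0.05)
# Model B: catches "only" 90% of fraud with 0.2% false positives
cost_b = business_cost(0.90, 0.002)

print(f"model A: {cost_a:,.0f}")  # the "better" detector costs more
print(f"model B: {cost_b:,.0f}")  # lower detection, lower total cost
```

Under these assumptions the model with the higher detection rate is roughly four times more expensive for the business - exactly the metric-versus-outcome gap described above.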

In each case, the measurable proxy becomes the target, and we lose sight of what we actually wanted to achieve - just as McNamara's focus on body count obscured the real question of whether the war strategy was working.

Breaking Free from Proxy Metrics

Metric-target misalignment occurs for practical reasons: time pressure pushes teams toward easier-to-model proxies, and limited data constrains our options. Building an ML model that tells you whether to offer customers a 10%, 20%, or 30% discount requires granular behavioral and outcome data that most organizations simply don't have. So we settle for predicting churn instead of optimizing retention strategies.

The solution lies in shifting our focus from "what predicts outcomes?" to "what actually causes them?" Machine learning models excel at discovering patterns in complex data, and modern AI tools can help us interpret these patterns to generate business insights. However, identifying patterns alone isn't enough - we need to understand which factors actually drive the outcomes we care about, not just which ones happen to occur together. This means building models that map out the true cause-and-effect relationships behind business results.

This approach enables something powerful: the ability to simulate the results of different actions before implementing them. Instead of running lengthy A/B tests to see if a retention campaign works, we can use causal models to predict its impact and iterate rapidly through different strategies. This transforms decision-making from reactive measurement to proactive experimentation, allowing us to test ideas in weeks rather than months. Just as McNamara might have benefited from understanding what actually led to military success rather than just counting bodies, modern organizations can escape their own proxy metric traps by building systems that reveal the true drivers of their objectives.
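What "simulating an action before implementing it" can look like in miniature: a hand-specified structural model in which a discount raises satisfaction, which raises retention. Every coefficient here is a made-up assumption for illustration, not a fitted value or CloudExplain's actual method - in practice these relationships would have to be learned from data.

```python
# Sketch of simulating an intervention with a hand-specified causal
# model (all coefficients are illustrative assumptions, not fitted).
import random

random.seed(0)

def simulate_customer(discount):
    """Toy structural model: discount -> satisfaction -> retention."""
    base_satisfaction = random.gauss(0.6, 0.1)
    satisfaction = min(1.0, base_satisfaction + 0.5 * discount)
    retention_prob = min(1.0, 0.3 + 0.6 * satisfaction)
    return random.random() < retention_prob

def retention_rate(discount, n=10_000):
    """Monte Carlo estimate of retention under a given campaign."""
    return sum(simulate_customer(discount) for _ in range(n)) / n

# Compare candidate retention campaigns before running a live test
for d in (0.0, 0.1, 0.2):
    print(f"discount {d:.0%}: simulated retention {retention_rate(d):.1%}")
```

The point is not the toy numbers but the workflow: once the causal structure is encoded, candidate strategies can be compared in seconds rather than waiting weeks for each A/B test to conclude.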

CloudExplain's Solution

At CloudExplain, we help organizations break free from these proxy metric traps by making AI models transparent and actionable. We're building a platform that combines explainable AI with causal analysis to reveal not just what your models are predicting, but why those predictions matter for your business outcomes. Instead of building black-box models that optimize for easy-to-measure proxies, we help you understand the true cause-and-effect relationships driving your results. This enables rapid simulation of different business strategies - for example, testing retention campaigns, pricing changes, or operational improvements virtually before implementation - transforming months of trial-and-error into weeks of data-driven experimentation.

TLDR

  • During Vietnam, McNamara obsessed over body counts instead of actual victory. Modern ML makes the same mistake: we build churn models that predict who leaves but don't help retain customers.
  • The Problem: We optimize for measurable proxies (prediction accuracy) instead of real goals (business outcomes).
  • The Solution: Focus on causal understanding. At CloudExplain, we build a platform combining causal analysis + explainability to reveal what actually drives results and simulate strategies before implementation.

Caught your interest? Get in touch with us.

Disclaimer: This blog post was drafted with the help of a language model, but all opinions expressed are my own.
