There is a sentence you hear surprisingly often in companies: "We work data-driven." It sounds good. Rational, modern, professional. It suggests a world where decisions don't arise from gut feeling or hierarchy, but from measurable facts. In reality, however, this sentence often means something entirely different.
The real problem: Interpretation is not neutral
I work a lot with data professionally. Spreadsheets, dashboards, campaign evaluations, reports, presentations. Ultimately, these are all attempts to reduce complex processes to a series of numbers so that one can at least roughly understand what works and what doesn't. And exactly here begins the real problem.
Numbers appear objective. But the stories we tell about them rarely are. Data is not manipulated, at least not in most cases. It is interpreted. And interpretation is initially completely legitimate, because without it, data would just be a collection of columns. Only when you put it into context, establish connections, and compare periods does a statement emerge.
The problem: The transition between analysis and window-dressing is shockingly fluid. And it rarely begins with a grand intention to deceive, but with a very banal goal: The numbers need to work.
How it happens, step by step
Perhaps a client expects growth. Perhaps a team has developed a campaign into which a lot of time and budget has flowed. Perhaps a manager has already communicated internally that a certain strategy will be successful. If reality then does not quite align with this expectation, a process begins that outwardly still looks like analysis, but internally increasingly becomes a search for a fitting representation:
- You shift the observation period a little because the first month was unusually weak.
- You decide that a certain metric is actually not the right one and replace it with one that looks better.
- You remove a campaign from the comparison group because it was "not representative".
- Or you explain that a certain effect must be thought of long-term and is therefore not yet visible.
All of this can be justified argumentatively. And that is exactly why it works so well.
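The first of those moves is easy to make concrete. A minimal sketch, with entirely invented numbers, of how shifting the observation window changes the headline figure without touching a single data point:

```python
# Hypothetical monthly revenue figures; month 1 was "unusually weak".
monthly_revenue = [40, 102, 104, 106, 108, 110]

def average(series):
    """Average monthly revenue over the chosen window."""
    return sum(series) / len(series)

full = average(monthly_revenue)        # all six months
shifted = average(monthly_revenue[1:])  # month 1 declared "not representative"

print(f"Full period:    {full:.1f}")    # 95.0
print(f"Shifted window: {shifted:.1f}") # 106.0
```

Nothing here is falsified; the same dataset simply produces a figure roughly twelve percent higher once the inconvenient month is argued out of the comparison.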
It almost always happens collectively
The truly interesting part is not that such things happen, but how they happen. Rarely does someone sit alone in front of a dataset and consciously decide to manipulate it. Much more often, several people sit in a room, look at the same numbers, and collectively develop an interpretation that feels plausible enough to everyone involved to write it into a presentation. And as soon as it's there, something strange happens.
A number in a presentation quickly becomes reality. It becomes part of a narrative. And narratives have astonishing stability in organizations because they don't just describe facts, they also define roles: Who chose the right strategy? Who generated growth? Who did a good job? Once this narrative is established, it becomes very difficult to reel it back in, even if all participants know that the original interpretation was at least generously formulated. Because whoever speaks up and says that the data might suggest something else after all questions not only the numbers but also the story that has formed around them. And stories often carry more weight in organizations than numbers.
A common scenario: Handling incomplete data
A typical phenomenon often occurs when measures need to be evaluated for which no seamless tracking was set up from the start. You are faced with the task of evaluating success, but the data foundation is incomplete. Some metrics are simply missing or were not captured in a technically valid way. In an ideal world, one would transparently communicate at this point: "We cannot reliably evaluate this specific aspect because we lack the historical data."
In practice, however, a different reflex often emerges, especially when expectations have to be met. People begin to close the gaps. Missing values are replaced by assumptions, rough projections, or general benchmarks. Eventually, highly precise-looking numbers appear on a slide, projecting a certainty that the actual data simply does not support.
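This gap-filling can also be sketched in a few lines. The figures and the benchmark below are invented for illustration; the point is only how an assumption silently becomes part of a precise-looking total:

```python
# Hypothetical: tracking was missing for two of four months, and the gaps
# are filled with a generic benchmark instead of being reported as unknown.
measured = {"Jan": 1200, "Feb": 1350, "Mar": None, "Apr": None}  # None = no tracking
BENCHMARK = 1300  # an industry average, not a measurement

filled = {month: (v if v is not None else BENCHMARK) for month, v in measured.items()}
total = sum(filled.values())

print(total)  # 5150 -- looks precise, but half of it is assumption
```

On the slide, only the 5150 survives; the distinction between measured and assumed values is gone.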
I have observed this dynamic in various agencies over the years. The spectrum ranges from the supposedly harmless filling of gaps to the completely brazen, retroactive alteration of numbers. The driving force behind this is often a fatal misjudgment: the unspoken assumption that the client is not savvy enough to question the calculations.
Just how risky this arrogance is became particularly clear in one case. A client, who was analytically much deeper into the subject than the agency team had assumed, simply gained access to the raw data herself. When comparing it with the smoothed-out report, she rightfully pointed out that the presented numbers mathematically simply could not be correct. Trust was irreparably damaged in that moment. If the story is prioritized over the data, the construct collapses as soon as someone actually does the math.
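The check the client effectively performed needs no special tooling. A sketch, again with hypothetical numbers, of recomputing a reported total from a raw export:

```python
# Recompute the reported figure from the raw data and see whether it can be true.
raw_conversions = [310, 295, 330, 305]  # hypothetical raw export, four weeks
reported_total = 1450                   # figure from the smoothed report

recomputed = sum(raw_conversions)
if reported_total != recomputed:
    print(f"Report claims {reported_total}, raw data sums to {recomputed}")
```

The moment anyone with access to the raw data runs this kind of comparison, a smoothed report has to survive simple arithmetic, and often it cannot.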
The creeping effect: You begin to believe yourself
Over time, something almost ironic happens. People begin to believe their own representation. What was initially just a benevolent interpretation eventually becomes the official version of reality. And because this version is repeated again and again—in meetings, reports, and presentations—it appears more stable than the original data from which it emerged.
The problem rarely shows itself immediately. In the short term, this practice even works well. The presentation convinces, the client is satisfied, nobody has to explain why an expensive project didn't have the hoped-for effect. In the long term, however, this data culture has a very unpleasant side effect: It decouples decisions from reality. Budgets are allocated, campaigns scaled, entire departments align their work with metrics that were originally just a favorable interpretation of a rather average result. At some point, an allegedly successful model can no longer be reproduced. Then the great guessing game begins. Yet the actual cause is much more banal: At some point, people stopped looking honestly at the numbers.
What "data-driven" really means
What amazes me most about this dynamic is not that it exists; people simply tend to present things in a favorable light. It is the stubbornness with which organizations simultaneously label themselves "data-driven" while in practice doing everything to adapt the data to an existing narrative. The word itself is the problem. "Data-driven" sounds like discipline, like a process that is larger than individual opinions or hierarchies. It promises that decisions do not depend on who argues the loudest or holds the highest title, but on what the numbers say.
That is an attractive promise. But it only works if you are willing to accept the result even when it is uncomfortable. Truly working data-driven means something different than what most companies understand by it. It does not mean using data to illustrate decisions that have already been made. It means:
- Enduring numbers that you do not like.
- Accepting campaigns as a failure, even if a lot of work went into them.
- Correcting strategies, even though you presented them with full conviction just a few weeks prior.
This requires an organizational culture in which bad news is not punished. Because the real problem rarely lies with the individual analysts or the teams preparing data. It lies in the incentives that organizations set. Whoever openly names failures risks being seen as a pessimist. Whoever communicates successes is rewarded. In such a system, the temptation to interpret data in the right direction is not a character flaw. It is a rational reaction to an irrational environment.
True data culture therefore does not begin with better dashboards or more sophisticated analysis software. It begins with the question of whether an organization is actually capable of reacting to unpleasant truths without punishing the messenger. Above all, it means distinguishing between analysis and self-deception, which is more difficult than it sounds, because the transition is fluid and nobody in the room has an interest in naming it clearly. As long as that capacity is missing, "data-driven" remains a label that you give yourself without wanting to bear the consequences.
Data manipulation (more precisely: the manipulation of interpretation) rests on a simple misconception: that reality is negotiable. That if you just look long enough, you will find a representation that does justice to both the numbers and the expectations. You will not. Reality is not up for negotiation, even if you can treat it that way for a while. Sooner or later it reports back. Not as a moral problem, but as a practical one. As a budget that shows no effect. As a strategy that cannot be reproduced. As a question to which you no longer have an honest answer, because you have not asked it honestly in too long.