When Marketing Performs for the Wrong Audience
April 15, 2026
The real measurement problem in health marketing isn't what you're tracking. It's why you're tracking it.
Gregory Ng | EVP, Decision Science


A few years ago, a marketing leader told me something I haven't forgotten. Her organization had purchased a billboard. Not because the location indexed well against their target audience. Not because the market research suggested an awareness gap in that corridor. Because a board member drove that route to work every morning.
The thinking was straightforward: if he sees our name every day on his commute, he'll believe the marketing is working. And if he believes the marketing is working, we keep our budget. Nobody said that out loud, of course. But everyone in the room understood it.
I've told that story to marketing leaders across industries, and the response is almost always the same: a knowing laugh followed by a slightly uncomfortable silence. Because most of them have a version of it. Maybe not a billboard. Maybe a TV spot that ran in a market that didn't make strategic sense. A campaign that launched for a service line that wasn't ready. A report metric that got elevated to the dashboard because an executive asked about it once…two years ago…and it was hardcoded into the template ever since.
This is marketing performing for an internal audience, not an external one. I think of it as internal audience bias: marketing is shaped less by the people you’re trying to reach and more by the people you’re trying to convince. Then, you design measurement to confirm this setup.
In health and wellness organizations, this pattern has a particular texture. Service line priorities get set at the executive or board level, often driven by legitimate strategic intent. It may be a new physician hire, a facility investment, or a competitive threat. But somewhere between the boardroom and the media plan, the hypothesis gets lost.
The question stops being "who is most likely to need this service, where are they, and what would move them?" and starts being "how do we show meaningful reach and activity against this priority?"
Those are very different briefs. And they produce very different campaigns. The first brief produces targeting decisions. The second produces volume decisions. And volume, in marketing, is the enemy of attribution.
When, say, orthopedics is the priority, and marketing responds by messaging the entire database or buying broad demographic targeting across mass channels, it feels defensible. “We reached a lot of people!” But it is nearly impossible to connect to an outcome. Did those people need orthopedic care? Were they in a moment of active consideration? Were they even in your service area?
Impression served.
Box checked.
Reach reported.

The Decision Science team at MERGE has worked with hundreds of organizations, and here's what organizations that have broken this pattern do differently:
Critically, they build the attribution infrastructure before the campaign launches, not after. They know how a person who sees their ad will eventually raise their hand (whether that's a specific phone number, an online scheduling path, or a referral pattern), and they instrument that path before the spend begins.
This is what I described in a recent piece on next best action: the real work isn't in having data; it's in building the connective tissue between what the data shows and what the organization does next. That same principle applies here, one layer upstream. Before you can act on insight, you need campaigns that are designed to produce it.
When that foundation is in place, organizations can actually answer the question that most health marketing teams cannot: did this campaign bring patients in the door?
The billboard story is funny because it's absurd. But the more sophisticated version of it, campaigns built around internal optics, metrics chosen for their ability to show green, reporting designed to satisfy stakeholders rather than surface truth, isn't absurd at all. It's the downstream effect of internal audience bias. It's extremely common. And it compounds quietly over time.

When measurement is designed to confirm rather than to learn, organizations stop getting smarter. Every campaign cycle reinforces the same assumptions. The same audiences. The same channels. The same reporting cadence that makes everyone feel good about work that may or may not be producing results.
The organizations I see breaking out of this are the ones willing to ask an uncomfortable question before the plan gets written: what are we actually trying to prove, and how will we know if we proved it?
That question forces a true hypothesis with a measurable outcome attached. It's what separates marketing that performs from marketing that produces. It's what makes targeting decisions defensible to a skeptical CFO. It's what turns a CRM from an email tool into an attribution engine. And it's what eventually makes the case for the budget, the headcount, and the seat at the table that every marketing leader is trying to hold onto.
And it starts long before the campaign launches. Before the channel mix. Before the creative brief. It starts with being honest about who the marketing is actually for.
If your reporting consistently looks good but you can't connect it to patient volume, referrals, or service line growth, the gap probably isn't in your execution. It's in how the priorities were set and how success was defined before any of the work began. Start there.
Audit not just what you're measuring, but why those metrics were chosen and whether they were ever designed to prove what actually matters. That audit usually reveals three things worth addressing in sequence.
First, a prioritization framework that puts the hypothesis before politics. Not a rejection of leadership input (service line priorities from the executive level are legitimate and important), but a structured process that translates those priorities into testable assumptions before committing a dollar of media spend. Who is the audience? What behavior are we trying to change? What does success look like in measurable terms? These questions must have answers before the brief is written, not after the campaign runs.
Second, an attribution model that connects marketing activity to clinical or business outcomes. This doesn't require a complete technology overhaul. It requires an honest map of where the measurement chain currently breaks down, where a prospective patient disappears between seeing your message and walking through a door, and a deliberate plan to close those gaps. In many cases, organizations already have the tools. What's missing is the architecture that connects them.
Third, a business case that makes the investment defensible upward. The honest truth is that more precise targeting, better attribution infrastructure, and hypothesis-driven campaign design all cost something. The cost is in time, resources, and organizational will. The leaders who successfully make this case do so not by arguing for better marketing, but by quantifying the cost of the current approach. What does a year of unmeasured, broadly targeted campaigns against the wrong audiences actually cost in wasted spend and missed results? That number, once visible, tends to make the conversation much easier.
This is exactly the work the Decision Science team at MERGE does with health and wellness organizations. We find that marketing teams struggle because their campaigns were never designed to prove anything. Not to drive a specific behavior. Not to reach a clearly defined audience. Not to produce a measurable outcome.
When our team runs these audits, we find most campaigns were designed to show activity, demonstrate reach, and reassure the room. And that's the difference. When campaigns are grounded in real hypotheses, real audiences, and real paths to action, measurement becomes proof. The work we do is not to audit your creative or renegotiate your media mix, but to build the framework, the attribution model, and the business case that turn scattered marketing activity into a coordinated, measurable growth engine. The kind where leadership trusts the numbers because the numbers were designed to earn that trust.
If you're ready to start asking the harder questions about what your marketing is actually producing, let's talk!