When a Market-Research Brief Should Become a Financial Model

Researchers and modellers are good at different things. Five signals that your research engagement has hit its boundary, why the handover is where most of the value leaks, and how to commission both well.

By , Principal Consultant & Founding Director · 6 min read

A founder commissions market research. The brief is reasonable: understand the addressable market, sketch the competitive landscape, surface the three or four buyer archetypes, and come back with a short presentation. The researcher delivers something usable. The founder thanks them, sends them a bottle of wine, and files the deck.

Two months later, an investor asks a question that the research deck cannot answer. The question is, "what does this business look like at scale, if we are right about all the things we think we are right about?" The deck shows the size of the prize; it does not show the path to the prize. The founder needs a financial model.

The mistake the founder makes, almost universally, is to commission the model from the same researcher. The mistake the researcher makes, almost universally, is to accept. The result is a spreadsheet that is plausible, defensible, and quietly wrong, because researchers and modellers are good at different things and the moment you ask one to do the other's job, the work degrades.

This piece is about how to spot the boundary. When does a market-research brief stop being a research question and start being a modelling question? The signals are clearer than you might think.

What research is actually for

A market-research engagement, done well, produces five things:

  1. An estimated addressable market, with the methodology written down.
  2. A segmentation of buyers, with the dimensions named (size, geography, sector, urgency, willingness to pay).
  3. A competitor map, with positioning differences described in the buyer's language.
  4. A small number of named risks or constraints (regulatory, technical, behavioural).
  5. An evidenced recommendation about where the easiest first dollar is.

That is the output. It blends qualitative and quantitative evidence, and it is descriptive and decision-oriented. It does not tell you what your business will be worth in five years; it tells you what the world you are entering looks like today and what you need to believe to win in it.

What a financial model is actually for

A financial model is a different artefact entirely. Its purpose is to take a set of assumptions, propagate them through time, and produce a defensible projection of cash flow, P&L, balance sheet, and the sensitivity of each to changes in the assumptions.

A well-built model is therefore an engine for asking "what would have to be true for this to work?" rather than "what do we think is true today?" The good news is the engine is reusable. The bad news is that it is only as good as its inputs, and most of those inputs come from research.

So the two artefacts are sequential, not interchangeable. Research feeds the model. The model challenges the research. Done in the right order, they sharpen each other.

Five signals that you have hit the boundary

Here are the five signals we look for with our advisory clients that tell us a research brief has done its job and the work has crossed into modelling territory.

Signal one: the investor question. Someone (an investor, a board member, a senior peer) has asked a question that requires propagating an assumption through time. "If we capture 5% of the market in three years and price holds, what is the EBITDA in year five?" That is not a research question. The research can tell you whether 5% is plausible; only a model can tell you what 5% does to your P&L.
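To make "propagating an assumption through time" concrete, here is a minimal sketch of what that investor question asks a model to do. Every figure in it (market size, margin, cost base) is an illustrative placeholder, not a research output:

```python
# Hypothetical sketch: propagating a market-share assumption through time.
# All figures are illustrative placeholders, not research outputs.
market_size = 200_000_000        # total addressable market, in dollars
target_share_y3 = 0.05           # the "5% in three years" assumption
gross_margin = 0.70
opex = 4_000_000                 # assumed flat operating cost base

ebitda = {}
for year in range(1, 6):
    # linear ramp to the target share by year three, then hold flat
    share = target_share_y3 * min(year / 3, 1.0)
    revenue = market_size * share
    ebitda[year] = revenue * gross_margin - opex

print(f"Year-5 EBITDA: {ebitda[5]:,.0f}")
```

The research can defend the 5% and the margin; only the loop can tell you what they do to year five.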

Signal two: the unit economics question. You find yourself asking "what is the contribution margin per customer at steady state?" or "what does the payback period look like at our current CAC?" These are not researchable in the field. They are arithmetic on a set of assumptions, several of which the research has informed.
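The arithmetic behind those two questions is short, which is exactly the point: the difficulty is in the assumptions, not the sums. A sketch with placeholder numbers:

```python
# Hypothetical unit-economics arithmetic; every input is an assumption
# the research can inform but cannot directly measure in the field.
monthly_price = 100.0
variable_cost_per_customer = 30.0   # cost to serve, per customer per month
cac = 420.0                          # blended customer acquisition cost

# contribution margin per customer at steady state
contribution_per_month = monthly_price - variable_cost_per_customer

# payback period at the current CAC, in months
payback_months = cac / contribution_per_month
```

Two lines of division; months of work to source the three inputs honestly.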

Signal three: the scenario question. You want to explore "what if?" comparatively. What if pricing is 20% lower than we hoped? What if churn is 50% higher? Research can tell you what each input is likely to be; only a model lets you stack and combine them.
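Stacking is the operative word. A sketch of what a model does with two downside assumptions at once, using invented base-case numbers:

```python
# Hypothetical scenario stack: each downside compounds on the base case.
# The combination is the thing research alone cannot produce.
base = {"price": 100.0, "monthly_churn": 0.02, "customers": 1_000}

def annual_revenue(price, monthly_churn, customers):
    """Sum twelve months of revenue as the base decays by monthly churn."""
    total = 0.0
    for _ in range(12):
        total += customers * price
        customers *= (1 - monthly_churn)
    return total

base_case = annual_revenue(**base)

# Stack both downsides at once: price 20% lower, churn 50% higher.
stressed = annual_revenue(price=base["price"] * 0.8,
                          monthly_churn=base["monthly_churn"] * 1.5,
                          customers=base["customers"])
```

The stressed case is not the sum of the two individual hits; churn compounds against the lower price month by month, and only running the combination shows the size of the gap.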

Signal four: the funding-need question. "How much do we need to raise, and when?" This is fundamentally a model question. The research tells you the shape of the opportunity; the model tells you the cash trajectory and therefore the dilution.
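The cash-trajectory calculation at its crudest is a division, sketched here with placeholder figures; a real model would let the burn vary month by month, which is where it earns its keep:

```python
# Hypothetical runway arithmetic with a flat monthly burn.
# A real model varies the burn over time; this is the degenerate case.
opening_cash = 1_500_000
monthly_burn = 125_000

months_of_runway = opening_cash // monthly_burn

# A common rule of thumb is to start the raise well before the zero-cash
# month; six months is an assumed buffer here, not advice.
raise_by_month = months_of_runway - 6
```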

Signal five: the irreversibility question. You are about to make a decision that is hard to reverse: a hire, a long-term contract, a market entry, a product line. The cost of that decision being wrong is high enough that you want to see it stress-tested. Models stress-test; research does not.

If any one of these signals shows up in a working conversation, the engagement has moved. If three or more do, the engagement should have moved already and you are paying the wrong people the wrong way.

The handover, done well

The handover from research to modelling is where most of the value leaks out, and it is also the easiest stage to get right with a little discipline.

The researcher should produce, in addition to the deck, a short written data appendix: the named assumptions, the source for each, the confidence band, and the methodology used to estimate it. Three to five pages, plain prose, no charts. This is the input feedstock for the model.

The modeller should produce, before they build, a one-page architecture document: the structure of the model, the named inputs (matching the researcher's appendix), the time horizon, the granularity, and the named outputs. The researcher reads this and signals where the model would be using their work in ways their work cannot support.

That conversation, half an hour at most, is the conversation that saves the founder from owning a model that is structurally wrong. We have lost count of the times we have seen modelling work begin without it; we have rarely seen modelling work go wrong when it happens.

Where this matters most: pre-Series-A

The boundary between research and modelling matters most at pre-Series-A, when the founder is preparing for a raise. Investors at that stage have seen hundreds of decks; they can spot the deck where a researcher modelled the unit economics with a confidence their evidence did not support. They can also spot the model where a modeller has invented the market sizing because they wanted the chart to be tidy. Either pattern lowers the founder's credibility before the conversation has properly started.

The pattern that lifts credibility is the opposite: a researcher's deck that names what it cannot say, a modeller's projection that traces back every line to a sourced assumption, and a founder who can move fluently between them. That founder, in our experience, raises faster and on better terms than the one who tried to make one artefact do both jobs.

When to call us

Two paragraphs of bias before we finish. We work on both sides of this boundary, with two heads: GIVE Consultancy when the question is qualitative and decision-shaped, GIVE Analytics when the question is quantitative and modelling-shaped. We are deliberately one team across two brands because the boundary problem this piece describes is the most common failure mode we see in early-stage founders' work.

If you are about to commission research, or just have, and you suspect a modelling question is forming in the background, get a second pair of eyes on the brief before the research starts. It costs little, it slows nothing down, and it can save a quarter's worth of badly spent advisory budget. Worth a 30-minute call.