Product-market fit (PMF) is the point at which a product has found a group of customers who genuinely need it, use it repeatedly, and would be meaningfully worse off without it.
That sounds clear. In practice, it's one of the most misread signals in a product's lifecycle. Most founders either think they have PMF when they don't, or have had it for six months without recognising it.
The vanity metrics trap
The metrics that feel like product-market fit but aren't:
- Sign-ups or downloads. Measures awareness and marketing effectiveness. Tells you nothing about whether people find the product valuable.
- Total active users. Doesn't distinguish between engaged users and people who've essentially abandoned the product but haven't cancelled yet.
- Revenue growth. Can be hiding terrible retention — growth through acquisition masks churn until it can't anymore.
- Press coverage and inbound interest. People are curious. Curiosity and commitment are different things.
None of these are useless. All of them can mislead if you mistake them for evidence of fit rather than indicators of something else.
Leading indicators that actually matter
Retention curves that flatten. If you plot cohort retention over time and the curve keeps declining toward zero, you don't have fit. If it flattens — even at a modest level — you have something worth understanding. The absolute level matters less than the shape of the curve.
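The shape test above can be made concrete. A minimal sketch, assuming you already have each cohort's retention as a list of fractions by period (period 0 = 100%); the cohort data and the `window`/`tolerance` parameters are invented for illustration, and real analyses should smooth for noise before applying a rule like this:

```python
# Sketch: does the tail of a retention curve flatten, or keep declining?
# Curves are lists of retained fractions per period, starting at 1.0.

def is_flattening(curve, window=3, tolerance=0.02):
    """True if each of the last `window` periods drops by less than `tolerance`."""
    tail = curve[-window:]
    drops = [tail[i] - tail[i + 1] for i in range(len(tail) - 1)]
    return all(d < tolerance for d in drops)

declining = [1.00, 0.60, 0.40, 0.25, 0.15, 0.08, 0.03]   # heading toward zero
flattening = [1.00, 0.55, 0.38, 0.32, 0.30, 0.29, 0.29]  # settling near 30%

print(is_flattening(declining))   # False
print(is_flattening(flattening))  # True
```

The point of the sketch is that the verdict comes from the drops between late periods, not from the absolute level: a curve that flattens at 29% passes while one still falling through 15% does not.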
Unprompted organic advocacy. Are users recommending this to others without being asked? Are referrals a meaningful share of your acquisition? This is one of the hardest signals to fake and one of the most reliable indicators that the product is solving a real problem.
The disappointment test. Ask a meaningful sample of active users: how would you feel if this product no longer existed? If 40% or more say "very disappointed," you likely have fit. Below 25% and you don't. This is the Sean Ellis threshold — not a law of physics, but a useful calibration point that's held up across many products and markets.
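The arithmetic of the disappointment test is simple enough to sketch. This assumes survey answers arrive as one of three fixed strings; the response counts below are invented for illustration:

```python
# Sketch: compute the "very disappointed" share from survey responses.

def very_disappointed_share(responses):
    """Fraction of responses that are exactly 'very disappointed'."""
    return sum(1 for r in responses if r == "very disappointed") / len(responses)

# Invented sample: 45 / 35 / 20 split across 100 respondents.
responses = (["very disappointed"] * 45
             + ["somewhat disappointed"] * 35
             + ["not disappointed"] * 20)

share = very_disappointed_share(responses)
print(f"{share:.0%}")  # 45%
if share >= 0.40:
    print("above the Sean Ellis threshold")
```

In practice the sampling matters more than the division: survey active users (not everyone who ever signed up), and segment the answers, since a 45% overall score can hide one segment at 70% and another near zero.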
Usage frequency versus expectation. Is the product being used as often as you'd expect if it were genuinely solving the problem? A product designed for daily use that's being used weekly suggests either that the habit hasn't formed or that the value isn't strong enough in most users' lives.
The retention lens
The most honest way to assess PMF is cohort retention: not aggregate active users, not a rolling average, but cohort by cohort. Does each new group of users retain at roughly the same rate as previous groups?
If retention is declining cohort over cohort, you're probably expanding into a market that's a worse fit for the product, or the early adopters had a specific characteristic that later users don't share. Either way, something has changed and it's worth understanding what.
If retention is consistent or improving across cohorts, and the curve flattens rather than declines to zero, that's a meaningful signal. It suggests the product is doing something real for a consistent set of people.
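One simple way to run this comparison is to fix a horizon and track each cohort's retention at that point. A minimal sketch, assuming retention is measured at week 8 for each monthly cohort; the figures and the 5-point alert threshold are invented for illustration:

```python
# Sketch: compare week-8 retention cohort over cohort and flag large drops.

week8_retention = {
    "2024-01": 0.31,
    "2024-02": 0.30,
    "2024-03": 0.32,
    "2024-04": 0.24,  # a drop worth investigating
}

cohorts = list(week8_retention)
for prev, curr in zip(cohorts, cohorts[1:]):
    delta = week8_retention[curr] - week8_retention[prev]
    flag = "  <- investigate" if delta < -0.05 else ""
    print(f"{curr}: {week8_retention[curr]:.0%} ({delta:+.0%} vs {prev}){flag}")
```

A fixed horizon keeps the comparison honest: newer cohorts haven't had as much time to churn, so comparing them to older cohorts at the same age is the only like-for-like view.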
When you actually have it — and what to do next
PMF isn't a moment — it's a range. You'll move through it gradually. The signal that you're in it: growth starts to feel pulled forward rather than pushed. Acquisition costs drop as word-of-mouth does more work. Teams that struggled to articulate the value proposition start to find it obvious.
The harder question isn't "do we have PMF?" — it's "for which specific customer type, and at what level of intensity?" Because PMF is usually narrower than founders assume. The decision to expand from the beachhead into adjacent markets is one of the most consequential decisions in a young company's life, and most make it before they've properly mapped the limits of their initial fit.
Know exactly who your product works for before you try to make it work for everyone else.