Founders deserve to know how their idea is being scored. So do investors. This article is the public, transparent version of how NexTraction's Market Validation Index (MVI) works — the same model that drives every memo we produce.

Multiple analytics dashboards on a workspace, illustrating the transparent multi-source scoring approach behind the MVI.

237 startups in calibration set · 6 pillars scored · 0–100 final score range · Quarterly weight recalibration

What the MVI is — and isn't

The MVI is a single 0–100 confidence score for whether a venture has the evidence to justify its next stage of investment (time, money, hires). It is not a prediction of success. It is a synthesis of evidence weighted against a methodology we publish openly.

If a model can't be argued with, it's a black box. If a black box scores founders, founders won't trust it. So we open the box.

The six pillars

Each pillar carries a fixed weight, calibrated against historical outcomes. Weights add to 100%.

| Pillar | Weight | What we score |
| --- | --- | --- |
| Problem Severity | 20% | Frequency · urgency · willingness to pay |
| Solution Fit | 15% | Sharpness of wedge vs surface-area sprawl |
| Market Size | 15% | Bottom-up TAM (top-down penalized) |
| GTM Defensibility | 15% | Distribution edge, not feature edge |
| Traction Evidence | 20% | Raw metrics · growth · retention · references |
| Team–Market Fit | 15% | Domain insight · prior work · customer relationship |
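The roll-up itself is simple arithmetic: a weighted sum of six 0–100 pillar scores. Here is a minimal sketch in Python — the weights come from the table above, but the function name, key names, and example scores are illustrative, not NexTraction's actual API or data.

```python
# Published pillar weights (must add to 100%). Key names are
# illustrative identifiers, not NexTraction's internal schema.
WEIGHTS = {
    "problem_severity": 0.20,
    "solution_fit": 0.15,
    "market_size": 0.15,
    "gtm_defensibility": 0.15,
    "traction_evidence": 0.20,
    "team_market_fit": 0.15,
}

def mvi_score(pillar_scores: dict[str, float]) -> float:
    """Combine six 0-100 pillar scores into one 0-100 MVI score."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights add to 100%
    return sum(WEIGHTS[p] * pillar_scores[p] for p in WEIGHTS)

# Hypothetical venture: strong problem and team, weaker GTM.
example = {
    "problem_severity": 80,
    "solution_fit": 70,
    "market_size": 60,
    "gtm_defensibility": 55,
    "traction_evidence": 75,
    "team_market_fit": 85,
}
print(round(mvi_score(example), 1))  # → 71.5
```

Because every pillar is bounded at 100 and the weights sum to 1, the composite score is automatically bounded to the published 0–100 range.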

1 · Problem Severity (weight: 20%)

How acute is the pain you address? We score on three sub-dimensions: frequency (how often it occurs), urgency (cost of inaction), and willingness-to-pay (whether interview subjects volunteer money before being asked).

2 · Solution Fit (weight: 15%)

How tightly does the solution map to the problem? We look for sharp wedges (one tight job done extremely well) over broad surface area (many shallow features).

3 · Market Size (weight: 15%)

Bottom-up TAM only. Top-down "if we get 1% of a $50B market" reasoning is automatically penalized — we've watched it kill too many seed-stage decks.

4 · GTM Defensibility (weight: 15%)

Distribution edge, not feature edge. Founders who already have a wedge into their distribution channel score significantly higher than those who don't.

5 · Traction Evidence (weight: 20%)

Tied with Problem Severity for the heaviest weight. We score raw metrics, growth rates, retention curves, and the quality of customer references.

6 · Team–Market Fit (weight: 15%)

Why this team for this problem? Domain insight, prior shipped work, and the founder's documented relationship to the customer all feed in.

Founder working through frameworks and metrics on a notebook — the calibration discipline behind weighted scoring.

How weights were calibrated

The 20/15/15/15/20/15 weighting isn't pulled from a hat. It's the result of fitting our scoring model against 237 historical startup screenings where we know the outcome (raised follow-on / pivoted / shut down). We publish the weights and we update them quarterly as new data comes in.
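One hedged way to picture that calibration step — a toy sketch, not NexTraction's actual fitting procedure — is to score each candidate weight vector by how often it ranks a startup that raised follow-on above one that shut down (a pairwise-concordance measure), then keep the best-performing weights. The three history rows below are fabricated for illustration; the real fit runs over the full 237-screening set.

```python
from itertools import product

# Fabricated history rows: (pillar scores in the table's order,
# 1 = raised follow-on, 0 = shut down). Illustrative only.
HISTORY = [
    ([85, 70, 60, 65, 80, 75], 1),
    ([55, 60, 70, 40, 45, 50], 0),
    ([75, 80, 50, 70, 85, 90], 1),
]

def concordance(weights, history):
    """Fraction of (success, failure) pairs the weights rank correctly."""
    score = lambda row: sum(w * s for w, s in zip(weights, row))
    wins = [score(r) for r, y in history if y == 1]
    losses = [score(r) for r, y in history if y == 0]
    pairs = [(w, l) for w in wins for l in losses]
    return sum(w > l for w, l in pairs) / len(pairs)

# Coarse grid search over weight vectors that sum to 100%.
candidates = [c for c in product([0.10, 0.15, 0.20], repeat=6)
              if abs(sum(c) - 1.0) < 1e-9]
best = max(candidates, key=lambda c: concordance(c, HISTORY))
print(best, concordance(best, HISTORY))
```

A quarterly recalibration would rerun a search like this as new screening outcomes arrive, which is why the published weights can drift over time.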

Score interpretation

80–100 · Strong

Strong evidence across most pillars. Ready for the next stage.

65–79 · Solid

Solid foundation with one or two pillars to strengthen.

50–64 · Mixed

Mixed signal. Worth investigating, but not investing yet.

Below 50 · Weak

Weak validation. Pivot or kill before adding more capital.
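The four bands above map directly to threshold checks. A minimal sketch — the thresholds and labels are the published ones, the function itself is illustrative:

```python
# Map a 0-100 MVI score to the interpretation bands from this article.
def interpret(score: float) -> str:
    if score >= 80:
        return "Strong - ready for the next stage"
    if score >= 65:
        return "Solid - one or two pillars to strengthen"
    if score >= 50:
        return "Mixed - investigate, don't invest yet"
    return "Weak - pivot or kill before adding capital"

print(interpret(73))  # falls in the 65-79 "Solid" band
```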

Run your venture through the MVI

See your score across all six pillars in under an hour.

NexTraction's project analysis applies the exact methodology this article describes — same six pillars, same calibrated weights, same evidence requirements — and produces a structured memo with the gaps to close before your next stage.

Score your venture →

What the MVI explicitly does NOT do

It does not predict success. It measures evidence quality. Plenty of high-MVI startups fail; plenty of low-MVI startups succeed. We're scoring readiness, not destiny.

It does not replace founder judgment. If the model says 73 and your gut says no, listen to your gut and re-score.

It does not work for every business model. Deep-tech, biotech, and hard-science ventures need a different scorecard. We have a separate methodology for those — published soon.

Why we publish this

Two reasons. First, founders should be able to argue with the model — that's how it gets better. Second, opaque scoring is a bad look for an industry that already has a trust problem with founders. We'd rather be wrong in public than mysteriously right behind a paywall.

Conclusion

The MVI is one tool among many. Treat it as a checklist that forces honesty, not as a verdict. Run your venture through it and tell us where the model gets it wrong — that's how we improve.