etell.
Experience Intelligence

How scoring works

Updated: May 2026

Every audit on etell carries three numeric scores out of 10. We score from observable signals in the captured artifact (the rendered email, the homepage screenshot) — not gut feel — so two persona-grounded reviews of the same artifact converge on the same number. This page is the rubric the persona applies; you can read each criterion, agree or disagree, and recalibrate.

The three scores

Channel   Score 1 — Overall   Score 2 — Funnel A      Score 3 — Funnel B
Email     Business Impact     Open Likelihood         Click Likelihood
Web       Business Impact     Engagement Likelihood   Conversion Likelihood

Business Impact answers: how well does this artifact target a person like the persona? Funnel A answers: will the persona move past the inbox / past the first screen? Funnel B answers: would the persona click / convert / add-to-cart?

The scoring method (every score)

Each score works the same way:

  1. The score starts at 1.
  2. The persona walks a 10-criterion checklist. For every criterion that's TRUE based on what's visible in the artifact, add +1 to the score.
  3. The score caps at 10.
  4. Each score is rendered with the per-criterion tally in the audit detail page so you can see exactly which signals counted.

A 10 is therefore rare: almost every signal must be present. A 5 means roughly half the signals are there. Most everyday emails and homepages land in the 4–7 range.
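The method above can be sketched in a few lines. This is an illustrative sketch, not etell's actual implementation; the function name and criterion labels are hypothetical.

```python
# Hypothetical sketch of the checklist scoring described above:
# start at 1, add +1 per TRUE criterion, cap at 10.

def checklist_score(criteria: dict[str, bool]) -> int:
    """Return a 1-10 score from a 10-criterion checklist."""
    score = 1 + sum(criteria.values())  # each TRUE criterion adds +1
    return min(score, 10)               # cap at 10

# Example: 4 of 10 signals visible in the artifact -> score of 5.
signals = {f"criterion_{i}": i < 4 for i in range(10)}
print(checklist_score(signals))  # 5
```

Note the cap matters only when all ten criteria are TRUE (1 + 10 = 11, capped to 10); every other tally maps directly to a score.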

Email rubric

Business Impact (Email)

Open Likelihood (Email)

Click Likelihood (Email)

Web rubric

Business Impact (Web)

Engagement Likelihood (Web)

The persona is already on the homepage — engagement scores whether they'd scroll past the first screen / tap a category / interact, vs. bounce.

Conversion Likelihood (Web)

Why a checklist instead of a vibe

Persona-grounded scoring is inherently subjective — the persona is a 62-year-old comfort shopper, not a regression model. But the question we want answered isn't “what's the feeling?”; it's “what's actually on the page that a 62-year-old would respond to?” Anchoring scores to observable signals lets reviewers — including humans — disagree with the persona on a specific criterion (“the loyalty callout is there, you missed it”) instead of arguing about a vibe.

What scores are not

These are not probability predictions. A “9/10 Open Likelihood” doesn't mean the email has a 90% open rate in production. It means the persona, with their particular shopping habits and their accumulated history with the brand, would likely open this one — based on the signals listed above. Treat the scores as relative: useful for comparing two emails or two homepages from the same persona's point of view, less useful as forecasts.

Questions or feedback?

Email alon@etell.app. The rubric is living — if a criterion is missing or wrong for your category, we'll iterate.