The Sandbox · Eurovision 2026 · Vienna · May 16

We scored every entry without hearing a single note.

Greece came first. The bookmakers have it eighth.

35 Entries Scored
10 Criteria
3 Lens Frameworks
31% Finland's Implied Odds
Chris Kelly
Fractional CPO & CTO
The Unanswered

Everyone knows that Eurovision voting is geographically tied. It's an unspoken agreement that hangs over every result — the Balkan bloc, the Scandinavian sympathy vote, the diaspora surge. I wanted to understand how much that actually matters. Could a substandard song accumulate enough points simply by being from the right place at the right moment? Or are more important criteria hiding underneath the politics?

The real question underneath all of that is simpler: is Eurovision actually a song contest? Or is it a system the right country can game — hit the Loreen button, optimise for the instrument rather than the art, and collect your trophy?

So we built an instrument and applied it. Without hearing a single song. Just from what's been written about them, what's been performed at national finals, and what's happening in the world right now. It might get everything wrong. But it might also show that Eurovision is considerably less random than it looks — and that the market, for all its confidence, is missing something obvious.

Greece came first. The bookmakers have it eighth.


The Verdict

Our predicted winner
and the full top ten.

Ranked by Combined Lens score — the framework's best model of what actually wins Eurovision. No hearing required. The scores are built entirely from what's been written, what's been performed at national finals, and what the world looks like right now.

2
Ukraine
LELÉKA — Ridnym
7.79 Combined
7.85 Public

Maximum scores from both jury and public at national selection. Geo/diaspora potential of 9. The clearest undervalue in the field — the market has it at 3% implied. At that price, this is where the instrument most strongly disagrees with the bookmakers.

1
Greece
Akylas — Ferto
7.85 Combined
7.82 Public

Highest Combined score in the entire field. Won national final with maximum points from all three voting categories. Already topping Spotify Viral 50 in Greece and Cyprus. The "mama" bridge is the emotional pivot of the contest. The bookmakers have this eighth. That gap is the story of this page.

3
Finland
Linda Lampenius & Pete Parkkonen
7.57 Combined
7.87 Jury

Legitimately great. 30+ page staging document, pyrotechnics, violin virtuoso. The market favourite at 31% implied — but only third by merit. The premium the market has paid here is the defining mispricing of this contest.


The Findings

What the market
refuses to see.

"The top three by Combined Lens are Greece at 7.85, Ukraine at 7.79, and Finland at 7.57. The market prices them at 8%, 3%, and 31% respectively. That spread is the sharpest, most defensible finding in this analysis."

The Instrument — Combined Lens Rankings
🟢 Undervalued
Greece
Akylas — Ferto
7.85 Combined
+4 Positions
8% Market

Hyper-techno banger with genuine social commentary and an authentic artist backstory. Already viral regionally. The instrument's top score across the entire field — yet the bookmakers have it level with Australia. Four places above where the market has priced it.

🟢 Undervalued
Ukraine
LELÉKA — Ridnym
7.79 Combined
+8 Positions
3% Market

Maximum jury and public scores at Vidbir. Bandura-led Ukrainian folk tradition. Geo/diaspora potential of 9. Eight positions above where the market has placed it — among the largest rank gaps between merit and market in the dataset, and at 3% implied, the cheapest of the instrument's top tier.

🔴 Overvalued
Finland
Linda Lampenius & Pete Parkkonen
7.57 Combined
−2 Positions
31% Market

Finland is not bad. Finland is just not this good. Critics note a weak hook and a disconnect between chorus and title. The market has it first; the instrument has it third. Two positions looks close, but a 31% implied probability signals a near-certainty the scores don't support.

🔴 Overvalued
Denmark
Søren Torpegaard Lund
6.93 Combined
−11 Positions
10% Market

The biggest single overvaluation in this dataset. The instrument ranks Denmark 14th — a solid jury-bait ballad, low on originality, low on earworm. The market has it third. Eleven positions separate the bookmakers from the scores. That gap needs a better explanation than the music provides.

⭐ Dark Horse
Bulgaria
DARA — Bangaranga
7.21 Combined
+8 Positions
2% Market

Balkan-Trap-Pop with sampled Kukeri bells — originality scored 9, the joint highest in the field. Returning country after a three-year absence. The ESC Insight predictive model has been moving this upward week on week. Eight positions above its market rank. The market is looking elsewhere.

🔴 Overvalued
France
Monroe — Regarde !
7.13 Combined
−7 Positions
11% Market

Jury lens of 7.52 is genuinely strong — elegant, polished, classic jury-bait. But the public lens drops to 6.69 because the earworm simply isn't there. If France wins, the televote will have been split and a jury-led result will have carried it. The market has priced it seven positions above its merit rank.


All 35 Entries

The complete dataset.

Every entry scored and ranked. Position Delta shows the gap between where the bookmakers rank an entry and where the instrument ranks it by merit. Positive = market has underestimated it. Negative = market has paid a premium it hasn't earned.

How to read the Position Delta

Both the betting market (by implied win %) and the instrument (by Combined score) produce a ranking of all 35 entries. The delta is bookmaker rank minus merit rank. Positive = Undervalued — ranked further down the field than the scores say it deserves. Negative = Overvalued — placed higher than the merit justifies. Within ±3 positions = Solidly Priced. Tied entries share an average rank.
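The delta described above is simple enough to sketch in a few lines. This is an illustrative reconstruction, not the project's actual code: the three entries and figures are taken from this page's top three, and it uses plain ordinal ranks rather than the averaged ranks the full method applies to ties. With only three entries the gaps are naturally smaller than the +4/+8/−2 figures produced by the full 35-entry field.

```python
def position_delta(market_prob: dict[str, float],
                   combined: dict[str, float]) -> dict[str, int]:
    """Position Delta = bookmaker rank minus merit rank.

    Positive = undervalued (the market ranks the entry further down
    the field than its Combined score justifies); negative = overvalued.
    Ties would share an average rank in the full method; this sketch
    uses simple 1-based ordinal ranks for clarity.
    """
    by_market = sorted(market_prob, key=market_prob.get, reverse=True)
    by_merit = sorted(combined, key=combined.get, reverse=True)
    market_rank = {entry: i + 1 for i, entry in enumerate(by_market)}
    merit_rank = {entry: i + 1 for i, entry in enumerate(by_merit)}
    return {entry: market_rank[entry] - merit_rank[entry] for entry in combined}

# Implied win probabilities and Combined scores from this page's top three:
deltas = position_delta(
    {"Finland": 0.31, "Greece": 0.08, "Ukraine": 0.03},
    {"Greece": 7.85, "Ukraine": 7.79, "Finland": 7.57},
)
# Greece and Ukraine come out positive (undervalued), Finland negative.
```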



The Method

Build the instrument first.
Then see what it shows you.

The weights were fixed before a single entry was scored. Ten criteria, scored 1–10. Three differently weighted lenses. Every number is visible and every weight is arguable — that's the whole point.

01
Artist / Jury Lens

What professional juries reward. Vocal ceiling and originality carry the most weight. Geo/diaspora potential registers at just 3% — versus 15% in the public lens. That structural gap is where the two systems most fundamentally disagree.

Vocal Ceiling 20%
Originality 18%
Production Quality 15%
Jury Signals 12%
Emotional Resonance 10%
Staging Ambition 8%
+ 4 others 17%
02
Public / Televote Lens

What casual viewers and diaspora communities actually vote for. Earworm and emotional resonance dominate. Geo/diaspora potential at 15% here versus 3% in the jury lens — the most structurally honest acknowledgement in this whole framework that Eurovision is not purely a song contest.

Earworm Strength 20%
Emotional Resonance 18%
Staging Ambition 15%
Geo/Diaspora Potential 15%
Vocal Ceiling 10%
+ 5 others 22%
03
Combined / Win Lens

The instrument's best model of what actually wins — a hybrid that rewards entries capable of performing across both voting systems. This is the primary ranking lens. An entry that dominates on jury but collapses on public, or vice versa, will not top this ranking.

Earworm Strength 15%
Vocal Ceiling 14%
Emotional Resonance 12%
Staging Ambition 12%
Production Quality 10%
+ 5 others 37%
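Each lens reduces to a weighted average of the ten 1–10 criterion scores. A minimal sketch of the Combined lens, using the five weights named above: the remaining 37% is not broken out on this page, so it appears here only as a single placeholder bucket, and the criterion keys are illustrative names rather than the project's actual identifiers.

```python
# Combined / Win lens weights as published; the 37% remainder is a
# placeholder for the five unnamed criteria, not the real breakdown.
COMBINED_WEIGHTS = {
    "earworm_strength": 0.15,
    "vocal_ceiling": 0.14,
    "emotional_resonance": 0.12,
    "staging_ambition": 0.12,
    "production_quality": 0.10,
    "other_five_criteria": 0.37,  # assumption: actual split not listed
}

def lens_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted average of 1-10 criterion scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(scores[criterion] * w for criterion, w in weights.items())
```

Swapping in the Jury or Public weight tables changes nothing structurally — only the weights move, which is exactly why the 3% vs 15% geo/diaspora gap between those two lenses is a design decision rather than a data artefact.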
10 Criteria · Scored 1–10
C1 · Musical Originality: How distinctive vs the whole field?
C2 · Vocal Ceiling: Best possible live vocal performance?
C3 · Production Quality: Studio and staging production value?
C4 · Earworm Strength: Does the hook stick after one listen?
C5 · Emotional Resonance: Does it land in the chest?
C6 · Staging Ambition: Is the live concept genuinely memorable?
C7 · Jury Signals: Historic jury affinity for this style?
C8 · Geo/Diaspora Potential: Reliable bloc or diaspora vote ceiling?
C9 · Hype Sustainability: Will momentum hold to finals night?
C10 · Winner Pattern Fit: Does it match the profile of recent winners?

Transparency

What this is.
What it isn't.

The same note that runs on every Sandbox project. The instrument was built before any scores were applied. These are not predictions. They are a structured opinion, made visible and arguable.

What the scores are

Structured estimates built from publicly available information — national final results, critical reception, fan community reaction, streaming data, and betting market signals, cross-referenced against each other. Every score is arguable. The methodology is there to be challenged.

What the scores aren't

Predictions. Empirical data. Live performance evaluations — staging won't be fully visible until Vienna. The geo/diaspora scores are based on historical voting patterns. Israel's score of 9 reflects documented government advertising campaigns from prior years, not a value judgement on the entry.

Where this scoring disagrees with others

Greece was upgraded most significantly from an earlier pass: earworm to 9, staging ambition to 9, emotional resonance to 8 — all research-justified after reviewing national final performances and fan response. Poland was downgraded: originality and earworm both reduced after multiple reviewers called it technically strong but generically forgettable.

Come back on 17 May

This page will be updated after the Grand Final with actual results against all three lenses. Every predicted position checked against reality. Every delta named. The question the instrument was built to answer publicly: how right were we, and where did the framework fail?

Built in conversation with Claude by The Unanswered. Part of the Sandbox alongside No Guilty Pleasures and No Soft Opinions. Not an Anthropic product. Not sponsored by any bookmaker.