How Our 2022 World Cup Predictions Work

The World Cup is back, and so is another edition of FiveThirtyEight’s World Cup predictions. For those of you familiar with our club soccer predictions or our 2014 and 2018 World Cup forecasts, much of our 2022 forecast will look familiar.

As always, we show the chance that each team will win, lose or tie every one of their matches, as well as a table that details how likely each team is to finish first or second in their group and advance to the knockout stage. We also have a bracket that illustrates how likely each team is to make each knockout-round match that it can advance to, as well as its most likely opponents in those matches. Along the way, you can explore some what-ifs by advancing teams through the tournament bracket to see how that would affect the forecast. And just like last time, our predictions incorporate in-game win probabilities that update in real time.

Below is a summary of how the forecast works, including a description of FiveThirtyEight’s Soccer Power Index (SPI) ratings, how we turn those ratings into a forecast and how we calculate our in-game win probabilities.

SPI ratings

At the heart of our forecast are FiveThirtyEight's SPI ratings, which are our best estimate of overall team strength. In our system, every team has an offensive rating that represents the number of goals it would be expected to score against an average team on a neutral field, and a defensive rating that represents the number of goals it would be expected to concede. These ratings, in turn, produce an overall SPI rating, which represents the percentage of available points (a win is worth 3 points, a tie 1 point and a loss 0 points) that the team would be expected to take if such a match were played over and over again.
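
To make that concrete, here's a minimal sketch of how win and draw chances translate into an overall rating; the probabilities in the example are made up for illustration:

```python
def spi_from_probs(p_win, p_draw):
    """Overall SPI as the share of available points a team would be expected
    to take: a win is worth 3 points, a tie 1 point and a loss 0 points."""
    expected_points = 3 * p_win + 1 * p_draw
    return 100 * expected_points / 3

# A team expected to beat an average opponent 80 percent of the time and
# draw 15 percent of the time would rate 100 * (2.4 + 0.15) / 3 = 85.0:
print(spi_from_probs(0.80, 0.15))  # 85.0
```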

Our World Cup SPI ratings are made up of two separate systems: 75 percent comes from the team's match-based SPI ratings, which are generated from recent international match results, and the other 25 percent comes from our roster-based SPI ratings, which estimate team strength by combining each team's roster with our database of club soccer matches.
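
A simplified sketch of that blend follows. The published ratings suggest the actual combination happens at the offensive/defensive-rating level before conversion to SPI, so a straight weighted average only approximates the table below:

```python
def world_cup_spi(match_rating, roster_rating):
    # 75 percent match-based, 25 percent roster-based (approximate linear blend)
    return 0.75 * match_rating + 0.25 * roster_rating

print(world_cup_spi(94.4, 90.6))  # 93.45, close to Brazil's published 93.5
```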


Match-based SPI ratings

To generate our match-based SPI ratings, we run through every past match in our database of international matches — back to 1905 — evaluating the performance of both teams with four metrics:

  1. The number of goals they scored.
  2. The number of goals they scored, adjusted to account for red cards and the time and score of the match when each goal was scored.
  3. The number of goals they were expected to score given the shots they took.
  4. The number of goals they were expected to score given the non-shooting actions they took near the opposing team’s goal.

(These metrics are described in more detail in our post explaining how our club soccer predictions work. For matches that we don’t have play-by-play data for, only the final score is considered.)

Given a team’s performance in the metrics above and the defensive SPI rating of the opposing team, it is assigned an offensive rating for that match. It is also assigned a defensive rating based on its pre-match defensive rating and the attacking performance of the other team.

These match ratings are combined with the team’s pre-match ratings to produce new offensive and defensive SPI ratings for the team. The weight assigned to the new match’s ratings is relative to the game’s importance; a World Cup qualifier, for example, would be weighted more heavily than an international friendly.
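
As a rough illustration of this kind of update, here is a hypothetical exponential-smoothing sketch; the base weight and the importance scale are illustrative assumptions, not the model's actual values:

```python
def update_rating(pre_rating, match_rating, importance, base_weight=0.1):
    """Blend a team's pre-match rating toward its single-match rating,
    weighting the new match by its importance (a World Cup qualifier
    counts more than a friendly). Constants here are illustrative."""
    w = base_weight * importance
    return (1 - w) * pre_rating + w * match_rating

# The same strong match performance moves a 1.8-goal offensive rating
# more after a qualifier (importance 2.0) than after a friendly (1.0):
print(update_rating(1.8, 2.1, importance=2.0))  # ≈ 1.86
print(update_rating(1.8, 2.1, importance=1.0))  # ≈ 1.83
```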

Roster-based SPI ratings

Just as we've generated offensive and defensive ratings for every international match in our database, we've generated SPI ratings for thousands of club teams across the globe.

2022 World Cup SPI ratings

Team  Match-based  Roster-based  Overall
Brazil 94.4 90.6 93.5
Spain 89.4 89.9 89.5
Germany 88.1 90.7 88.8
Portugal 88.0 87.2 87.8
France 86.9 89.8 87.7
Argentina 88.4 83.4 87.2
Netherlands 85.8 86.7 86.0
England 83.6 91.6 86.0
Belgium 82.4 82.8 82.5
Uruguay 81.9 77.9 80.9
Denmark 80.4 78.8 80.0
Croatia 78.5 79.8 78.8
Switzerland 78.4 75.4 77.6
Serbia 77.1 71.9 75.8
Morocco 76.0 74.4 75.6
United States 74.8 74.8 74.8
Mexico 77.3 64.9 74.3
Senegal 71.8 79.2 73.8
Ecuador 77.3 58.3 72.7
Canada 76.3 56.5 71.6
Japan 71.1 72.3 71.4
Poland 66.7 72.8 68.3
South Korea 67.8 61.5 66.1
Tunisia 68.8 57.6 65.9
Wales 65.2 66.7 65.6
Cameroon 62.1 69.8 64.2
Iran 65.3 53.6 62.2
Australia 65.0 48.2 60.8
Ghana 52.9 73.2 58.6
Saudi Arabia 59.2 50.9 56.9
Costa Rica 56.2 53.4 55.5
Qatar 50.2 53.0 51.0

Alongside these club team SPI ratings, we maintain ratings specific to each player that are based on his club team’s performances and the amount of time he played in each match. A player gets 75 percent credit just for being named to the squad for a club match; the other 25 percent is based on the percentage of available minutes played. For example, a player who played every minute of every match for a club team in a season would have essentially the same SPI rating as his club team. A player who sat on the bench for the entire season would have an SPI rating equivalent to 75 percent of his club team’s rating. The model is indifferent to each player’s performances in his club matches; it cares only about how good his club team is and the number of minutes he played.
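
In code, that player rating works out to a simple weighted credit. This sketch follows the description above, aggregating over a season's available minutes:

```python
def player_rating(club_rating, minutes_played, minutes_available):
    # 75 percent credit for being named to the squad; the remaining
    # 25 percent scales with the share of available minutes played.
    return club_rating * (0.75 + 0.25 * minutes_played / minutes_available)

print(player_rating(90.0, 3420, 3420))  # 90.0: played every minute, matches the club
print(player_rating(90.0, 0, 3420))     # 67.5: named to the squad but never played
```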

Each World Cup team’s roster-based SPI rating is a composite of the roster’s player ratings, scaled to the same range as our international SPI ratings. So regardless of national team results, a team like Germany — which is mostly made up of players from elite club teams in the Premier League and the Bundesliga — will receive a much higher player rating than a team like Costa Rica, which has many players from MLS and lesser European teams.

Match forecasts

Given each team’s SPI rating, the process for generating win/loss/draw probabilities for a World Cup match is three-fold:

  1. We calculate the number of goals that we expect each team to score during the match. These projected match scores represent the number of goals that each team would need to score to keep its offensive rating exactly the same as it was going into the match.
  2. Using our projected match scores and the assumption that goal scoring in soccer follows a Poisson process, which is essentially a way to model random events at a known rate, we generate two Poisson distributions around those scores. Those give us the likelihood that each team will score no goals, one goal, two goals, etc.
  3. We take the two Poisson distributions and turn them into a matrix of all possible match scores, from which we can calculate the likelihood of a win, loss or draw for each team. To avoid undercounting draws, we inflate the probabilities along the diagonal of the matrix, where the two scores are equal.1 (See the code sketch after this list.)
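
Here is a minimal version of that three-step pipeline in Python. Truncating at 10 goals and using a flat 9 percent diagonal inflation (the ballpark figure from footnote 1) are simplifications:

```python
import numpy as np
from scipy.stats import poisson

def match_probabilities(exp_goals_a, exp_goals_b, max_goals=10, draw_inflation=0.09):
    """Win/draw/loss probabilities from two independent Poisson distributions,
    with the diagonal (equal-score cells) inflated so draws aren't undercounted."""
    pmf_a = poisson.pmf(np.arange(max_goals + 1), exp_goals_a)
    pmf_b = poisson.pmf(np.arange(max_goals + 1), exp_goals_b)
    matrix = np.outer(pmf_a, pmf_b)               # matrix[i, j] = P(A scores i, B scores j)
    matrix[np.diag_indices_from(matrix)] *= 1 + draw_inflation
    matrix /= matrix.sum()                        # renormalize after inflating the diagonal
    win_a = np.tril(matrix, -1).sum()             # cells where A outscores B
    draw = np.trace(matrix)
    win_b = np.triu(matrix, 1).sum()
    return win_a, draw, win_b

# A 2.0-1.0 projected score makes team A a heavy favorite but leaves a real draw chance:
print(match_probabilities(2.0, 1.0))
```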

Take, for example, the 2014 World Cup opening match between Brazil and Croatia. Before the match, our model was very confident that Croatia would score no goals or one goal. Brazil's distribution, however, was much wider, making Brazil a significant favorite, with an 86 percent chance of winning.

Although the host country, Brazil, was eliminated from the 2014 World Cup in spectacular fashion, and home-field advantage in the Premier League is shrinking, there is still historical evidence that teams get a boost in performance when playing the World Cup on home soil. Similarly, teams from the same confederation as the host nation experience a smaller but still measurable improvement in their performances. In the 2022 World Cup, we’re applying a home-field advantage for Qatar of about 0.4 goals and a bonus about one-third that size to all teams from the Asian Football Confederation. These are both a bit smaller than the advantage that historical World Cup results suggest.
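
In practice this amounts to shifting a team's projected goals before the Poisson step; exactly how the boost enters the model is a simplifying assumption in the sketch below:

```python
HOST_BOOST = 0.4            # home-nation advantage for Qatar, in goals per match
AFC_BOOST = HOST_BOOST / 3  # roughly one-third of that for Asian confederation teams

def boosted_exp_goals(exp_goals, is_host=False, is_afc=False):
    # Applied to a team's projected goals before computing match probabilities
    if is_host:
        return exp_goals + HOST_BOOST
    if is_afc:
        return exp_goals + AFC_BOOST
    return exp_goals
```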

Tournament forecast

Once we're able to forecast individual matches, we turn those match-by-match probabilities into a tournament forecast using Monte Carlo simulations. This means that we simulate the tournament thousands of times, and the probability that a team wins the tournament is the share of simulations in which it does so.

As with our other forecasts, we run our World Cup simulations “hot,” which means that each team’s rating changes based on what is happening in a given simulation. For example, as of this writing, if Brazil and Uruguay were to meet in the round of 16 after the former finished first in Group G and the latter finished second in Group H, Brazil would have about a 77 percent chance of winning. But if the teams were to meet in the round of 16 with their finishes reversed — Brazil underperforming expectations to finish second in its group and Uruguay finishing above Portugal in Group H — Brazil’s chance of winning the match would be only about 66 percent.
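
The sketch below runs a toy single-elimination bracket “hot”: within each simulation, the winner's rating is nudged upward so that later rounds reflect earlier results. The logistic win-probability mapping and the update constant are illustrative assumptions, and draws and shootouts are abstracted away:

```python
import random

def win_prob(r_a, r_b):
    """Hypothetical logistic mapping from two strength ratings to a win chance."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 20))

def simulate_knockout(ratings, n_sims=10_000, hot_k=1.5):
    """Monte Carlo sketch of a bracket run 'hot': the winner's rating is
    bumped within each simulation, more so for upsets."""
    titles = dict.fromkeys(ratings, 0)
    for _ in range(n_sims):
        r = dict(ratings)                # fresh copy of ratings per simulation
        field = list(ratings)
        while len(field) > 1:
            winners = []
            for a, b in zip(field[::2], field[1::2]):
                p = win_prob(r[a], r[b])
                winner = a if random.random() < p else b
                r[winner] += hot_k * ((1 - p) if winner == a else p)
                winners.append(winner)
            field = winners
        titles[field[0]] += 1
    return {t: round(n / n_sims, 3) for t, n in titles.items()}

# Title chances for a toy four-team bracket, seeded with overall SPI ratings:
print(simulate_knockout({"Brazil": 93.5, "Spain": 89.5, "France": 87.7, "England": 86.0}))
```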

Live match forecasts

Our live match forecasts calculate each team’s chances of winning, losing or drawing a match in real time. These live win probabilities feed into our tournament forecast to give a real-time view of the World Cup as it plays out.

Our live model works essentially the same way as our pre-match forecasts. At any point in the match, we can calculate the number of goals we expect each team to score in the remaining time. We generate Poisson distributions based on those projected goals and a matrix of all possible scores for the remainder of the match. When the matrix is combined with the current score of the match, we can use it to calculate live win probabilities.
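
A sketch of that calculation, omitting draw inflation and the other live adjustments described below for brevity:

```python
import numpy as np
from scipy.stats import poisson

def live_probabilities(score_a, score_b, exp_rem_a, exp_rem_b, max_goals=10):
    """Apply the Poisson-matrix idea to the goals expected in the remaining
    time, then add each possible remaining score to the current scoreline."""
    pmf_a = poisson.pmf(np.arange(max_goals + 1), exp_rem_a)
    pmf_b = poisson.pmf(np.arange(max_goals + 1), exp_rem_b)
    matrix = np.outer(pmf_a, pmf_b)
    win_a = draw = win_b = 0.0
    for i in range(max_goals + 1):
        for j in range(max_goals + 1):
            final_a, final_b = score_a + i, score_b + j
            if final_a > final_b:
                win_a += matrix[i, j]
            elif final_a == final_b:
                draw += matrix[i, j]
            else:
                win_b += matrix[i, j]
    return win_a, draw, win_b

# Tied 1-1 late, with 0.5 and 0.4 goals still expected for the two sides:
print(live_probabilities(1, 1, 0.5, 0.4))
```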

For example, in the 65th minute of that same Brazil vs. Croatia match from 2014, with the score tied 1-1, our projected distributions for the remainder of the match had narrowed considerably. A Brazil win was still the most likely outcome, but much less so than at the start of the match.


Before a match, we can determine each team’s rate of scoring based on the number of goals it’s projected to score over the entire match. This rate isn’t constant over the entire match, however, as more goals tend to be scored near the end of a match than near the beginning.2 We account for this increase as the match progresses, which results in added uncertainty and variance toward the end of the match.
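
One simple way to encode that increase: the linear form and its coefficients below are assumptions, with only the 1.4x ratio coming from footnote 2:

```python
def minute_scoring_rate(minute, match_avg_rate):
    """Linear ramp in scoring intensity, calibrated so the 85th-minute rate is
    1.4 times the 5th-minute rate while the 0-90' average stays unchanged."""
    a = match_avg_rate * 195 / 240  # intercept: keeps the full-match average intact
    b = a / 195                     # slope implied by rate(85) = 1.4 * rate(5)
    return a + b * minute

ratio = minute_scoring_rate(85, 1.5) / minute_scoring_rate(5, 1.5)
print(round(ratio, 2))  # 1.4
```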

We also account for added time. On average, a soccer match is 96 minutes long, with two minutes of added time in the first half and four minutes in the second half. The data that powers our forecast doesn't provide the exact amount of added time, but we can approximate the number of added minutes in the second half by looking at two things (see the code sketch after this list):

  1. The number of bookings so far in the match. Historically, each second-half booking tends to add about 11 seconds of time to the end of the match.
  2. Whether the match is close. There tends to be about 40 extra seconds of added time when the two teams are within a goal of each other in the 90th minute.
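
Putting those two pieces together; how they stack on the four-minute baseline is an assumption here:

```python
def second_half_added_minutes(second_half_bookings, score_margin_at_90):
    """Second-half stoppage-time estimate: a 4-minute baseline, ~11 extra
    seconds per second-half booking, and ~40 extra seconds when the teams
    are within a goal of each other at the 90th minute."""
    seconds = 4 * 60 + 11 * second_half_bookings
    if abs(score_margin_at_90) <= 1:
        seconds += 40
    return seconds / 60

print(second_half_added_minutes(3, 1))  # about 5.2 added minutes
```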

Our live model also factors in overtime and shootouts, should we see any in the knockout phase of this World Cup. Our live shootout forecasts follow the same methodology described in this 2014 article.

Finally, we make three types of adjustments to each team’s scoring rates based on what has happened so far in the match itself.

Red cards are important. A one-player advantage is significant in soccer and adjusts scoring rates by about 1.1 goals per match, split between the two teams (one rate goes up; the other down). Put another way, a red card for the opposing team is worth roughly three times home-field advantage.

Consider a match in which our SPI-based goal projection is 1.50-1.50 and the home team has a 37 percent chance of winning before the match. If a red card were shown to the away team in the first minute, our projected goals would shift to 2.05-0.95, and the home team’s chance of winning would go up to 62 percent.
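
That worked example maps to a simple even split of the 1.1-goal swing. Scaling the shift by the time remaining, which the live model would also need to do, is omitted here:

```python
def red_card_adjustment(exp_goals_for, exp_goals_against, player_advantage=1):
    # A one-player advantage is worth about 1.1 goals per match,
    # split evenly between the two teams' scoring rates.
    shift = 0.55 * player_advantage
    return exp_goals_for + shift, exp_goals_against - shift

print(red_card_adjustment(1.50, 1.50))  # (2.05, 0.95), as in the example above
```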

Good teams tend to score at a higher rate than expected when losing. The most exciting matches to watch live are often ones in which the favored team goes down a goal or two and has to fight its way back. An exploration of the data behind our live model confirmed that any team that's down by a goal tends to score at a higher rate than its pre-match rate would indicate, and that the better the trailing team is, the bigger the effect.

Take the 2014 Brazil vs. Croatia match once again. Before the match, Brazil was a substantial favorite, with an 86 percent chance of winning, but it went down 1-0 after Marcelo's own goal in the 11th minute. Without adjusting for this effect, our model would have given Brazil a 58 percent chance to come back and win the match; with the adjustment, it gave Brazil a 66 percent chance. (Brazil went on to win 3-1.)

Non-shot expected goals are a good indication that a team is performing above or below expectation. Anyone who has watched soccer knows that a team can come very close to scoring even if it doesn’t get off a shot, perhaps stopped by a last-minute tackle or an offside call. A team that puts its opponent in a lot of dangerous situations may be dominating the game in a way that isn’t reflected by traditional metrics.

As a match progresses, each team accumulates non-shot expected goals (xG) as it takes actions near the opposing team's goal. Each non-shot xG above our pre-match expectation is worth a 0.34-goal adjustment to the pre-match scoring rates. For example, if we expect non-shot xG accumulation to be 1.0-0.5 at halftime but it is actually 0.5-1.0, that is a swing of 1.0 non-shot xG, and a 0.34-goal adjustment is applied to the original scoring rates. This isn't a huge adjustment; at halftime, the away team in this example would have about a 5-percentage-point better chance of winning the match than if non-shot xG were proceeding as expected.
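
A sketch of that adjustment; how the 0.34-goal shift is split between the two scoring rates is an assumption here:

```python
def nonshot_xg_adjustment(rate_a, rate_b, xg_swing_toward_a, per_xg=0.34):
    # Each unit of non-shot xG swing relative to the pre-match expectation
    # moves the scoring rates by 0.34 goals (split evenly here, by assumption).
    shift = per_xg * xg_swing_toward_a / 2
    return rate_a + shift, rate_b - shift

# The halftime example above: a swing of 1.0 non-shot xG toward the away team
print(nonshot_xg_adjustment(1.2, 1.0, xg_swing_toward_a=-1.0))  # ≈ (1.03, 1.17)
```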

If there has been a red card in a match, the red card adjustment takes precedence over the non-shot xG adjustment.

We took particular care to calibrate the live model appropriately; that is, when our model says a team has a 32 percent chance of winning, it should win approximately 32 percent of the time. Just as important is having the appropriate amount of uncertainty in the tails of the model: When our model says a team has only a 1-in-1,000 chance of coming back to win the match, that comeback should happen about once every 1,000 matches. The 2022 World Cup consists of only 64 matches, so our model is unlikely to be perfectly calibrated over such a small sample, but we're confident that it's well-calibrated over the long run.
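
Checking calibration is straightforward once enough matches are in the books: bucket the forecasts and compare each bucket's average prediction with what actually happened. A generic sketch:

```python
import numpy as np

def calibration_table(predicted, outcomes, n_bins=10):
    """Group predicted probabilities into buckets and compare each bucket's
    mean forecast with the observed frequency of the predicted event."""
    predicted = np.asarray(predicted, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)  # 1.0 if the event happened
    bins = np.minimum((predicted * n_bins).astype(int), n_bins - 1)
    table = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            table.append((predicted[mask].mean(), outcomes[mask].mean(), int(mask.sum())))
    return table  # (mean forecast, observed rate, sample size) per bucket
```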

We hope you follow along with us as the tournament plays out.

Check out our latest World Cup predictions.

Footnotes

  1. There has been some debate about what kind of distribution best models scoring in soccer. We’ve found that two independent Poisson distributions work well with the addition of diagonal inflation. That is, we generate the two distributions independently but increase the value of each cell in the matrix where the scores are equal by some constant (somewhere around 9 percent, but this differs by league and is based on the degree to which we would have undercounted draws had we not inflated the diagonal).

  2. The rate of scoring in the 85th minute is about 1.4 times the rate of scoring in the fifth minute.

Jay Boice is a computational journalist for FiveThirtyEight.
