## Election Model

We have built an election forecasting model for the next UK general election. It comes in two versions:

- The **nowcast** takes the latest polls and simulates the election as if it were held tomorrow. It accounts for potential errors in the polls and for variation in swing across the country. It makes no attempt to estimate candidate effects (such as incumbency or scandal).
- The **forecast** is exactly the same as the nowcast, except that before simulating the election it estimates how the polls may change between now and polling day. It is built on the assumption that polls are impossible to predict and that each party's vote share may go up or down before the election.

Both models simulate the election 10,000 times to produce the conclusions presented here. Our approach is 'polls only': we do not add any additional subjective factors to our model, as we think doing so is as likely as not to introduce bias. We think the forecast gives the best sense of what will happen, while the nowcast is useful for exploring potential outcomes.

In the tables below we group seats into four categories, from safest to most marginal: safe, likely, lean and toss-up. The minimum seats listed for each party is the sum of their safe, likely and lean seats. The maximum seats adds all the toss-up seats where the party has a chance of winning. You can read our full methodology at the bottom of this page or dive right in and see our predictions:
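As a minimal illustration of the arithmetic (the counts here are hypothetical, not taken from any particular row):

```python
# Sketch of how the headline ranges are derived: a party's minimum is its
# safe + likely + lean seats, and its maximum adds every toss-up seat the
# party has some chance of winning. The counts below are illustrative.

def seat_range(safe: int, likely: int, lean: int, winnable_tossups: int):
    """Return (minimum, maximum) seats for a party."""
    minimum = safe + likely + lean
    maximum = minimum + winnable_tossups
    return minimum, maximum

print(seat_range(305, 41, 17, 31))  # (363, 394)
```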

#### Current Forecast

Party | Minimum Seats | Maximum Seats |
---|---|---|
Labour | 363 | 394 |
Conservative | 181 | 215 |
Liberal Democrats | 33 | 41 |
SNP | 12 | 18 |
DUP | 6 | 6 |
Sinn Féin | 6 | 7 |
Plaid Cymru | 3 | 3 |
SDLP | 2 | 2 |
Alliance | 2 | 2 |
Speaker | 1 | 1 |
Green Party | 1 | 1 |
Ulster Unionist Party | 1 | 2 |

#### Forecast Seat Classification

Party | Toss-Up | Lean | Likely | Safe | Total |
---|---|---|---|---|---|
Labour | | 17 | 41 | 305 | 363 |
Conservative | | 24 | 55 | 102 | 181 |
Liberal Democrats | | 2 | 10 | 21 | 33 |
Sinn Féin | | | 1 | 5 | 6 |
DUP | | | 4 | 2 | 6 |
SDLP | | | | 2 | 2 |
Plaid Cymru | | 1 | 1 | 1 | 3 |
Alliance | | | 1 | 1 | 2 |
Green Party | | | | 1 | 1 |
Speaker | | | | 1 | 1 |
SNP | | | 5 | 7 | 12 |
Ulster Unionist Party | | | | 1 | 1 |
Toss-up seats | 39 | | | | 39 |

#### Current Nowcast

Party | Minimum Seats | Maximum Seats |
---|---|---|
Labour | 363 | 394 |
Conservative | 181 | 215 |
Liberal Democrats | 33 | 41 |
SNP | 12 | 18 |
DUP | 6 | 6 |
Sinn Féin | 6 | 7 |
Plaid Cymru | 3 | 3 |
SDLP | 2 | 2 |
Alliance | 2 | 2 |
Speaker | 1 | 1 |
Green Party | 1 | 1 |
Ulster Unionist Party | 1 | 2 |

#### Nowcast Seat Classification

Party | Toss-Up | Lean | Likely | Safe | Total |
---|---|---|---|---|---|
Labour | | 17 | 41 | 305 | 363 |
Conservative | | 24 | 55 | 102 | 181 |
Liberal Democrats | | 2 | 10 | 21 | 33 |
Sinn Féin | | | 1 | 5 | 6 |
DUP | | | 4 | 2 | 6 |
SDLP | | | | 2 | 2 |
Plaid Cymru | | 1 | 1 | 1 | 3 |
Alliance | | | 1 | 1 | 2 |
Green Party | | | | 1 | 1 |
Speaker | | | | 1 | 1 |
SNP | | | 5 | 7 | 12 |
Ulster Unionist Party | | | | 1 | 1 |
Toss-up seats | 39 | | | | 39 |

### Methodology

The election model we have developed is simple in design but depends on robust datasets that have been meticulously cleaned and validated. Our philosophy is to depend on the polls but to draw only weak inferences from them. In essence, we think the polls are the best evidence we have, but they are neither very precise nor particularly accurate. We depend on them because alternative methods would require subjective intervention in the modelling, which is ripe for introducing all sorts of bias.

When building the model, we considered including several other factors but found them to be either spurious (macroeconomic measures), lacking consistent publicly available data (satisfaction ratings) or adding complexity without insight (incumbency and other candidate-level factors). We dedicated a lot of time to back-testing the model against previous elections. The results showed that the model depends on the polls: when they missed badly, so did the model; otherwise it performed well.

The bare bones of the model are that it takes the previous election results and estimates the vote share in each constituency based on the change in the polls, subject to errors in the polls and geographic variation in vote shares.

To run such a model, therefore, we need election results at constituency level, a measure of poll changes, and reasonable estimates of both polling error and geographic variation.
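Putting those ingredients together, the core calculation can be sketched as follows. This is a simplified illustration, not the production code; the function and variable names are ours:

```python
import random

def simulate_constituency(result_2019, poll_change, national_error, local_sd=4.0):
    """Simulate one election in one constituency.

    result_2019    : dict of party -> notional 2019 vote share (points)
    poll_change    : dict of party -> change in the national polls since 2019
    national_error : dict of party -> polling error, drawn once per model run
    """
    shares = {
        party: result_2019[party]
        + poll_change.get(party, 0.0)      # national swing from the polls
        + national_error.get(party, 0.0)   # simulated polling miss
        + random.gauss(0.0, local_sd)      # constituency-level variation
        for party in result_2019
    }
    return max(shares, key=shares.get)     # winner in this iteration

# With no swing, no error and no local variation, the 2019 winner holds the seat:
print(simulate_constituency({"Lab": 45.0, "Con": 40.0}, {}, {}, local_sd=0.0))  # Lab
```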

#### Election Results

This should have been the simplest part of the exercise, as results for past elections are available from several public sources. However, the next general election will be fought on different boundaries from those used in 2019. Hence, we needed to estimate the votes each party would have received in each of the new constituencies (notional 2019 results). When we published our model at the start of 2024 there were no published notional 2019 results available, so we created our own. From 17th January 2024 onwards, however, the model uses the notional results produced by election experts Colin Rallings and Michael Thrasher, along with David Denver in Scotland and Nicholas Whyte in Northern Ireland. These are the notional results that will be used by the major broadcasters when analysing the results of the upcoming election.

#### Polling Index

Next, we needed a measure of how voting intention has changed since the last election. Our attitude to polls is that they are generally much the same and should be treated equally, with two minor adjustments:

- Polls commissioned by political parties should be thrown out and those commissioned by campaign groups should be weighted down.
- Polls from pollsters with long track records and good accuracy should be weighted up.

Otherwise, our philosophy is to include all polls and average them. We have tried to find all the publicly available polls and include them in our database. The database is used to calculate the polling average, which is weighted according to our pollster ratings and smoothed using a 14-day moving average.
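A rating-weighted moving average of this kind might look like the sketch below. The field names and the exact weighting scheme are our assumptions, not the model's published code:

```python
from datetime import date, timedelta

def polling_average(polls, party, end, window_days=14):
    """Weighted mean of a party's share over the trailing window.

    polls: list of dicts with a 'date', a pollster 'rating' (0-100) and a
    share for each party; higher-rated pollsters count for more.
    """
    start = end - timedelta(days=window_days)
    in_window = [p for p in polls if start < p["date"] <= end]
    total_weight = sum(p["rating"] for p in in_window)
    if total_weight == 0:
        return None  # no usable polls in the window
    return sum(p[party] * p["rating"] for p in in_window) / total_weight

polls = [
    {"date": date(2024, 6, 1), "rating": 80, "Lab": 44.0},
    {"date": date(2024, 6, 3), "rating": 40, "Lab": 41.0},
]
print(polling_average(polls, "Lab", end=date(2024, 6, 5)))  # 43.0
```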

Generally we rely on the average voting intention for Great Britain (i.e. England, Scotland and Wales) as this is the most commonly polled geography. Where we have enough polling to calculate an average for a nation, we do so, and we use this instead of the Great Britain average. This means that estimates for Wales and Scotland tend to be based on polling in these nations.

When we do not have enough polling for a party to calculate an average, we assume that voting intention is unchanged since the last election (but still subject to polling error and geographic variation); that is, we assume they are polling what they got last time, and this assumption may be as wrong as a regular poll. This explains why our model shows so little change in Northern Ireland, where there is very little polling and where different parties stand compared to the rest of the UK.
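The fallback hierarchy described above can be sketched as follows (the function and names are illustrative, not the model's own):

```python
def poll_change_for_seat(nation, nation_changes, gb_change):
    """Pick the polling change applied to a seat's previous result.

    Use a nation-level average where one exists (e.g. Scotland, Wales),
    fall back to the Great Britain average, and where neither applies
    (e.g. Northern Ireland) assume no change since the last election.
    """
    if nation in nation_changes:
        return nation_changes[nation]
    if gb_change is not None and nation != "Northern Ireland":
        return gb_change
    return 0.0  # no usable polling: assume the party polls its previous share

print(poll_change_for_seat("Scotland", {"Scotland": -1.5}, 2.0))  # -1.5
print(poll_change_for_seat("England", {}, 2.0))                   # 2.0
print(poll_change_for_seat("Northern Ireland", {}, 2.0))          # 0.0
```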

##### Pollster Ratings

Our pollster ratings are meant as a guide to recent performance of pollsters in UK general election polls. They are based on three components: accuracy, longevity and membership of the British Polling Council. Ratings range from 0 to 100.

Seventy percent of a pollster's score comes from their accuracy in predicting the three most recent UK general elections. The measure is based on the differences between the national share of the vote (for the three largest parties) and the final poll fielded before polling day. A perfect prediction of each of the last three elections would contribute seventy points to a pollster's score.

Ten percent of a pollster's score comes from their longevity in polling UK general elections. A pollster is awarded ten points if they published a poll conducted during each of the last three parliaments (i.e. 2010-15, 2015-17 and 2017-19). New pollsters have a longevity score of zero.

Twenty percent of a pollster's score comes from membership of the British Polling Council. Members receive twenty points and non-members zero.
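The three components combine straightforwardly. In the sketch below the accuracy component (derived from the final-poll errors described above) is taken as an input:

```python
def pollster_rating(accuracy_score, polled_all_three_parliaments, bpc_member):
    """Combine the three components into a 0-100 rating.

    accuracy_score : 0-70, from final-poll errors at the last three elections
    """
    if not 0 <= accuracy_score <= 70:
        raise ValueError("accuracy component must be between 0 and 70")
    longevity = 10 if polled_all_three_parliaments else 0
    bpc = 20 if bpc_member else 0
    return accuracy_score + longevity + bpc

print(pollster_rating(70, True, True))   # 100: perfect record, BPC member
print(pollster_rating(55, False, True))  # 75: accurate newcomer in the BPC
```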

#### Estimating Polling Error

Accurately estimating polling error is crucial to getting a good estimate of future election results. Polling error comes from a variety of sources (see our article on polling bias). Our view, based on our analysis and research, is that it is not possible to estimate polling error in advance of an election, and that it is as likely to favour one party as another. However, it is clear from the data that the magnitude of polling error varies by party in the UK. Some of this is simply because the biggest parties poll higher numbers than minor parties: you would expect bigger misses for parties polling between 25% and 40% than for parties polling less than 10%. Beyond this, however, the polls have repeatedly missed on the Labour vote more than on the Conservative vote, and we want this to be reflected.

To estimate polling error in our model we assume that it is normally distributed with mean zero and standard deviation equal to the average sample standard deviation across past elections for a given party. Where there is not enough data to calculate this, we assume a standard deviation of 2 percentage points. In simple terms, we assume that the polls will usually be about as wrong as they have been in previous elections for a given party.
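In code, the error model described above might look like this (a sketch under the stated assumptions; names are ours):

```python
import random
import statistics

def error_sd(past_miss_sds, default=2.0):
    """Per-party error spread: the mean of the sample standard deviations
    of past polling misses, falling back to 2 points where data is thin."""
    return statistics.mean(past_miss_sds) if past_miss_sds else default

def draw_polling_error(sd, rng=random):
    """Mean-zero draw: no assumed direction of bias."""
    return rng.gauss(0.0, sd)

print(error_sd([1.5, 2.5, 3.5]))  # 2.5
print(error_sd([]))               # 2.0 (fallback)
```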

In each iteration of the model we estimate polling error at the Great Britain level and apply it to all seats, unless there is enough data for a nation to do the calculation at that level. This means that, generally, England is estimated using polling data for Great Britain; Scotland and Wales are estimated using their own polling data; and Northern Ireland is assumed not to have changed since the last election (as there is next to no polling there).

#### Estimating Geographic Variation

The change in the national voting intention does not translate simply into changes at the constituency level. In the middle of the twentieth century it was possible to calculate the swing between the two biggest parties and use it to forecast vote shares. The emergence of a strong third party, along with nationalist strength across the UK, has killed off two-party swing as a reliable metric. Instead we rely on the national change in vote share, adjusted by a random component for each party in each constituency.

Unlike with polling error, we see no trends in the data by region or party, so we make no adjustments for these. The random component is drawn from a normal distribution with mean 0 and standard deviation 4 (percentage points of vote share). The choice of parameters is based on analysis of past elections. Some seats always swing by substantially more (see our article on geographic variation), but we think it impossible to estimate where that will happen ahead of time.
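The random component is then an independent draw per party per constituency; with a standard deviation of 4 points, roughly 95% of local deviations fall within ±8 points of the national change. A minimal sketch:

```python
import random

def local_variation(parties, sd=4.0, rng=random):
    """Independent N(0, sd) draw for each party in one constituency."""
    return {party: rng.gauss(0.0, sd) for party in parties}

deviations = local_variation(["Lab", "Con", "LD"], rng=random.Random(2024))
print(sorted(deviations))  # ['Con', 'LD', 'Lab']
```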

#### Estimating Changes in Polling Before Election *(Forecast Only)*

The forecast (but not the nowcast) tries to take account of how the polls might change between today and the election. At the launch of the forecast in January 2024, it was not clear when the election would be called, so the forecast assumed it would take place on the last possible Thursday (23rd January 2025). The general election was in fact called on 22nd May 2024; from that point forward, the model has used the actual election date of 4th July 2024.

Our view is that the future path of the polls is uncertain and the best way to forecast it is to assume each party is as likely to go up as to go down. Hence we use a random walk to forecast the polls for each party: this takes the polls as they are today and adds or subtracts a small amount for each day until the election. The forecast polls are then put through the same code as the nowcast.
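A sketch of the random walk (the daily step size here is our assumption; the authors do not state theirs):

```python
import random

def forecast_polls(today_shares, days_to_election, daily_sd=0.2, rng=random):
    """Walk each party's polling share forward one day at a time."""
    shares = dict(today_shares)
    for _ in range(days_to_election):
        for party in shares:
            shares[party] += rng.gauss(0.0, daily_sd)  # equally likely up or down
    return shares

# With no days left, the forecast collapses to the nowcast's inputs:
print(forecast_polls({"Lab": 44.0, "Con": 24.0}, days_to_election=0))
# {'Lab': 44.0, 'Con': 24.0}
```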

The upshot is that if Labour are 10 points ahead today, the model thinks they are as likely to be 20 points ahead on polling day as they are to be level with the Conservatives. The nowcast, on the other hand, predicts the election based solely on the 10-point lead it sees today. As the election nears, the forecast and nowcast will converge, as the polls have less and less time to change before polling day.

#### Classifying Seats

The model is run 10,000 times and we then classify seats into four categories based on how often they are won by a single party:

- **Safe** seats are very unlikely to be won by another party. Seats are considered safe if the model thinks they will be won by the same party at least 90% of the time.
- It is improbable that a **likely** seat is won by another party. Seats are considered likely if the model thinks they will be won by the same party at least 80% of the time.
- Seats that **lean** to one party may be won by another party. Seats are considered to lean to one party if the model thinks they will be won by the same party at least 70% of the time (or 60% of the time if three parties win 10% of the vote).
- All other seats are described as **toss-ups**.
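The thresholds above can be sketched as follows (here the three-party condition is passed in as a flag, and names are ours):

```python
from collections import Counter

def classify_seat(winners, three_way=False):
    """Classify one seat from its simulated winners across all model runs."""
    counts = Counter(winners)
    party, wins = counts.most_common(1)[0]
    share = wins / len(winners)
    lean_threshold = 0.6 if three_way else 0.7  # looser in three-way fights
    if share >= 0.9:
        return party, "safe"
    if share >= 0.8:
        return party, "likely"
    if share >= lean_threshold:
        return party, "lean"
    return None, "toss-up"

print(classify_seat(["Lab"] * 95 + ["Con"] * 5))   # ('Lab', 'safe')
print(classify_seat(["Lab"] * 65 + ["Con"] * 35))  # (None, 'toss-up')
```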