338Canada Ratings of Canadian Pollsters

Last update: October 26, 2020

This is not an easy exercise. Or rather, the calculation itself is easy enough; what is not easy is making the whole thing public.

The 338Canada electoral model has been ranking and rating Canadian pollsters since the very launch of the site, but I had never made these figures public before. For the sake of transparency, I offer you here an explanation of how this calculation is made. 

Each election in this country has its own quirks, challenges, and surprises. Political polling can be a thankless endeavour: A single miss, and partisans and observers will never let you forget it. This is crucial to realize: Even great polling firms have their occasional miss, and it is important not to throw the baby out with the bath water. Study what went wrong, learn from it, and carry on.

One of many examples: Mainstreet Research completely blew the 2017 Calgary municipal election. It was historically bad: They had challenger Bill Smith in the lead during the whole campaign, but incumbent mayor Naheed Nenshi won by a comfortable 8-point margin in the end. Mainstreet then had an independent panel study their numbers and calculations to understand what the heck had gone so wrong. Months later, Mainstreet nailed the PCPO leadership race almost to the decimal point and was among the most precise firms in the 2018 Ontario provincial election (earning a 338 Grade of A+). Yet, whenever I relay the numbers from a Mainstreet poll, some are quick to reply "yeah, but Calgary!", the same way Trump fanatics will blurt "Hillary was supposed to win!" every time Biden leads in a poll. 

There are other examples of course, but the main lesson here is to look at the big picture and understand that sometimes, when dealing with statistical ensembles and moving targets, the results are not what you expect. Meteorologists run into this all the time: With an 80% chance of rain on a given day, many people will cancel their plans, only to see bright sunshine on that very day. The odds of not rolling a six on a single die are 83%; yet, are you completely shocked when you do roll a six? You have to look at the big picture. When a poll is imprecise, it doesn't mean polling itself doesn't work, the same way you will still lose a few hands even if you count cards at blackjack. In the long run, however, your track record will be much better if you do count cards.

On this page*, I have listed polls from 18 Canadian elections of the past decade (two federal elections and provincial elections from all ten provinces) and compared them to actual election results. Using these numbers, I calculate the "338 Score": The lower the score, the better the grade a firm gets. (*This page contains large tables with lots of data and may not be ideal for phone screens. Tablets and PCs should work fine.)

Here are the factors that determine how the 338 Score is calculated:
  1. We take a firm's last published poll before the end of a campaign and compare it to the result, using the numbers provided for the main parties rounded to the nearest percentage point. Some firms release figures to one decimal place, but, unless a poll has over 10k respondents (which never happens, at least in Canada), the digit after the decimal is meaningless. E.g., 34.4% is rounded to 34%;
  2. We calculate the total error in absolute value for each of the main parties. For example: A hypothetical poll has the Conservatives at 40%, the Liberals at 35% and the NDP at 20%. The election result is 38% Conservative, 36% Liberals, and 19.5% NDP. The poll's total error for those three parties would be |40-38| + |35-36| + |20-19.5| = 3.5; 
  3. Additionally, we look at the top-line numbers: If a poll has the Conservatives in the lead by three points, and the Liberals end up winning by two, that's a top-line error of five points. In this grading system, getting the main parties right is more valuable than nailing the scores of smaller parties;
  4. Also, we look at each poll in the context of the election's full string of polls: A poll that misses the election result by a fair margin is penalized more heavily if other pollsters actually called the election right. In other words, an imprecise poll costs a firm less in the ratings when every other pollster misses the result as well (see Alberta 2019 for a collection of imprecise polls, even though all pollsters had the right winner: an election where voter turnout unexpectedly jumped to 64%, the highest in the province since 1982, which partly explains the discrepancy);
  5. Finally, a polling firm's recent work weighs more in the calculation than polls from four years ago. While I graded pollsters from all the way back to the 2014 Quebec election, recent elections carry more weight in the grading.
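The arithmetic in factors 1 to 3 can be sketched in a few lines of Python. This is only an illustration of the definitions above: the actual 338 Score combines these errors with context and recency weights (factors 4 and 5) that are not public, and the function and party labels here are mine.

```python
# Illustrative sketch of the error measures in factors 1-3 above.
# The context penalty (factor 4) and recency weighting (factor 5)
# use weights that are not public, so they are left out here.

def total_error(poll, result):
    """Sum of absolute errors over the main parties (factor 2).
    Poll numbers are rounded to the nearest point first (factor 1)."""
    return sum(abs(round(poll[party]) - result[party]) for party in poll)

def topline_error(poll, result):
    """Error on the top-line margin (factor 3): the gap between the
    poll's top two parties, compared with that same pair's actual gap."""
    first, second = sorted(poll, key=poll.get, reverse=True)[:2]
    return abs((poll[first] - poll[second]) - (result[first] - result[second]))

# The worked example from factor 2: |40-38| + |35-36| + |20-19.5| = 3.5
poll = {"CPC": 40, "LPC": 35, "NDP": 20}
result = {"CPC": 38, "LPC": 36, "NDP": 19.5}
print(total_error(poll, result))    # 3.5
print(topline_error(poll, result))  # |5 - 2| = 3
```

Running the factor-3 example from the text (Conservatives up 3 in the poll, Liberals win by 2) through `topline_error` gives the five-point error described above.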

The 338Canada Rating takes into account all of the factors above. Additionally, points are given or subtracted depending on a firm's consistency and transparency.

As for consistency: A polling firm that publishes, say, Liberals +10, then Conservatives +2, then Liberals +9, and so on, while other polling firms show no such movement and the polling averages barely budge, suffers from a lack of consistency. Outlier polls usually do well in the news media because they catch the eye and generate traffic (don't blame the media here; blame the polling firm, or the readers who keep clicking). A direct example comes from the 2018 Ontario election: On May 8, Forum Research had the PC up by 7 points. Two weeks later, Forum had the NDP leading by 14 points, a 21-point swing, no less! Then, a week later, Forum had the PC back in front by 4 points. There were several polls from many different firms during that election, and no other firm showed such swings. See those polls here.  
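The Forum swings described above are easy to quantify: express each poll as a signed leader margin (positive for a PC lead, negative for an NDP lead) and take the successive differences. A minimal sketch, using only the three Forum margins quoted in the text:

```python
# The three Forum polls from the 2018 Ontario campaign, as signed
# PC-minus-NDP margins: PC +7, then NDP +14, then PC +4.
forum_margins = [7, -14, 4]

# Poll-to-poll swing: absolute change in the margin between
# consecutive polls from the same firm.
swings = [abs(b - a) for a, b in zip(forum_margins, forum_margins[1:])]
print(swings)  # [21, 18]
```

Comparing a firm's swings against the movement of the polling average over the same dates is what separates a genuine shift in public opinion from firm-specific noise.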

As for transparency: Does the firm publish a complete poll report? Are the field dates, polling method(s), and sample sizes readily available? Is sub-sample information provided, such as regional numbers, gender, and age distributions? This information has to be readily available in the poll report. And, in the case of election campaign polls, the data has to be published, in its entirety, before election day. Polling firms that fail to do so lose points in the 338Canada Rating.



There it is. Obviously, these grades will be updated after every election (provincial and federal) in this country.

Have a look for yourself. For data nerds like myself (and probably you too since you read the whole thing), it is quite a bit of fun to look at. 




Philippe J. Fournier is the creator of Qc125 and 338Canada. He teaches physics and astronomy at Cégep de Saint-Laurent in Montreal. For information or media requests, please write to info@Qc125.com.
