The 338Canada Mailbag, July 2020

This is the first 338Canada mailbag, where I answer various questions from readers about polls, politics, my work or whatever else they have in mind. Obviously, I cannot answer every single question, so please do not be offended if I didn't pick your question - there are just too many. Many of the questions came from this Facebook post, some come from Twitter, and others from Reddit.

Also, there are some questions that I simply would prefer not to answer for various reasons. Some of the questions I received would probably require a detailed master's or doctoral thesis, so I leave those to the experts - I won't pretend to know stuff I just don't know. Other questions came from a conspicuously partisan angle, and it would not be appropriate for me to get into those.

But some questions were fun and a bit more personal, so I figured why not!

Here we go.



Looking back over the last few years, can you provide a success percentage? Some kind of generalized gauge on how accurate the polls have been vs real results? Tracey, Ottawa, Ontario

Polling often gets a bad rep from its occasional misses, and naturally an outlier poll, a polling error or just plain whack numbers will often get more media coverage, much the same way the news doesn't report on the trains and planes that don't crash.

I have compiled polling from federal and provincial elections since 2014 and tabulated it all on this page (the 338 Canadian pollster ratings, explained here). Take a look at those tables and you will see the successes and misses of Canadian pollsters in recent general elections. It's not perfect, because it's statistics; it cannot be perfect. For the most part, it's pretty good.

Say polling firm X misses the mark in a given election. When X publishes a new poll afterwards, some readers will just reply: "Yeah, but how can you trust X? They blew it last time." It's a natural reaction, but not a rational one. Pollsters deal with statistical ensembles and are constantly aiming at ever-moving targets. They are supposed to miss the mark once in a while. Even the greatest polling firms have had their occasional misses. We have to look at the greater picture and consider a firm's complete track record.

Oh, and what happens to consistently bad pollsters? They go out of business. 

I hope this answers your question, Tracey.




What was the catalyst for starting the site? Were you a political junkie first and then a stats nerd second or the other way around? Andrew, Cambridge, Ontario 

How did you get your start reporting polls? Was it something you've always been attracted to, or was it more of a spur of the moment type of affair? Jon, Brantford, Ontario

Hi Jon and Andrew, please allow me to answer these two questions together. 

I have a degree in astrophysics, so dealing with tons of data, all with varying levels of uncertainty, and working on computer modeling to make sense of said data has been both part of my academic training and a passion of mine for 20+ years. Before becoming a prof at my college, I worked as a croupier at the Montreal Casino... and in my spare time I had designed programs to understand the odds of many card games (and of course realizing that one has to be a real sucker to ever play against the House). So yeah, I was a numbers nerd before writing about politics.

The catalyst really was the 2016 U.S. presidential election. I followed the numbers daily in what was at first a simple Excel spreadsheet that became more and more complex (at some point Excel didn't make sense anymore, so I had to switch to programming in Python). I was floored by the many pundits who had Clinton as a 95% to 98% favourite to win, not because of partisanship or even bad data, but because it just wasn't what the data was saying. Don't get me wrong, Clinton was the favourite, but - allow me a sports analogy here - she was the favourite the same way the Pittsburgh Penguins are favourites to win against the Montreal Canadiens in August. I had Clinton at around a 70% chance of winning. Therefore, Trump won about 30% of my simulations.

Roll a die: 1, 2, 3 or 4 = Clinton wins; 5 or 6 = Trump wins. Trump winning was not this unfathomable scenario many made it out to be, because American polls in 2016 were not as bad as many believe. They were awful in Wisconsin and bad in Michigan. Aside from those two (key) states, they were pretty accurate. The national polls were, on average, right on the mark. Had pollsters correctly measured voting intentions in Wisconsin and Michigan, my odds on election night would have been really close to 50-50.
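That die-roll intuition is easy to check with a tiny simulation. The sketch below is purely illustrative (it is not the actual 338Canada model, and the 70% figure is just the probability quoted above): give one candidate a fixed 70% win probability, run many simulated elections, and count how often the underdog still comes out on top.

```python
import random

def underdog_win_rate(p_favourite=0.70, n_sims=100_000, seed=42):
    """Simulate n_sims elections in which the favourite has a fixed
    win probability, and return the share won by the underdog."""
    rng = random.Random(seed)
    underdog_wins = sum(1 for _ in range(n_sims) if rng.random() >= p_favourite)
    return underdog_wins / n_sims

# With a 70% favourite, the underdog still wins roughly 3 times out of 10 -
# about the odds of rolling a 5 or a 6 on a single die.
print(round(underdog_win_rate(), 2))
```

Run it and the underdog's share lands right around 0.30: a 30% event is nothing like an unfathomable one.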

And so, later in November 2016, I bought the domain name Qc125.com and began analyzing numbers for the next Quebec election. Shortly after, since I wanted to cover other elections in Canada, I founded 338Canada. And here is my track record so far.




What justifies counting Innovative Research polls when they can't even provide a margin of error? Ryan, Cochrane, AB.

This is a very good question, Ryan, but one that is hard to answer without getting highly nerdy. Let me give it a shot.

The term "margin of error" is often misused in the media for semantic reasons: we confuse margin of error with uncertainty. In statistics, "margin of error" has a very specific meaning. In layman's terms, it's a measure of confidence in a result drawn from a random sampling process, and it can be calculated.

For instance, imagine you have a bag full of coloured coins - some are red, some are green, some are blue, but you don't know how many of which. 

You pick one. It's red. Can you say all the coins are red? Of course not. You pick ten: five are red, four are blue, one is green. Can you say half the coins in the bag are red? Well, not really, because it's just 10 coins. But what about 200? 500? 1,000? At some point, since you pick randomly, your sample will get closer and closer to the actual proportion of each colour, even if there are millions of coins.
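The coin-bag thought experiment can be reproduced in a few lines. This is a minimal sketch with made-up numbers (a bag where the true red share is 45%): draw ever-larger random samples and watch the observed share converge toward the truth.

```python
import random

def observed_red_share(true_red=0.45, n_draws=100, seed=1):
    """Draw n_draws coins at random from a bag where 45% are red,
    and return the red share observed in that sample."""
    rng = random.Random(seed)
    reds = sum(1 for _ in range(n_draws) if rng.random() < true_red)
    return reds / n_draws

# Bigger random samples land closer and closer to the true 45%:
for n in (10, 100, 1_000, 10_000, 100_000):
    print(n, observed_red_share(n_draws=n))
```

A sample of 10 can be way off; by the time you draw tens of thousands of coins, the sample share sits within a fraction of a point of the true proportion, and it never mattered how many millions of coins were in the bag.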

In our context, the "event" means reaching a Conservative/Liberal/NDP supporter. You ask hundreds or thousands of people, and if your sample is random (and representative of the whole population), the voting intentions you measure will be close to the actual numbers. However, for a poll to have a margin of error, the respondents have to come from a random sampling (or as random as possible). Telephone surveys are generally probabilistic because everybody with a phone has a roughly equal chance of being polled, although the sampling can never be completely random (some people don't have a phone, others never answer it, and other such factors).
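For a truly random sample like that, the textbook margin of error follows directly from the sample size. Here is the standard calculation at the usual 95% confidence level, taking the worst case of a 50/50 split (the proportion at which the margin is widest):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook 95% margin of error for a random sample of size n.
    It is widest when the measured proportion p is 50%."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of about 1,000 respondents gives the familiar
# "plus or minus 3.1 points, 19 times out of 20".
print(round(100 * margin_of_error(1_000), 1))  # 3.1
```

Note that shrinking the margin gets expensive fast: to cut it in half, you need four times as many respondents, which is one reason riding-level polls are so rare.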

But technology changes, and pollsters face new challenges in actually reaching respondents. To save effort and costs, many polling firms have moved online and run polls among a pool of respondents from their database (to which one has to sign up). From there, you poll people in your database and weight the results by age, region, gender and so on to build a representative sample of the electorate (this is where data from the Canadian census becomes so important). Since not every voter has a random chance of being polled, the margin of error - by its textbook definition - is simply not applicable.

That does NOT mean the poll has no uncertainty; it means the textbook margin of error does not apply to it.

So how do we know the uncertainty of an online poll? Simple: we don't. The only way to estimate it would be to run the poll again and again and look for patterns of variance. Nevertheless, as one major Canadian pollster once told me: "We don't know exactly how precise it is, but it is precise enough to get good results and sell our services to clients outside politics, which is good enough for me."

Besides, look at the polls from the last federal election here: five pollsters received an A or A+ in that election, including three that poll online (namely Léger, Ipsos, and Abacus Data). It can still work, but it takes a large database, and the better polling firms have one.




Who is going to win the Stanley Cup? Which Canadian team is closest to win? - Mary-Ann, Vancouver B.C.

Not the Habs, that's for sure.

I think both Alberta teams have the best shot among the Canadian teams. 

The Canucks? Oh, wouldn't that be a riot. *drum emoji*

Sorry, Mary-Ann. That's all I got. 




Will you be covering the upcoming provincial election in Saskatchewan? The election is on October 26. Thank you. Jamie Brandrick, Mayor of Borden, Saskatchewan.

Yes, Mr. Mayor, I will. However, I must warn you: there probably won't be too many polls to sink our teeth into. In 2016, only four polling firms were in the field during the Saskatchewan election. Fortunately, they all did pretty well. See the 2016 results here. The 338 Saskatchewan projection has been live for a month now; you can see the numbers here.




Have you come across polling outfits that refuse to reveal their methodology or have a dodgy methodology? Thanks. Mark, Nanaimo, British Columbia.

Yes I have, Mark. And I now ignore them. I only use polls from professional polling firms with a track record I can analyze. If a firm is not transparent enough, I will ignore it. Since I began the site, I have been in contact with many pollsters in Canada and have generally good relationships with them. If something is fishy or needs some explanation, I don't hesitate to reach out and ask questions. 

And in more general terms, as far as integrity goes, I have to be able to confront my students and show them I have done work that is honest and which follows the data. I teach my students the scientific method and we try to keep an objective eye on the numbers. If I ever deviate from that for clicks and likes, I would never be able to go back into a classroom.




Why do the polls vary significantly from each other? Debra, Didsbury, Alberta

Well, Debra... it would be much more suspicious if all polls were identical, would it not? The answer to your question is this: different methods, different samples, different means of reaching voters, but - above all - a constantly moving target. Polling really is a science, but it deals with humans, and humans are funny animals. They change their minds sometimes. And they are moody.

This is exactly why we need to analyze polls carefully and make sure pollsters remain honest. And we have to be careful: a Facebook click-in survey is not a poll. The local station's "Question of the day" is not a poll. Well, not a scientific poll, at least.

I met astrophysicist Hubert Reeves when I was in college, and I will always remember a line from his speech. Paraphrasing: "Stars, galaxies, black holes, elections and photons... all of which are infinitely complex, but all are also much simpler than the human mind."




What do you do to unwind? It appears you are always working! Robert, Mount Pearl, Newfoundland

I try to read more, like I used to before social media.

I am not a gamer per se, but I love playing open-world video games: Grand Theft Auto, Minecraft, and such. 

I used to play a lot of hockey (goalie), but in my late 30s my back and ankles began to hurt too much after games, so I sadly had to quit.

Thanks for asking, Robert.

* * *

Thank you, dear readers, for your questions. Sorry I could not get to them all. We'll do this again in August. Be safe and be kind.




Philippe J. Fournier is the creator of Qc125 and 338Canada. He teaches physics and astronomy at Cégep de Saint-Laurent in Montreal. For information or media requests, please write to info@Qc125.com.
