Ontario Election Polling: Don’t be afraid of innovation
On Tuesday, Abacus Data released results of a poll that used a different type of question to measure vote intention.
The traditional way of measuring vote intention involves asking respondents how they would vote if the election were held today. If a respondent says they are undecided, many pollsters will then ask whether they are leaning towards one particular party. Abacus has used this method before, and we were quite successful in forecasting the 2011 federal election.
The traditional methodology works well for forecasting election results close to election day, and the industry in Canada has a very good record of forecasting elections at both the federal and provincial levels. Sometimes firms are off, but that's the nature of survey research – there will always be one or two rogue polls.
Difference between end of campaign polls and mid-campaign polling
But one week into the election, voters are only starting to pay attention and think about their options. I wanted to try a new method of understanding the dynamics of vote intention. I think it is a better way of measuring how hard or soft political support may be and the chances of last-minute swings.
We all saw what happened in Quebec during the 2011 federal election with the NDP surge. No one was able to anticipate the rapid change in vote intention that occurred, and this has something to do with the questions we ask.
When pollsters release their final numbers before voting day, everyone can hold them to account for their accuracy. But how do we know that polls released between elections, or even during an election, are accurate? How do we know that the question that accurately measures the final vote also measures voting intentions accurately when the vote is still several weeks away?
Polls vary early in a campaign because of (1) different methodologies (not just the question but the data collection method) and (2) the fact that many voters are not solidly committed to one party. Our poll released on Tuesday clearly demonstrated that. In Ontario, the PCs have the most committed voters, but there is a large group of potential voters who are wavering among two or more parties. Is it right to force those respondents to make a choice? Is the choice reported in a poll a false choice?
The Abacus Data poll released on Tuesday looked at vote intentions and decision-making a little differently. It assumed that many voters do not come to a vote choice easily and waver among several options.
Is it a new methodology? Yes. But only when it comes to measuring political opinion. Pollsters have been using sliding scales for years to gauge purchasing intent and brand loyalty.
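To make the idea concrete, here is a minimal sketch of how a sliding-scale measure can separate committed voters from swing voters. This is an illustration only, not Abacus Data's actual method: the 0–10 ratings, the threshold values, and the three-way classification are all assumptions introduced for this example.

```python
# Hypothetical sliding-scale vote-intention measure.
# Instead of being forced into a single choice, each respondent rates
# the likelihood (0-10) of voting for each party. The thresholds below
# are illustrative assumptions, not real survey parameters.

def classify(ratings, committed_at=8, open_at=5):
    """Classify a respondent as 'committed', 'swing', or 'unengaged'."""
    strong = [p for p, r in ratings.items() if r >= committed_at]
    accessible = [p for p, r in ratings.items() if r >= open_at]
    if len(strong) == 1 and len(accessible) == 1:
        return "committed"   # one clear choice, no other party in reach
    if len(accessible) >= 2:
        return "swing"       # genuinely open to more than one party
    return "unengaged"       # no party rated highly at all

# A respondent who leans PC but would still consider the Liberals:
respondent = {"PC": 9, "Liberal": 6, "NDP": 2}
print(classify(respondent))  # -> swing
```

A forced single-choice question would record this respondent simply as a PC voter; the sliding scale shows that the Liberals remain accessible to them, which is exactly the softness in support that a traditional question hides.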
Is it flawed? No. Just because a few competitors don't like it or haven't seen it before doesn't make it "snake oil".
Our method is a different way of measuring vote intentions, but different does not mean incorrect or flawed. As a social scientist, I am trying to understand, as best I can, what people are thinking, what they are likely to do, and why they are likely to do it. I felt strongly that the traditional way of measuring vote intention during a campaign was not sufficient in a multi-party environment where people are less partisan, voting at lower rates, and likely to shift allegiances. When I see a potential improvement in the methods we use as social scientists to understand public opinion and behaviour, I will try it.
If some of our competitors aren't open to innovation, that's a real shame. New ideas should be celebrated, not attacked by established companies with a vested interest in the status quo.
Online Surveys vs. Telephone Surveys
Finally, I want to comment on the debate between online and telephone surveys.
The validity of online research has been questioned once again. But I will remind everyone that Angus Reid Public Opinion, a practice of Vision Critical, has a great track record of forecasting both federal and provincial elections. Angus Reid, the CEO of Vision Critical and one of the sages of Canadian polling, built his new firm around online technology and research.
After the 2011 federal election he said in a company release, “… online technology continues to provide accurate forecasts of electoral contests, shattering some myths and ill-informed assertions that should definitely be put to bed.”
Moreover, our own track record and that of Leger Marketing in the 2011 federal election, along with those of others around the world who use online survey methods, demonstrate that online research, when done correctly, can be just as accurate as traditional telephone polling, if not more so. Sure, there are challenges with online research, but they are no bigger than those faced by researchers using telephone surveys.
A recent study by academics Brian Schaffner and Stephen Ansolabehere, presented at the American Association for Public Opinion Research (AAPOR) conference, found that opt-in surveys are just as accurate as random telephone surveys.
Writing off online research entirely is off-base and ignores the progress the industry has made in making it an accurate tool for measuring public opinion.