The Quality of Public Opinion Research in Canada

Joan Bryden’s recent article on the polling industry in Canada and the media’s role in reporting poll results is very interesting.  She touches on a number of important issues related to the quality of polls, including declining response rates and the difficulty of obtaining a truly random sample.  Eric’s defence of Canadian polling and our industry’s past performance also adds context to this discussion.

As I used to tell students in my political science research methods class (most of whom wished they didn’t have to sit through those lectures), the assumptions of random probability sampling are becoming increasingly difficult to meet.  It is no longer true that every Canadian has an equal chance of being interviewed.

First, more and more people lack a landline.  I haven’t lived in a house with a landline in over a decade, and many Canadians in my generation who have left home are in the same situation.  While we can contact people on their cell phones, many screen their calls and only answer calls from people they know (I blame scams and telemarketing for this).  Second, for those who still have a landline, the prevalence of call display and voicemail means that call screening is not just a reality for cell-only households.

The industry responded to the increasing costs of conducting telephone surveys (in-person and mail-back surveys are now very rare) by developing new online tools for market and public opinion research.  Almost every major firm in Canada and in Britain is developing online research communities.  While there are different methods of recruiting members (and these distinctions are important), online research is becoming a popular and affordable alternative to traditional forms of research.

I realized this when I attended an academic conference in Scotland in 2009 and found that all of the survey research being presented had been done online.  Knowing my colleagues in the academy are often the last adopters of new technology and its biggest skeptics, I was surprised by how widely it was being used.

Certainly online research is not perfect, and we have to be mindful of its weaknesses.  But I think we have to be careful about dismissing research as “shoddy” simply because it does not meet all the assumptions of traditional survey research.

Those who participate in online research panels do so because they want to answer surveys.  But we can’t ignore the declining participation rates that telephone researchers face: those who answer surveys by telephone also choose to, and their numbers are shrinking.  Telephone surveys, live or automated, are not perfect either – and most (unless you’re spending tens of thousands of dollars on your surveys) fail to meet the assumptions of random probability sampling.

It is our responsibility as researchers to be transparent about our methodology, open about its shortcomings, and cautious with our conclusions, and to demand that media outlets covering our research report it appropriately.

So where do we go from here?

I’m very interested in the prospects of online research.  Canada is not the only country trending toward online survey methodologies.  In the May 2010 British election, four of the nine firms conducting political research for media outlets used online samples.  At the end of the day, most of the online research firms were as accurate as, if not more accurate than, those using telephone or in-person methods.

A quick look at the final polls released a day or two before the election finds that, overall, most of the polls were well within the standard margin of error used in random probability surveys.  All the surveys overreported Liberal Democrat support while underreporting Labour support.  Whether there was a last-minute swing away from the Liberal Democrats to Labour is not the point – what matters is the similarity in accuracy between the telephone and online survey methodologies.  Generally speaking, the polling was accurate despite the fact that the firms use different weighting and sampling methodologies.

General Election Results, May 6, 2010

Party              Actual   Telephone surveys,    Online surveys,
                            average (spread)      average (spread)
Conservative       37       36.5 (-0.5)           35.3 (-1.7)
Labour             30       28.3 (-1.7)           27.0 (-3.0)
Liberal Democrat   24       27.0 (+3.0)           27.5 (+3.5)
Other              10       8.3 (-1.7)            9.8 (-0.2)

Telephone firms: Ipsos MORI, Populus, ComRes, ICM
Online firms: YouGov, Harris, Angus Reid, Opinium
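
To make “the standard margin of error” concrete: for a simple random sample it is usually quoted at the 95% confidence level.  Here is a minimal sketch in Python that computes it for an assumed poll of 1,000 respondents (the article does not give the individual sample sizes, so that figure is for illustration only) and checks the spreads from the table above against it:

```python
import math

# 95% margin of error for a simple random sample:
# z * sqrt(p * (1 - p) / n), at its maximum when p = 0.5.
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# Assumed sample size of 1,000 respondents (illustration only).
moe = 100 * margin_of_error(1000)
print(f"95% margin of error: +/- {moe:.1f} points")  # about +/- 3.1

# Spreads from the table above (poll average minus actual result).
spreads = {
    "Conservative (phone)": -0.5, "Conservative (online)": -1.7,
    "Labour (phone)": -1.7,       "Labour (online)": -3.0,
    "Lib Dem (phone)": +3.0,      "Lib Dem (online)": +3.5,
    "Other (phone)": -1.7,        "Other (online)": -0.2,
}
for label, spread in spreads.items():
    verdict = "within" if abs(spread) <= moe else "outside"
    print(f"{label}: {spread:+.1f} points, {verdict} the margin")
```

By this rough yardstick, only the online Liberal Democrat average falls outside the margin, which is consistent with the point above: telephone and online methods performed comparably.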

In Canada, pollsters in 2006 were generally accurate.  I worked for Nik Nanos at what was then SES Research when we called the results to within a few decimal places.  In 2008, Angus Reid, using online methods, was rated the most accurate.

The difference between British and Canadian polling is illustrated by the fact that I could easily find detailed methodologies for all the British polls but had a harder time finding the same detail for Canadian ones.  The expectation in Britain is that firms provide information on how their polls were conducted.  One thing they do is report the difference between the weighted and unweighted counts.

Since sampling often produces “unrepresentative” results – for example, a larger proportion of older Canadians than exists in the population – firms weight the respondent sample so it matches the actual population (available from census data).  There are various methods of doing this – for national political surveys, Abacus weights by region, gender, age, official language, and previous vote.
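
To illustrate the mechanics, here is a minimal sketch of the simplest form of weighting – cell weighting on a single variable, with made-up numbers.  This is not Abacus’s actual procedure, which adjusts on several variables at once (typically via iterative raking), but the principle is the same:

```python
# Cell weighting on a single variable (age), with made-up shares.
sample_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}  # skews old
census_share = {"18-34": 0.27, "35-54": 0.37, "55+": 0.36}  # target

# Every respondent in a group gets weight = population share / sample share.
weights = {g: census_share[g] / sample_share[g] for g in sample_share}

for group, w in weights.items():
    print(f"{group}: weight = {w:.2f}")
# 18-34: 1.35  (under-represented, weighted up)
# 35-54: 1.06
# 55+:   0.80  (over-represented, weighted down)
```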

In all of our releases we report both the unweighted counts (the distribution of who we actually interviewed) and the weighted counts (for interpreting the ‘national’ numbers).  A quick look at some of the other firms that release results finds that we are the only ones who do.  This is a small step that may create some confusion, but at the end of the day it will increase confidence in the work we do and allow everyone to critically assess the methods we use.  Transparency cannot hurt in this case.
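
As a hypothetical illustration of why both sets of counts matter, here is how the weighted and unweighted distributions of the same vote-intention question can diverge once the weights from the sketch above are applied (the respondents and party labels are invented):

```python
from collections import Counter

# Hypothetical respondents: (vote intention, age group).
respondents = [("LPC", "55+"), ("CPC", "55+"), ("CPC", "35-54"),
               ("NDP", "18-34"), ("LPC", "35-54"), ("CPC", "55+")]

# Age-group weights from the sketch above.
weights = {"18-34": 1.35, "35-54": 1.06, "55+": 0.80}

unweighted = Counter(party for party, _ in respondents)
weighted = Counter()
for party, group in respondents:
    weighted[party] += weights[group]

n, total_w = len(respondents), sum(weighted.values())
for party in sorted(unweighted):
    print(f"{party}: unweighted {100 * unweighted[party] / n:.0f}%, "
          f"weighted {100 * weighted[party] / total_w:.0f}%")
```

Reporting both lets a reader see exactly how much the weighting moved the numbers, and in which direction.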

One final beef – firms that release results to the media for public consumption should make their reports available free of charge.  How are we to assess the methodology critically if we can’t even access the detailed report?

My long diatribe ends now.  I think it is a good thing that we are debating the value of research and how the media uses it.  Joan Bryden’s article should be welcomed by all and should serve as a reminder for everyone involved in public opinion research to be transparent with their methodologies, admit the possible biases and errors that enter into any survey (telephone or online), and be cautious about the conclusions we infer from the results.  Public opinion research is valuable, and while I don’t agree that it hurts our discourse or political system, it probably does affect behaviour, just as a bad physical from our doctor should change our eating or exercise habits.

Update:

Just to add some humour to the mix, here is a great clip from Yes, Prime Minister on survey research.