
Saturday, June 15, 2019

Polls apart

Recent opinion polls in New Zealand offered up wildly different results and presented completely different trends. There are many possible explanations for that, including timing, polling methods, sampling errors, question construction—or even pure chance. Whatever’s going on, there’s one thing that’s certain: They can’t both be right. Let the buyer beware—and be very sceptical.

The polls (graphic above) were conducted at roughly the same time, but the results are clearly very different. The Newshub Reid Research Poll was taken May 29-June 7 (sometimes they’ve reported May 30 as the start date), and the One News Colmar Brunton Poll was taken June 4-8. This points to the first possible explanation for the differences: timing.

The New Zealand Budget was introduced in Parliament on May 30, so the Newshub poll began before Budget Day (or on the day…) and finished a week after it. The One News poll began several days after Budget Day and, after four days, ended at about the same time as Newshub’s. In reporting the story, both news shows said their poll indicated what happened as a result of the Budget’s announcement. One News said there was no “Budget Bounce”, meaning the government didn’t go up in its poll; in fact, it claimed Labour went down. Newshub, on the other hand, claimed that Labour did get a “Budget Bounce” and went up in its poll. Which claim is true depends, obviously, on which poll is more accurate.

The similarity of the polling periods suggests that timing is probably not relevant, though the longer polling period for the Newshub poll might matter. Overall, this means other factors are probably at play. Another obvious source of the divergence would be how the polls were conducted.

In the past, Colmar Brunton rang only landlines, which was a source of complaints for years. This poll was conducted by randomly dialling both NZ landlines and NZ cellphones using probability sampling. Their sample was 1,002 people, which is a typical size. They describe their margin of error this way (read their PDF report on the poll for full details):
The maximum sampling error is approximately ±3.1%-points at the 95% confidence level. This is the sampling error for a result around 50%. Results higher and lower than 50% have a smaller sampling error. For example, results around 10% and 5% have sampling errors of approximately ±1.9%-points and ±1.4%-points, respectively, at the 95% confidence level.

These sampling errors assume a simple random sample of 1,000 eligible voters.
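
Those quoted figures follow from the standard margin-of-error formula for a proportion: z × √(p(1−p)/n), with z ≈ 1.96 at the 95% confidence level. Here’s a minimal sketch in Python that reproduces them (my own illustration, not anything either polling company published):

import math

def margin_of_error(p, n, z=1.96):
    # Sampling error for a result p from a simple random sample of n voters.
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # the simple random sample assumed in the quoted text
for p in (0.50, 0.10, 0.05):
    print(f"Result around {p:.0%}: ±{margin_of_error(p, n):.1%}")

# Prints approximately ±3.1%, ±1.9%, and ±1.4%, matching the quoted figures.
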
Reid Research hasn’t yet posted the current poll results on their website, so we don’t know for sure if their methodology has changed from previous polls. However, in their older polls, they sampled 1,000 people, 750 of whom were telephoned (they don’t disclose whether they were landline or mobile, but in the past they were landline only). The other 250 were from online polling.

Over the years, there have been many debates about whether the phone method matters and whether Reid’s use of online polling is valid. The fact that Colmar Brunton now rings mobile phones suggests, at the very least, that they listened to critics. But that doesn’t by itself make their results any more or less accurate, and Reid’s use of online polling doesn’t by itself make theirs any less or more accurate. On the other hand, we could tell whether the sampling is problematic if we knew the extent of the correction each company applied to its raw survey results.

The thing we can’t know, because no one will tell us, is how they correct for sampling errors, so we can’t judge whether their methods are sound. However, both use industry-standard methods, and, in any case, that, too, doesn’t necessarily mean that one poll is less accurate than the other.
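
For illustration only, here’s a minimal sketch of post-stratification weighting, the industry-standard kind of correction being described. Every number below is invented for the example; neither company discloses its actual targets or adjustments:

# Hypothetical shares of each age group among eligible voters (assumed).
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Hypothetical shares of each age group among the poll's respondents (assumed).
sample_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

# Each respondent's answers are weighted by population share / sample share,
# so under-represented groups count for more and over-represented ones for less.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, weight in weights.items():
    print(f"{group}: weight {weight:.2f}")

# 18-34 respondents get weight 1.50, 35-54 get 1.00, and 55+ get 0.78, pulling
# the weighted result back toward the population's age profile.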

We also can’t rely on history to help us work this out. In the companies’ final polls before the 2017 General Election, both were remarkably similar, and both were off. Both overstated support for National and the Greens, but of the two, Reid came closest to New Zealand First’s actual result, while Colmar Brunton greatly understated the party’s support (“greatly” because its figure was about 50% below the actual result). Both were pretty accurate on Labour’s eventual result. This suggests that their corrections for sampling errors may not be fine-tuned enough, and that they skewed conservative, toward National in particular.

So, history is no guide, and, at first glance, it looks like we can’t use the differences in these polls' methods or analysis to tell us much, either. This is reinforced by the fact that both polls had remarkably similar results on the possible legalisation of marijuana, which frankly seems odd: Why are they so hugely different on political polling, but so similar on the referendum? The answer could well be the question asked.

We don’t (yet?) know Newshub’s exact wording, but the question as reported was simply, “Should we legalise cannabis?”, which is a very broad question. Colmar Brunton, on the other hand, asked:
“A referendum on the legalisation of cannabis will be held at the 2020 General Election. Possible new laws would allow people aged 20 and over to purchase cannabis for recreational use. The laws would also control the sale and supply of cannabis. At this stage, do you think you will vote for cannabis to be legalised, or for cannabis to remain illegal?”
That question is reasonably fair: the exact wording of the referendum hasn’t been released yet, but the general structure of the proposal in the question aligns with how the media has described it. The poll results showed that people under 34 were more likely to support legalisation, and those over 55 were least likely. It also found that National Party supporters were more likely to oppose legalisation.

The Newshub poll produced a closer result than One News’ did, but found similar ideological variation. By itself, this doesn’t seem to support the claim that the One News sample skewed older and more conservative than it should have. But another question may reveal a bias.

The worst-worded question in the One News poll was this: “Would you consider voting for a party with Christian or conservative values at the 2020 General Election?” It should have been bloody obvious to everyone involved that those two things are not automatically the same, nor do they necessarily go together. National is a conservative party with conservative values. New Zealand First is a more moderate party with conservative values. Hell, even the Centre-Left Labour Party has some conservative values. But all of those parties are also firmly secular.

So far, only one rightwing “party” (technically, it has only been announced and doesn’t yet exist) is positioning itself as a “Christian” party, while the only one claiming “conservative values” is a “Christian” party in all but name. The question should have been two questions.

We know this question was deeply flawed because of the results. Those most likely to support such a party were: Pacific Island peoples (who are often very Christian, but not necessarily conservative), Asians (who are often conservative, but not usually Christian), and people 18-49? SERIOUSLY?! Every poll conducted, and all the social science data we have, shows that people in that age group are less religious and less conservative than older generations, but the poll says they’re among those most receptive to “a party with Christian or conservative values”? At the same time, voters 60-69 were reported as less likely to vote for either kind of party, and that, too, is at odds with research indicating that religious affiliation and conservatism go up with age. Are we really supposed to believe that young voters are suddenly more religious and more conservative than their parents and grandparents? That seems highly improbable.

Further polling with properly constructed questions might explain why this question’s results are so wildly divergent from everything we know about people’s religious and ideological compartmentalisation, but, on the face of it, it appears that the One News poll may, indeed, have skewed conservative, as it did to some extent in 2017. As it is, the results for that one question cast a shadow over the entire poll.

The bottom line is that we cannot yet explain why the two polls are so divergent on political questions, yet similar on the marijuana referendum. It seems unlikely that timing accounts for the difference. Polling methods, sampling errors, and the correction of them could account for the divergence, were it not for the similar results on the referendum. That leaves question construction, and that was certainly a problem with at least one question. So, at the moment, pure chance is as good an explanation as any.

These polls, as with most polls, must be taken with an entire mine of salt. The reliability of the whole polling industry has been questioned since so many companies seemed to get it so very wrong in the USA’s 2016 presidential election, even though most US polls actually did far better than popular belief holds. Because of that doubt, though, polling companies must redouble their efforts to make their polls as accurate as humanly possible. One or both of these polls fell short of that; we just can’t tell for sure which one, or whether it’s both.

Let the buyer beware—and be very sceptical.

2 comments:

rogerogreen said...

these US primary polls, and especially in smallish geographies such as Iowa, make me VERY suspicious

Arthur Schenck (AmeriNZ) said...

Absolutely. At their very best, polls are just a snapshot of one moment in time, but you're right: small polls in small areas are particularly susceptible to sampling errors, so they rely more heavily on mathematical models to "correct" the results and fix the errors and inherent biases.

I'm also highly suspicious of all the horserace polls for the Democratic nomination: It's still too far out, of course, but the nomination is also decided state by state, not nationally. It looks to me like pollsters are falling into the same traps that led to their mistakes in 2016.

Add to that the polls of voters who voted for the Republican nominee in 2016 but who are now disenchanted. I've seen some claim a Centrist would appeal to them, others that a Leftist would, and still others that anyone would as long as they're not "establishment", whatever that means. I don't think we have any idea what such voters really want, nor do we know how much they were motivated merely by their extreme dislike of Hillary, but we do at least know one thing: They, unlike the frothing fans of the current occupant of the White House, are at least theoretically winnable. We'll see.

I just hope the polls get better and more reliable.