
IN MY ORBIT: Polling Matters, but I Still Don't Put Much Faith in It


Hanging around my RedState colleagues has given me a better understanding of polling. I still think some of it is more theoretical than data-driven, but I now know why it is essential to dig deeper into the methodology, why the margin of error matters so much, why the phrasing of questions and the makeup of respondent samples are important, and, most importantly, who is conducting the poll and why. That all of these outlets have an agenda is a given. However, it is those who work hard to suppress voters on one side of the aisle or the other who muddy the waters and diminish the quality work done by outlets like Gallup, Rasmussen, and the Pew Research Center.

Which brings me to a “well-respected” policy institute and its polling surveys. The Public Policy Institute of California (PPIC) is quoted by all the California fish wraps, from the Sacramento Bee all the way to the San Diego Union-Tribune, and by many of the national outlets too.

PPIC claims to be

“a nonprofit, nonpartisan think tank. We provide data-driven, nonpartisan research and spark productive conversations to inspire policy solutions for California’s challenges.”

With the current state of our State, PPIC hasn’t inspired much, because our policies are what’s destroying us. Which makes me wonder why it is considered the gold standard for polling.

So, I dived into its recent Recall Survey.

All of the polling I’ve looked at now has Gavin Newsom defeating the Recall, by varying margins and with varying margins of error. PPIC has likely voters at 58 percent NO to 39 percent YES. However, its methodology is very curious.

From the Report:

“Findings in this report are based on a survey of 1,706 California adult residents, including 1,254 interviewed on cell phones and 452 interviewed on landline telephones. The sample included 510 respondents reached by calling back respondents who had previously completed an interview in PPIC Statewide Surveys in the last six months. Interviews took an average of 18 minutes to complete. Interviewing took place on weekend days and weekday nights from August 20–29, 2021.”

For such a populous state (39 million residents) with, per the last Secretary of State count, 22 million registered voters, I thought the sample size was a wee bit small. One of RedState’s resident polling gurus, Scott Hounsell, didn’t see it as a problem, but said he would have gone for a bigger sample size, like 2,500.
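For a rough sense of what sample size actually buys, here is a back-of-the-envelope sketch of the margin of error, assuming a simple random sample at a 95 percent confidence level. This is my own illustration, not PPIC’s math; real surveys apply weighting and design effects that widen these figures, and the subgroup size of 150 below is purely hypothetical.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a simple random sample.
    Uses the worst case p = 0.5; weighting/design effects would widen it."""
    return z * math.sqrt(p * (1 - p) / n)

for label, n in [("PPIC full sample", 1706),
                 ("Scott's preferred sample", 2500),
                 ("hypothetical small subgroup", 150)]:
    print(f"{label} (n={n}): +/- {margin_of_error(n) * 100:.1f} points")

# PPIC full sample (n=1706): +/- 2.4 points
# Scott's preferred sample (n=2500): +/- 2.0 points
# hypothetical small subgroup (n=150): +/- 8.0 points
```

The full-sample margins are not wildly different; the real squeeze comes once you slice the sample into likely voters or individual demographic groups, where the numbers drop into the hundreds and the margin balloons.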

The PPIC methodology also showed that 510 respondents were called back from prior survey interviews within the past six months, which, according to Scott, could cause problems.

“So, in order for data to remain reliable, it must all be collected the same way, analyzed the same way, and be applied to the different people the same way. They admit that the contacts for this poll were collected two different ways. That makes me question it,” Scott said.

“Also, I know of no real conservatives who would be opting-in to participate in a PPIC poll.”

The fact that the survey undersampled Republican registration might have been a giveaway on that point.

The most questionable area of the Survey was how they chose population demos:

“Results for African American and Asian American likely voters are combined with those of other racial/ethnic groups because sample sizes for African American and Asian American likely voters are too small for separate analysis. We compare the opinions of those who report they are registered Democrats, registered Republicans, and decline-to-state or independent voters; the results for those who say they are registered to vote in other parties are not large enough for separate analysis. We also analyze the responses of likely voters—so designated per their responses to survey questions about voter registration, previous election participation, intentions to vote this year, attention to election news, and current interest in politics.”

So, in order to get a workable sample, they lump similar ethnic groups together. That doesn’t sit well with me in terms of integrity. It also comes off as lazy. Couldn’t you resolve this by selecting a larger sample size and working harder to get respondents from those ethnic categories?

But the kicker of all this is that PPIC takes respondents’ word that they are 1) actually registered voters; and 2) registered in the party they claim.

Are we too hurried to check actual voter rolls? And since a sizable share of voters have opted out of both parties and designated themselves as NPP (No Party Preference), this could be a huge factor that changes the poll results. For a California think tank, you would think PPIC would consider this; it’s one of the things that makes this Recall such a wild card in the first place. They can gauge Democrats for sure, and maybe Republicans, but there is no way to gauge which direction someone who is NPP will take. Which is why these figures showing Hair Gel winning the Recall by wide margins may not be accurate.

While others consider PPIC a gold standard, from this cursory dive into their methods, I plan to take their data with a grain of salt, and maybe pair that with a margarita.

In case you missed it, our Managing Editor Jen Van Laar did a magnificent takedown of another so-called “trusted” poll source: the Trafalgar Group. It is a lovely read, but to sum up: Jen delved into their data and methodology and discovered that Trafalgar included a candidate who had dropped out of the race (Doug Ose), excluded a candidate who was consistent and prominent in the race (Kevin Kiley), and appeared to present only their own “top tier” of candidates, rather than the full slate of 46, in the question about who voters would choose to replace Newsom. It’s subtle things like this that can change not only the numbers, but a voter’s view of who is a viable candidate.

Jen’s investigation resulted in an apology to Kevin Kiley, and a bit of shade towards RedState.

Oh, well.

Even after the errors and debacles of 2016 and 2020, people still salivate over polls to give them answers. Answers that may not be the most accurate, or trustworthy.

So why do we continue to use them as a tool to decide on who to vote for? Or more pointedly, rely on their accuracy?

Polls are a necessary source of public opinion when an actual man-on-the street interview would take too much time. They also give an indication of the temperature of the water. Are voters lukewarm, warm, or boiling over about a particular candidate or issue? A poll can help assess that.

But mostly, polls serve the role of validating our choices, and for some, controlling their behavior. If we are going to vote against a candidate, but the polls say they are winning, then we decide to change our vote to a so-called “winning” candidate. And vice-versa.

Take this Recall election. Polls are showing Larry Elder as the leading Republican candidate, and that may be true. But what this data mostly served to do was energize the Democrat base to vote “NO” on the Recall, because otherwise they might be stuck with an evil Trumpian Republican as governor, and there was no way they could have that. His Hairfulness picked that up and ran with it, which is why the polling is now reflecting that push.

Personally, I view polling as a good gauge of the temperature of the electorate, but a bad gauge of who will actually win those votes.

But what do I know? I’m just a polling newbie.
