Your Definitive Guide to Understanding Polling (and Why Most Polls Are Garbage), by Scott Hounsell.
When reviewing polls in 2016, I came across bad poll after bad poll: oversampling problems, push-poll lines of questioning, and questions framed in a way that influences how someone answers. …
[An] example of a push poll question is “Do you support Donald Trump’s divisive rhetoric?” Maybe the respondent doesn’t believe Trump’s rhetoric is divisive. Maybe the respondent thinks the Democrats’ rhetoric is divisive. Either way, a respondent is less likely to answer in the affirmative to supporting divisive rhetoric, which skews the results.
Polls should also not be opt-in. I will only say this once: ANY OPT-IN POLL WILL DELIVER A FLAWED RESULT. Opt-in polling allows people to seek out a poll or even be paid to participate in it. The moment a poll loses the random and unbiased nature of its sample, its results are invalid, full stop.
How sampling “errors” push the polls left:
Another trait of a good poll is correct sampling. If the population we are attempting to poll is 40% Republican, 40% Democrat, and 20% Independent/DTS/NPP, then you will want to make sure your sample either reflects that breakdown or is weighted to match it. If my sample ends up being 50% Republican, 25% Democrat, and 25% Independent/DTS/NPP, then I have incorrectly polled the population and my results will almost certainly favor Republicans.
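The weighting fix described above can be sketched in a few lines. This is a minimal illustration using the hypothetical 50/25/25 sample and 40/40/20 population from the paragraph; the candidate-support numbers inside each party group are made up purely to show how the raw and weighted totals diverge.

```python
# Post-stratification weighting: a minimal sketch with hypothetical numbers.
population = {"R": 0.40, "D": 0.40, "I": 0.20}   # true party shares
sample     = {"R": 0.50, "D": 0.25, "I": 0.25}   # shares in the raw (skewed) poll

# Weight each group so its influence matches its population share.
weights = {g: population[g] / sample[g] for g in population}
# Republicans get down-weighted (0.8), Democrats up-weighted (1.6).

# Hypothetical support for the Republican candidate within each group:
support = {"R": 0.90, "D": 0.05, "I": 0.45}

raw      = sum(sample[g] * support[g] for g in sample)
weighted = sum(sample[g] * weights[g] * support[g] for g in sample)

print(f"raw: {raw:.1%}, weighted: {weighted:.1%}")
```

With these made-up numbers the raw poll shows 57.5% support, while the properly weighted figure is 47.0%: a 10.5-point swing created entirely by the skewed sample.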
An example is the Economist / YouGov poll published on September 1, 2020, which gave Joe Biden a 51% to 40% lead:
Of the 1,207 respondents to the question, 494 self-identified as Democrats and only 314 self-identified as Republicans (the remaining 399 were independent/third party).
Simply put, nearly 41% of the respondents polled self-identify as Democrats, while only 26% self-identify as Republicans. A nearly 15-point advantage for Dems built into the data. That number isn’t included in the methodology… I wonder why?
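The percentages above follow directly from the respondent counts quoted from the poll:

```python
# Partisan split in the Economist/YouGov sample (counts quoted above).
total, dems, reps = 1207, 494, 314

dem_share = dems / total   # ~40.9%
rep_share = reps / total   # ~26.0%
gap_points = (dem_share - rep_share) * 100

print(f"D: {dem_share:.1%}, R: {rep_share:.1%}, gap: {gap_points:.1f} points")
```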
In a similar vein, all the covid infection surveys run by the anti-lockdown sites suffered from crazy levels of bias. The people in the biased studies volunteered: the people who thought they might have caught it and wanted a free check came forward. Those biased studies showed lots of asymptomatic cases, which implied that huge percentages of the population had already been exposed to covid without symptoms, unbeknownst to mainstream scientists.
But the anti-lockdown sites were cherry picking the studies, choosing the highly biased ones to make the case for let-it-rip.
For example, the anti-lockdown people gave huge publicity to “the Stanford study,” which showed many times more people in Santa Clara had already caught covid than official statistics said were possible. No, just a biased sample (and a false positive problem with the tests). The anti-lockdown sites never mentioned the other Stanford study, by the same professor, that took an unbiased sample (namely all pro baseball players and their staffs in the US, i.e., people who neither volunteered nor were chosen by any method to do with covid) and found that only about half of infections were asymptomatic.
So why not use the methods political surveys use to find (relatively) unbiased samples, and find out how many of them had had covid? This was in fact done, for example in Austria. It confirmed what the mainstream studies already knew: asymptomatic infections are roughly half of total infections, and infections with only minor symptoms are about a quarter of all infections. Thus, cases with serious symptoms are about a quarter of all cases. And since serious cases almost all get tested and confirmed, the total number of infections is less than four times the number of confirmed cases.
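The bound at the end of that paragraph is simple arithmetic, sketched below. The confirmed-case count is a made-up placeholder; the key assumption (stated in the text) is that essentially all serious cases end up confirmed.

```python
# Back-of-envelope bound on total infections, per the fractions quoted above.
asymptomatic = 0.50   # share of infections with no symptoms
minor        = 0.25   # share with only minor symptoms
serious      = 1 - asymptomatic - minor   # remaining share: 0.25

confirmed = 100_000   # hypothetical confirmed-case count

# If confirmed cases are at least the serious ones (>= 25% of infections),
# then total infections can be at most confirmed / serious.
max_infections = confirmed / serious

print(f"total infections < {max_infections:,.0f} ({max_infections / confirmed:.0f}x confirmed)")
```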
You don’t get to choose your evidence in science. That’s a practice more common in salesmanship, politics, and fraud.