March 19, 2024

I, Science

The science magazine of Imperial College

Polling is one of the most common ways to gauge the feelings or opinions of a large group of people, and it has been used for centuries. According to Charles Seife's book Proofiness, the first poll, or at least the first political poll, "was conducted by the Pennsylvanian, a Harrisburg newspaper, in July 1824" (93). Polling has only grown more frequent since then, as marketers, journalists, and politicians all use polls for their own ends. Seife argues that polling is easily manipulated for various purposes. For journalists, a poll is a ready-made event: they can simply report on current polls when they are running low on other things to cover. Seife classifies journalistic polling as a "pseudoevent" because it is "planned rather than spontaneous. It occurs at a convenient time and at an accessible location" (95), which is perfect for any journalist. This is why you see so many polling stories in the media, particularly around politics and controversial topics.

Why is writing about polling in newspapers and tabloids a problem?  

The problem with polling in journalism is that these pseudoevents are written up like real events, and few readers grasp their full implications. When writing about polls, "reporters bend real events to a convenient timetable, completely freeing them from the less than ideal timing of bona fide news events" (97). While the goal of a poll might be to gauge the public's opinion on something, a journalist can easily frame the same poll in a different context to give a story a particular perspective. Few readers can spot a slant around polling, and those who do usually catch it in how the results are framed. Keep this in mind the next time you read about a current poll.

How polls work:

But how does polling work, exactly? Polling is the process of asking individuals their opinions on a topic and then running statistical analyses on the responses. Much modern polling happens over the phone, though online polls are common too. Every poll comes with a margin of error, which Seife describes as "a fundamental limitation to the precision of a poll, an unavoidable statistical error that faces pollsters when they use a sample of people to intuit the beliefs of an entire population" (102). Say you want to poll all of the engineering students at Imperial College about Brexit. It is very hard to get every single engineering student's opinion, so instead you ask only 20 students and extrapolate their answers to the whole population of engineering students. Because you didn't poll everyone, your estimate carries a margin of error. That margin matters whenever a poll is publicized, because the press often misuses it: journalists have cited the margin of error as a reason to trust a poll's results without ever discussing its accuracy, and sometimes the margin of error isn't reported at all, misleading readers into believing the poll is more precise than it actually is.
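To see how sample size drives this, here is a minimal sketch in Python of the standard 95% margin-of-error formula for a sampled proportion. The numbers are invented for illustration (they are not from Seife's book): suppose 12 of the 20 students polled oppose Brexit.

    import math

    def margin_of_error(p_hat, n, z=1.96):
        # 95% margin of error for a proportion estimated from n respondents.
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    # Hypothetical result: 12 of 20 engineering students oppose Brexit.
    n = 20
    p_hat = 12 / n

    print(f"Estimate: {p_hat:.0%} +/- {margin_of_error(p_hat, n):.0%}")
    # -> Estimate: 60% +/- 21%

With only 20 respondents the margin is roughly 21 percentage points either way, so the poll says almost nothing precise about the whole cohort. And because the margin shrinks with the square root of the sample size, quadrupling the sample only halves it.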

Besides the Margin of Error, there is Systematic Error:

The other error that commonly goes underreported in publicized polls is systematic error: error that comes from choosing your sample in a way that skews or biases the results. For example, suppose you polled only the students eating lunch near a library about buying school lunches. That would bias your sample, since students also eat at restaurants and cafeterias all over campus, not just near the library, and you would also catch visitors who aren't students at all.
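Here is a minimal simulation sketch of that kind of convenience-sample bias. The subgroup sizes and lunch-buying rates are made up purely for illustration:

    import random

    random.seed(0)

    # Invented campus: 2,000 students near the library buy school lunch 30% of
    # the time; 8,000 students elsewhere on campus buy it 60% of the time.
    population = [("library", random.random() < 0.30) for _ in range(2000)]
    population += [("elsewhere", random.random() < 0.60) for _ in range(8000)]

    true_rate = sum(buys for _, buys in population) / len(population)

    # Convenience sample: poll only the people found near the library.
    library_only = [buys for where, buys in population if where == "library"]
    biased_rate = sum(library_only) / len(library_only)

    print(f"Whole-campus rate: {true_rate:.0%}")    # about 54%
    print(f"Library-only poll: {biased_rate:.0%}")  # about 30%, systematically low

Note that polling more people near the library would not help: unlike the margin of error, systematic error does not shrink as the sample grows.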

Strong opinions give skewed results:

Another source of systematic error is that polls are voluntary, which biases their results: "it's almost always the case that people with strong opinions tend to respond much more often than those who don't have strong opinions. This produces a bias; the poll disproportionately reflects extreme opinions at the expense of moderate ones" (108). Systematic error is rarely reported alongside public polls, but it is something to be aware of. Good starting questions for breaking down a public poll's accuracy include: how did the pollsters collect their data, who was polled, and how long did the poll run?
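A quick sketch of that voluntary-response effect, with an invented opinion scale and made-up response rates:

    import random

    random.seed(1)

    # Invented population: opinions run from -2 (strongly against) to +2
    # (strongly for), with moderates the most common group.
    opinions = random.choices([-2, -1, 0, 1, 2],
                              weights=[10, 20, 40, 20, 10], k=10000)

    # Assumption: the stronger the opinion, the likelier a person is to respond.
    response_prob = {0: 0.05, 1: 0.20, 2: 0.60}
    respondents = [o for o in opinions if random.random() < response_prob[abs(o)]]

    def share_extreme(sample):
        return sum(abs(o) == 2 for o in sample) / len(sample)

    print(f"Extreme views in the population: {share_extreme(opinions):.0%}")    # ~20%
    print(f"Extreme views among respondents: {share_extreme(respondents):.0%}") # ~55%

Under these assumed response rates, a group holding about a fifth of the population's views supplies over half of the poll's answers.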

It's also good practice to look at the questions the pollsters actually asked, if that information is available. Polls tend to get different results when questions are worded differently, and even a single phrase can shift the answers: saying "collateral damage" when discussing the effects of a war in the Middle East, for instance, instead of "dead civilians." It is all in how you word the question.

Is polling worth saving?

So should we get rid of polling altogether? Polling does reveal the extremes of opinion on a topic and can help uncover what the public thinks. Until a better method comes along, the phones will keep ringing and the pollsters will keep asking their questions. If you're ever approached by one, try asking them the questions listed above and see whether they even know how accurate their poll is. Chances are, they won't.

Sources: Seife, C. (2010). Proofiness: The Dark Arts of Mathematical Deception. Viking Press.

Kenna Castleberry recently completed her M.Sc. in Science Communication at Imperial College.