In 1954, Darrell Huff called out the dangers of misrepresenting statistical data in his book How to Lie with Statistics. I don’t know how big a problem bad survey data and misinformation were in the 1950s, but fast forward to 2019 and social media and 24-hour news cycles have created an explosion of content that purports to be factual. Chances are that some of it is not, and that is what I want to talk about.
As a professional market researcher, I probably spend more time than most reading the small print on market research and public opinion surveys. In doing so, I’ve come across several instances where survey data was misinterpreted, misapplied, or just plain wrong. The reasons vary: sometimes they are honest errors, and other times the research was intentionally designed to mislead.
To the trained eye, some of these discrepancies are easy to spot, but others are not. So here are a few things I look for when reading polls and market research results that help me identify faulty research.
A common problem with survey results is that respondents often answer a different question from the one the survey designer thought they were asking. This can happen because the respondent either didn’t understand the question or because their preferred response was not among the options in a closed-ended list. The Brexit referendum may be one of the most consequential examples of this issue. It offered a binary choice, Remain or Leave, with no way to capture more nuanced responses. Fifty-two percent of those who voted chose Leave, but many voters later said they chose Leave to air their dissatisfaction with the UK’s governance and would have chosen something else had there been options that addressed their concerns. In fact, new research from YouGov suggests that only 33% of the British electorate prefers a hard Leave option.
The most basic question to ask when looking at survey research results is: Who was included in the survey? Followed closely by: Are they representative of the population we’re interested in? Obtaining a representative sample of U.S. consumers or voters is becoming increasingly difficult. Landlines were once the gold standard for fielding surveys, but between the popularization of answering machines in the 1980s and the subsequent displacement of landlines by mobile phones, it is now effectively impossible to obtain a representative sample of the U.S. population over the phone. Online methodologies have stepped in to fill the void, but they present their own challenges.
While reaching individuals has become more difficult, the U.S. population has become more diverse. The most common problem we see with surveys that purport to be nationally representative is that they rely on convenience samples made up of easy-to-reach people. For example, we see a lot of research on the U.S. Hispanic population that neglects the roughly 30% who do not speak, let alone read or write, English well enough to answer a survey. Leaving out hard-to-reach segments of the population can skew the results enough to make them worthless, as the arithmetic below shows.
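To make that concrete, here is a minimal sketch in Python of how excluding a segment biases an estimate. The segment shares and preference rates are hypothetical numbers chosen only to illustrate the weighted-average arithmetic, not findings from any actual study.

```python
# A minimal sketch (hypothetical numbers) of how excluding a
# hard-to-reach segment biases a survey estimate.

english_share, spanish_share = 0.70, 0.30  # shares of the target population
english_rate, spanish_rate = 0.40, 0.60    # hypothetical true rates of some behavior

# The true population rate is a weighted average across both segments.
true_rate = english_share * english_rate + spanish_share * spanish_rate

# A convenience sample that reaches only English speakers estimates the
# English-speaker rate and silently reports it as the population rate.
convenience_estimate = english_rate

print(f"True population rate: {true_rate:.0%}")              # 46%
print(f"Convenience estimate: {convenience_estimate:.0%}")   # 40%
print(f"Bias from exclusion:  {convenience_estimate - true_rate:+.0%}")  # -6 points
```

The larger the excluded segment and the more it differs from everyone else, the worse the bias, and adding more respondents from the easy-to-reach group does nothing to fix it.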
Targeting issues also come up in polling. Determining who is more popular and determining who is likely to win an election are two different questions. During the presidential election cycle, we’re bombarded with polls showing support for or disapproval of the candidates. Most of those are public opinion polls that try to measure the popular vote. The popular vote, however, does not elect presidents; the electoral college does. In fact, a U.S. president can in theory be elected with as little as 23% of the popular vote. Therefore, any political poll that does not take the rules of the election into account is merely entertainment and has no predictive value.
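The mechanics are easy to see with a toy example. The sketch below uses three entirely made-up states, not real data, and the winner-take-all logic of the electoral college: win the “cheapest” electoral votes by a single ballot and concede everything else. Analyses that run the same greedy argument over the real state-by-state numbers are where figures like 23% come from.

```python
# A toy illustration of why the popular vote and the electoral outcome
# can diverge under winner-take-all rules. The three states below are
# hypothetical; only the mechanism matters.

# (name, voters, electoral_votes) for a made-up three-state country
states = [("Smallia", 100, 3), ("Midland", 200, 4), ("Bigton", 700, 5)]

total_ev = sum(ev for _, _, ev in states)
needed = total_ev // 2 + 1  # majority of electoral votes: 7 of 12

# Greedy strategy: take the states with the most electoral votes per
# voter first, winning each by a bare majority.
states.sort(key=lambda s: s[2] / s[1], reverse=True)

won_ev, popular_votes = 0, 0
for name, voters, ev in states:
    if won_ev < needed:
        won_ev += ev
        popular_votes += voters // 2 + 1  # one vote more than half
    # and zero votes in the states you concede

total_voters = sum(voters for _, voters, _ in states)
print(f"Electoral votes won: {won_ev} of {total_ev}")        # 7 of 12
print(f"Popular vote share:  {popular_votes / total_voters:.1%}")  # 15.2%
```

The candidate here carries the electoral college with barely 15% of the popular vote, which is why a poll of national popularity, however accurate, can still point to the wrong winner.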
The advent of DIY survey software has produced a flood of survey data to consume. DIY is great for low-stakes decisions but presents problems when the results will be used to make important ones. Survey design is a science with decades of academic research supporting it and scholarly journals devoted to its advancement. Question design matters, and poorly designed questions are among the most common issues we see in DIY surveys.
Finally, it’s important to look at market research and polling results holistically and ask yourself whether the results are internally consistent. For example, if a survey says that only 10% of respondents would consider purchasing an electric vehicle, but that 30% of everyone surveyed would purchase a Tesla, which only makes electric cars, then the two figures cannot both be true. Internally inconsistent survey results are usually caused by poor questionnaire design, and if one inconsistency exists in the results, the rest of the data becomes suspect.
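Checks like this can even be automated. Here is a minimal sketch using the hypothetical percentages from the example above: any question whose answers are logically a subset of another question’s (“would purchase a Tesla” implies “would consider an EV”) can never score higher than its superset.

```python
# A minimal internal-consistency check: a subset question can never
# exceed its superset. The percentages are the hypothetical ones from
# the example above.

results = {
    "would consider purchasing an EV": 0.10,
    "would purchase a Tesla": 0.30,  # Tesla only makes electric cars
}

# (subset, superset) pairs implied by the logic of the questionnaire
subset_rules = [
    ("would purchase a Tesla", "would consider purchasing an EV"),
]

for subset, superset in subset_rules:
    if results[subset] > results[superset]:
        print(f"Inconsistent: {results[subset]:.0%} said they {subset}, "
              f"but only {results[superset]:.0%} {superset}.")
```

Writing out the subset relationships implied by a questionnaire before fielding it is also a cheap way to catch design problems early.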
Several online news aggregation sites now have fact-check technology that lets us know whether news stories on the web are true. We don’t yet have that for market research surveys, but with a little attention to the fine print, we can decrease the likelihood of being lied to by statistics.
Looking to connect your company with multicultural consumers and future-proof your business?
Contact us today and learn more about our custom market research solutions.