
How To Recognize Bad Research

July 5, 2019 Author: Roy Eduardo Kokoyachuk

In 1954, Darrell Huff called out the dangers of misrepresenting statistical data in his book How to Lie With Statistics. I don’t know how big a problem bad survey data and misinformation were in the 1950s, but fast forward to 2019 and social media and 24-hour news cycles have created an explosion of content that purports to be factual. Chances are, some percentage of it is not, and that is what I want to talk about.

As a professional market researcher, I probably spend more time reading the small print on market research and public opinion surveys than most. In so doing, I’ve come across several instances where survey data is misinterpreted, misapplied or just plain wrong. The reasons for this vary. Sometimes they are honest errors, and other times the data was intentionally designed to mislead.

Some of these discrepancies are easy for a trained eye to spot; others are not. So, here are a few things I look for when reading polls and market research results to help me identify faulty research.

Misleading Questions

A common problem with survey results is that respondents often answer a different question than the one the survey designer thought they were asking. This can happen because the respondent didn’t understand the question or because their preferred response was not among the options in a closed-ended list. The Brexit referendum may be one of the most consequential examples of this issue. It offered a binary choice, Remain or Leave, with no way to capture more nuanced responses. Fifty-two percent of voters chose Leave, but many later stated that they did so to air their dissatisfaction with the UK’s governance and would have chosen something else had there been options that addressed their concerns. In fact, new research from YouGov suggests that only 33% of the British electorate prefers a hard Leave option.

Poor Targeting

The most basic questions to ask when looking at survey research results are Who was included in the survey? and Are they representative of the population we’re interested in? Obtaining a representative sample of U.S. consumers or voters is becoming increasingly difficult. Landlines were once the gold standard for fielding surveys, but between the popularization of answering machines in the 1980s and the decline of landlines as mobile phones took over, it is now effectively impossible to obtain a representative sample of the U.S. population over the phone. Online methodologies have stepped in to fill the void, but they present their own challenges.

While reaching individuals has become more difficult, the U.S. population has become more diverse. The most common problem we see with surveys that purport to be nationally representative is that they rely on convenience samples made up of easy-to-reach people. For example, we see lots of research on the U.S. Hispanic population that neglects to include the roughly 30% who do not speak English well enough to answer the survey, let alone read or write it. Neglecting to include hard-to-reach segments of the population can often skew the results enough to make them worthless.
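To make the sampling math concrete, here is a minimal Python sketch, using entirely invented incidence rates, of how an English-only convenience sample can misstate a figure for the full population:

```python
import random

random.seed(42)

# Invented illustration: 30% of the target population answers only in
# Spanish, and that subgroup holds a different opinion. All rates here
# are made up for demonstration.
population = (
    [{"english": True,  "would_buy": random.random() < 0.40} for _ in range(7000)]
    + [{"english": False, "would_buy": random.random() < 0.65} for _ in range(3000)]
)

def share(people):
    """Fraction of a group answering 'yes'."""
    return sum(p["would_buy"] for p in people) / len(people)

convenience = [p for p in population if p["english"]]  # English-only panel

print(f"True incidence:        {share(population):.0%}")   # ~48%
print(f"English-only estimate: {share(convenience):.0%}")  # ~40%
```

The convenience sample isn’t wrong about the people it reached; it is simply silent about the 30% it never could reach, and the headline number moves accordingly.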

Targeting issues also come up in polling. Determining who is more popular and determining who is likely to win an election are two different questions. During the presidential election cycle, we’re bombarded with polls showing support or disapproval of the candidates. Most of those are public opinion polls that try to measure the popular vote. The popular vote, however, does not elect presidents; the Electoral College does. In fact, a U.S. president can be elected with as little as 23% of the popular vote. Therefore, any political poll that does not take into account the rules of an election is merely entertainment and has no predictive value.
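To see why, consider a toy country rather than real state data (all electoral-vote and turnout figures below are invented): a candidate wins the barest majority in just enough small states and concedes every vote elsewhere.

```python
# (electoral_votes, ballots_cast) for an imaginary five-state country
states = [(3, 100), (3, 100), (4, 150), (5, 200), (10, 450)]

total_ev = sum(ev for ev, _ in states)     # 25 electoral votes in play
total_votes = sum(v for _, v in states)    # 1,000 ballots cast
needed = total_ev // 2 + 1                 # 13 needed to win

ev_won, my_votes = 0, 0
for ev, ballots in sorted(states):         # take the smallest states first
    if ev_won < needed:
        ev_won += ev
        my_votes += ballots // 2 + 1       # bare 50%-plus-one in each win
    # and zero votes in every state we concede

print(f"Electoral votes: {ev_won}/{total_ev}")        # 15/25 -- a win
print(f"Popular vote:    {my_votes/total_votes:.0%}")  # ~28%
```

With real state populations, the same strategy drives the winning share down to roughly the 23% figure cited above, which is exactly why a poll of the national popular vote can miss the actual outcome.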

Poor Survey Design

The advent of DIY survey software has produced a flood of survey data to consume. DIY is great for low-stakes decisions but presents problems when the results will be used to make important ones. Survey design is a science, with decades of academic research supporting it and scholarly journals devoted to its advancement. Question design matters. Here are some of the most common issues we see with survey design:

  • Scales: The options respondents are given to choose from when answering a question are called scales. A scale with fewer options will yield a different result than one with more, for the same reason that Brexit voters whose views weren’t on the ballot chose Leave. Recently, two surveys measuring Democrats’ interest in the upcoming election yielded wildly different results, 35% vs. 74%, simply because one used a 4-point scale and the other a 5-point scale; a simulation sketch after this list shows how that can happen.
  • Framing: How one asks a question matters. A classic framing example is that more people rate ground beef favorably when it’s described as 80% lean rather than 20% fat. Bad actors use framing to create push polls that yield the research results they want. When possible, check how a question was worded before accepting its outcome.
  • Social Desirability Biases: We all want others to have a favorable opinion of us. Our ideal self eats right, exercises regularly, reads important books and watches mind-expanding documentaries. Our true self eats too much chocolate, sits on the couch for hours, reads gossip columns and watches reality television. It’s important to ask oneself if the reported behavior is socially desirable or undesirable. Professionally designed surveys mitigate this by careful wording that reduces the perceived risk of choosing undesirable responses and lessens the pressure to select socially desirable answers.
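To illustrate the scales point above, here is a small Python simulation. The underlying attitudes (a made-up uniform distribution) are identical in both cases; only the number of scale points and the way the "interested" headline is cut differ:

```python
import random

random.seed(0)

# 10,000 respondents with a latent interest level between 0 and 1
latent = [random.random() for _ in range(10_000)]

def bucket(x, n_points):
    """Map a latent attitude onto an n-point scale (1..n_points)."""
    return min(int(x * n_points) + 1, n_points)

four_pt = [bucket(x, 4) for x in latent]
five_pt = [bucket(x, 5) for x in latent]

# Two common ways of reporting "interested":
top_box_of_4 = sum(r == 4 for r in four_pt) / len(four_pt)  # ~25%
top_two_of_5 = sum(r >= 4 for r in five_pt) / len(five_pt)  # ~40%

print(f"4-point scale, top box:       {top_box_of_4:.0%}")
print(f"5-point scale, top two boxes: {top_two_of_5:.0%}")
```

Same respondents, same attitudes, yet the headline moves by 15 points purely through questionnaire mechanics; a gap like the 35% vs. 74% one above should therefore surprise no one.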

Finally, it’s important to look at market research and polling results holistically and ask yourself whether they are internally consistent. For example, if a survey says that only 10% of respondents would consider purchasing an electric vehicle but that 30% of everyone surveyed would purchase a Tesla, which only makes electric cars, then the two figures cannot both be true: Tesla buyers are, by definition, a subset of electric-vehicle considerers. Internally inconsistent survey results are usually caused by poor questionnaire design, and if one inconsistency exists in the results, the rest of the data becomes suspect.
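That subset logic is mechanical enough to automate. Here is a trivial Python sketch, with hypothetical labels, of the kind of consistency check worth running on any set of toplines:

```python
def check_subset_consistency(subset_share, superset_share,
                             subset="would buy a Tesla",
                             superset="would consider any EV"):
    """A measure of a subgroup can never exceed the measure of the
    group that contains it. Prints a warning and returns False if the
    reported shares violate that rule."""
    if subset_share > superset_share:
        print(f"Inconsistent: {subset_share:.0%} {subset}, "
              f"but only {superset_share:.0%} {superset}.")
        return False
    return True

check_subset_consistency(0.30, 0.10)  # the example from the text is flagged
```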

Several online news aggregation sites now have fact-check technology that lets us know whether news stories on the web are true. We don’t yet have that for market research surveys, but with a little attention to the fine print, we can decrease the likelihood of being lied to by statistics.


Looking to connect your company with multicultural consumers and future-proof your business?
Contact us today and learn more about our custom market research solutions.