Frank Ovaitt

At an Institute for Public Relations Board meeting earlier this month, Trustee Maril MacDonald suggested that IPR might provide guidance to practitioners on how to identify bad research. That could be a mission in itself for IPR. But I decided to start by asking our Research Fellows what they would advise. Here is the wisdom that came back to me, just for the asking.

Don W. Stacks, Ph.D., Professor of Public Relations, School of Communication, University of Miami: “I’d suggest the following for starters:

  1. Watch for rounded numbers. Research results seldom come out to exactly 25, 75, etc.
  2. If a sample is stated as a ratio and the actual frequencies are not given, I’d be very suspicious of it (i.e., 9 out of 10 or 9:1 when you don’t know the actual frequencies).
  3. If there is no mention of how reliable the data are, be wary (here I’d look for anything beyond simple correlation and suggest that the practitioner at least know the names of several reliability statistics).
  4. Don’t trust any research that employed a methodology other than experimental if the researcher tries to make causal statements about results.
  5. If it isn’t well written, then it is probably not well thought out and should be taken with a grain of salt.”
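
Stacks’ third point names reliability statistics. As a minimal illustration (not part of his advice), the sketch below computes Cronbach’s alpha, one commonly reported reliability coefficient, for a small set of invented survey responses:

```python
# A minimal sketch (not from Stacks): Cronbach's alpha, one commonly reported
# reliability statistic. The survey scores below are invented for illustration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: rows are respondents, columns are survey items on the same scale."""
    k = items.shape[1]                                # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()    # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)         # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Five hypothetical respondents answering four Likert-style items
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")  # about 0.94 for this made-up data
```

Values near 1 suggest the items measure the same underlying construct consistently; a practitioner who at least recognizes statistics like this can ask sharper questions of a research vendor.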

David Michaelson, Ph.D., managing director, Teneo Strategy: “The most important advice I can give about spotting bad research is to assess if the questions are self-serving and biased. This starts with the basic principle of ‘garbage in/garbage out.’ If the questions are not valid or reliable and are designed to bias results, the research is unreliable from the start. Another way to spot bad research is if the supporting documentation is not available. Can you review the questionnaire? Is the unanalyzed data available? Is the research method clearly described? This gets at the core credibility of the work through transparency. Much of this is discussed in my paper that explores nine specific best practices that will ensure quality research. It is available from the following link: http://www.prsa.org/Intelligence/PRJournal/Vol1/ .”

Donald K. Wright, Ph.D., Harold Burson Professor and Chair in Public Relations, College of Communication, Boston University: “Methodological approach is a huge problem in both academic and practitioner-generated research. Unfortunately, as PR education grows, universities are hiring faculty who do not necessarily understand research methods. Lately I’ve seen research that sounds exciting until you get to the methods section and notice the author(s) surveyed their students and/or conducted interviews with 23 people and then tried to generalize their results to a larger population such as all PR practitioners in the country. This problem is going to get worse before it gets better because potential research subjects are being bombarded with participation requests and some researchers are struggling to find qualified subjects.”

David M. Dozier, Ph.D., Professor and Coordinator, Public Relations Emphasis, School of Journalism & Media Studies, San Diego State University: “This is especially relevant to survey research. Sample size is important but not as important as representativeness of the sample. How were respondents selected? Often, organizations use convenience samples (sometimes called reliance on available subjects) and then mislabel such samples as ‘random.’ Probability sampling (such as random, stratified random, and systematic sampling) is required to make statistical inferences from samples to populations.”
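
To make Dozier’s distinction concrete, here is a minimal sketch with an invented population: a convenience sample takes whoever is easiest to reach, while a simple random sample gives every member a known, equal chance of selection, which is what justifies quoting a margin of error.

```python
# A minimal sketch (not from Dozier) contrasting a convenience sample with a
# simple random sample, using an invented population of 10,000 member IDs.
import math
import random

random.seed(42)
population = list(range(10_000))

# Convenience sample: whoever is easiest to reach (here, simply the first 200 IDs).
convenience_sample = population[:200]          # no basis for inference to the population

# Simple random sample: every member has an equal, known chance of selection.
random_sample = random.sample(population, 200)

# A 95% margin of error for a proportion is meaningful only for the probability sample.
n = len(random_sample)
p = 0.5                                        # most conservative assumed proportion
margin_of_error = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"Approximate 95% margin of error at n={n}: +/-{margin_of_error:.1%}")  # about +/-6.9%
```

The roughly ±6.9% figure applies only to the random sample; no comparable statement can be made for the convenience sample, however large it is.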

Frank Ovaitt is President and CEO of the Institute for Public Relations.


5 thoughts on “Spotting Bad Research”

  1. This is a very appropriate topic; comments are thoughtful. As someone who “lives” in both the PR world (APR) and the market research world (Professional Researcher Certification), I value this topic.

    Over the years, I’ve worked with PR practitioners (including some who hold the highest honors awarded in the profession) who say they don’t need to conduct research “to that level” (meaning to a level of unbiased work or that employs appropriate sampling methods). I’ve seen some professionals present findings of non-scientific research in news releases or reports to clients as if the findings were representative of a population, because this grabs attention and headlines.

    In this high-tech environment it is frequently difficult and, in some cases, impossible to obtain “random, representative” samples of many populations. However, it is still useful to conduct research using other sampling techniques to obtain useful, actionable insights. When research is conducted this way and results are presented as non-scientific yet insightful, then everyone wins.

    There is certainly a place for non-scientific research—it is not necessarily “bad” research. Biased research is “bad.”

    It is important to conduct unbiased surveys and to present the methodology honestly. It is not ethical to present non-scientific results as representative of a population. By educating clients, PR practitioners, and the media about the differences among non-scientific work (which can still provide useful insights), “bad” research, and scientific research, we can all win.

    It’s not necessary to pretend that every quantitative survey is conducted to provide results that are representative of the population.

    Honesty and transparency should rule.

  2. These are all good comments and should be considered when evaluating the quality of research. To me, it always comes down to two things: reliability and validity.

    The research needs to use a method that is reliable (will consistently get the same results) and valid (measures what it purports to measure). This is why it is so important that the researcher be transparent about the methodology and instruments. As Michaelson noted, if you can’t see the survey instrument, then you don’t know if the questions are biased or leading, if the sample is representative (which includes reporting the response rate), or if the statistical analysis is appropriate. There are research providers who give you great results, but they ask you to trust their “black box” where the data goes in and all you see are the results. If the research provider can’t be transparent with the process, I would be very careful about putting any value on the results.
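
    As a small, hypothetical illustration of the response-rate point, the arithmetic is simple enough to ask for every time:

    ```python
    # A minimal, hypothetical illustration: the response rate a transparent survey
    # report should disclose. All counts are invented.
    invited = 2_000        # questionnaires sent out
    completed = 340        # usable completed responses

    response_rate = completed / invited
    print(f"Response rate: {response_rate:.1%}")   # 17.0%
    # A low rate does not automatically invalidate a survey, but it has to be
    # reported so readers can judge the risk of non-response bias.
    ```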

  3. Thank you Frank (and Fellows, and Forrest) for this discussion. The PR industry would greatly benefit if its members became aware of what good and bad research is. Far too many people blithely pass along research results without bothering to consider their quality or limitations.

    Imagine if the PR profession could, somehow, become very, very tough on bad research. If it could become known as a profession that was very, very good at doing research properly. Think of the prestige that would come with that. Think of the advances that could be made if people didn’t waste time doing, reading, or planning with shoddy research.

    Three of my favorite red flags for bad research:

    — Bias:
    I second David Michaelson here. How many times have we seen the Widget Industry Council release their groundbreaking new research on the health of the widget industry? Whenever anyone conducting research has any sort of interest in the outcome, the work is biased right from the start and the results are highly suspect. And this is very difficult to avoid because conflict of interest is so common: There is always someone whose salary or promotion or degree depends on their research results. And it is often not at all easy or practical or politic to check up on their work.

    — Substitution of Polls for Observation of Behavior:
    I see so many studies that predict that X is going to happen next year, based on a poll that shows that some executives are of the opinion that X is going to happen next year. Such a study could have looked at investment or hiring or some actual behavior that would indicate that X is going to happen. But no, they just asked some people “What do you think?” and claim it’s research.

    — Any Results That Get Big Headlines: “Research Proves Bigfoot in Central Park!”
    It’s my job — and that of anyone else who pushes content out to readers — to get people to notice our articles. If some exciting research results come across my desk and I can make them into must-read news, then I am sorely tempted to go for the flashy headline and not look too closely at the methodology. My point is that lots of us make some hay out of shoddy research. Bad research is perpetuated by a sort of collusion between its producers and those of us who write about it or use it and don’t complain or speak up. I think it would be a great idea if the IPR did a little of that speaking up.

    Bill Paarlberg, Editor, The Measurement Standard

  4. I agree with all the comments Frank included above.

    It seems to me you might view the question regarding identifying bad research at least two ways: one would be to identify bad research that someone has already done and the other would be to identify bad research someone proposes doing. I will focus more on proposed research and the questions a purchaser of research services might ask to ensure their vendor knows what he or she is doing.

    I agree that transparency is essential. If you don’t know how the research will be done, you cannot trust it. With that in place I would recommend examining each of the steps in the research process:

    o Is the research problem clearly formulated? Is what you are trying to learn through the research clearly thought through and articulated?

    o Is the research design appropriate to address the problem? Is the research descriptive? Causal? Will the design generate data in a format that you can use to answer your question or help you make a better business decision?

    o Is the data collection method appropriate for the design? Should you use secondary or primary data? What’s the best way to administer the survey (mail, telephone, internet panel, etc.)? Why is that the best way?

    o Who is the subject of your survey? Should you survey the population (census) or a sample of the population? If a sample, what is an appropriate sample for your purposes and how can you get it? Do you need a probability sample or not? Why? How big should the sample be?

    o Is the questionnaire biased or leading or is it balanced?

    o What analytical techniques will be used? Why? Are they the most appropriate? Why?

    These are the concepts I was taught in my first market research class, and it seems to me they are as valid now as they were then.

    Forrest W. Anderson, MBA
    Founding Member, IPR Commission on PR Measurement and Evaluation
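
One question in Anderson’s checklist, “How big should the sample be?”, has a standard back-of-the-envelope answer for a simple random sample estimating a proportion. The sketch below applies the usual n = z^2 * p(1 - p) / e^2 formula with assumed inputs; it is an illustration, not a universal rule:

```python
# A minimal sketch of the standard sample-size formula for estimating a proportion
# from a simple random sample: n = z^2 * p * (1 - p) / e^2. Inputs are assumptions.
import math

def required_sample_size(margin_of_error: float,
                         confidence_z: float = 1.96,
                         expected_proportion: float = 0.5) -> int:
    """Sample size needed for the given margin of error at roughly 95% confidence."""
    n = (confidence_z ** 2) * expected_proportion * (1 - expected_proportion) / margin_of_error ** 2
    return math.ceil(n)

print(required_sample_size(0.05))   # +/-5 points -> 385 respondents
print(required_sample_size(0.03))   # +/-3 points -> 1068 respondents
```

Note the often counterintuitive result: for large populations, the required sample size depends on the precision you want and the variability you expect, not on the size of the population itself.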
