Recognizing Bad Social Research or News Articles

If you don’t have the time to read the whole article, ask yourself these questions every time you read some research or a news article:

  1. Recruiting:
    • Did they recruit enough people from all appropriate groups to represent the population?

  2. Research Methods:
    • Did they ask unbiased, non-leading questions that allow for neutral and “other” responses?
    • Did they observe the subjects in a way that minimally affected their behavior?

  3. Reporting:
    • Did they report on the data itself, or did they extrapolate it to say more than it said and pad it out with their opinions?

Now, for the meat:

We often use research (surveys, interviews, news articles) to evaluate our beliefs (or to prove a point). Much research out there in the world is faulty, biased, or lacking in vital ways. Not a great way to educate yourself!

Since I’m a researcher by trade, I thought I’d put together a basic guide to analyzing the value of research. This mainly concerns practical or social research rather than scientific research (though some of the principles still apply).

The first thing to remember is that research only says what it says and nothing more. When a report says “40% of Americans hate puppies” it really means “40% of Americans surveyed hate puppies.” Stark contrast there. Think about it.

The next important things to remember are:

  • Just because you agree with the research doesn’t mean they had good research methods.
  • Just because you disagree with the research doesn’t mean they had poor research methods.
  • Just because they had poor research methods doesn’t mean the result is wrong (it does mean the research is not a good reason to inform your beliefs).

The rest of this article is divided into three sections, which reflect the three major phases of social research:

  1. Sources
  2. Research Methods
  3. Reporting

1. Sources: Getting the people

How many

When researching a certain population, one needs to get a good representation of all target audience groups (age, socioeconomic status, political affiliation, region, and so on). Census data and other pre-research help you identify these groups.

The people that you choose to research are your sample. There’s a lot of math out there as to how many you need in a sample, but the important thing to note is that you need a good number of randomly selected people from each audience group.
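If you’re curious about that math, here’s a minimal sketch of the standard sample-size formula for estimating a proportion, n = z^2 * p * (1 - p) / e^2. The 95% confidence level and the margins of error below are illustrative assumptions on my part, not universal thresholds:

```python
import math

def sample_size(margin_of_error: float, confidence_z: float = 1.96,
                expected_proportion: float = 0.5) -> int:
    """Minimum simple-random-sample size to estimate a proportion.

    Defaults to p = 0.5, the most conservative (largest-n) assumption,
    and z = 1.96 for 95% confidence.
    """
    n = (confidence_z ** 2
         * expected_proportion * (1 - expected_proportion)
         / margin_of_error ** 2)
    return math.ceil(n)

print(sample_size(0.05))  # 385 respondents for +/-5% at 95% confidence
print(sample_size(0.03))  # 1068 respondents for +/-3%
```

And remember: that formula assumes truly random selection. No sample size rescues a biased recruiting method.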

How recruited or chosen

Every recruitment method has its drawbacks.

There are people who won’t take a phone survey, an email survey, or a pop-up survey. Ever. If these people are excluded, you’re skewing your results. People who are likely to ignore a survey are also likely to have certain opinions that need to be in the research.

Money can get more of those people to participate. So beware of research conducted without compensation: it only gets the opinions of the really nice or the really opinionated.

Also, beware of research that uses only one recruiting method. If they only do phone invites, then they only get people whose phone numbers the researchers have access to (or people who have phones at all, if the phone-less are potentially one of their target groups!).
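To put a hypothetical number on that, here’s a back-of-the-envelope sketch (every figure below is invented) of how phone-only recruiting skews a result when the unreachable group thinks differently:

```python
# Hypothetical population split; all numbers invented for illustration.
reachable_share = 0.80      # fraction of the population reachable by phone
support_reachable = 0.55    # support for some measure among the reachable
support_unreachable = 0.30  # support among the phone-less

# What a perfect census would find vs. what a phone-only sample can see.
true_support = (reachable_share * support_reachable
                + (1 - reachable_share) * support_unreachable)
phone_only_estimate = support_reachable

print(f"true support:    {true_support:.0%}")         # 50%
print(f"phone-only poll: {phone_only_estimate:.0%}")  # 55%
```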

2. Research Methods: Gathering the data

There are two main types of data you can gather: behavioral data and opinion data. Beware of research that tries to learn people’s behavior through questioning rather than observation. People don’t usually have an accurate picture of what they actually do or what they would do (we have distorted memories and high opinions of our hypothetical selves).

People will often say one thing and do another.

Behavioral research is best done through observation. This can be done in person, through video footage, or even through website analytics data (but beware: to the degree that subjects know they’re being watched, their behavior will change!).

Opinion-based research is best done through questioning. And OH BOY do people go wrong here! I’ve seen surveys with the equivalent of “Here is this really great thing, how great do you think it is?” with the following three options:

  • Pretty great!
  • Super great!
  • Extremely great!

I’m exaggerating. Slightly. But sheesh! My job day in and day out is to take the biased, leading, confusing, and faulty questions from my clients and translate them into actually usable questions.

If the research doesn’t let you in on the wording of their questions when presenting results, beware.

Questions might be leading, or the answer choices might not cover all the possibilities. It’s important that surveys include a neutral option, so the people who really don’t give a darn can say they don’t give a darn!

I was talking to Howard Tayler on Twitter about survey question writing when he offered this as an example:

[screenshot of Howard Tayler’s example survey question]

Do you see the problems in it? It crams multiple responses into one choice. A terribly worded one, at that. He offered this fix as a funny but equally biased alternative:

[screenshot of his tongue-in-cheek rewrite]

The sad thing is, I could expect an extremist Democrat to actually put this wording in their version of the survey (Howard Tayler is certainly not an extremist, but a writer of science fiction and comedy).

One-on-one interviews (often used by journalists and news sources) answer questions deeply but not broadly: they get into the nitty-gritty of a particular line of thought, but not into how many people actually think that way.

3. Reporting: Interpreting and publishing the data

This is likely the most important, since this is what most of us see when investigating a certain body of research. The important thing to remember (and beware if the reporting doesn’t say this) is that:

“Research only says what it says and nothing more.”

Feel free to repeat that a few times. 😛

How much is taken directly from the data, and how much is opinion?

When a report is filled with statements like “X participants indicated Y,” you know it’s at least grounded in data. But beware when only a few lines of the report are tied to the actual findings and the rest is paragraphs and paragraphs of opinion, speculation, and broad exaggeration.
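One way to gut-check an “X of N participants indicated Y” claim is to compute the proportion and a rough margin of error yourself. Here’s a minimal sketch using the normal approximation (the counts below are invented for illustration):

```python
import math

def proportion_with_ci(successes: int, n: int, z: float = 1.96):
    """Point estimate and rough 95% confidence interval for a proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)  # standard error, normal approximation
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

p, low, high = proportion_with_ci(successes=160, n=400)
print(f"{p:.0%} of respondents (95% CI roughly {low:.0%} to {high:.0%})")
# -> 40% of respondents (95% CI roughly 35% to 45%)
```

If a report trumpets a two-point difference from a 400-person sample, that interval tells you the difference may be nothing but noise.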

EXAMPLES

The biggest example in recent memory is the polling on who people were voting for last election season. The people who were likely to answer the polls (mainly Clinton voters) were not representative of those who showed up to vote (a lot more Trump voters than the polls predicted).
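Here’s a hypothetical arithmetic sketch (numbers invented, not actual 2016 figures) of how differential response rates alone can flip a poll, even with perfectly worded questions:

```python
# Invented numbers: candidate B actually leads, but A's supporters
# are more willing to answer pollsters.
support_a, support_b = 0.48, 0.52   # true vote shares
response_rate_a = 0.12              # A's supporters answer polls more often
response_rate_b = 0.08

polled_a = support_a * response_rate_a
polled_b = support_b * response_rate_b
poll_share_a = polled_a / (polled_a + polled_b)

print(f"true share for A:   {support_a:.0%}")      # 48%
print(f"polled share for A: {poll_share_a:.0%}")   # 58%
```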

Here are three examples of types of research I’ve encountered recently:

The news article

Mwahaha! The worst of the worst! A journalist will often handpick a few select individuals, ones they anticipate will agree with the point they want to make (not representative of the population), ask them biased and leading questions, choose what to report (in or out of context), and fill the article with paragraphs of speculation and opinion.

Not a good way to get data! Not all journalists are like this, and most aren’t trying to be faulty, but when accuracy is traded for speed, you have a holey target with nothing in the bullseye.

The phone survey

Ah, the phone survey. Have any of you gotten a call for a political survey? How many of you took it? How did you feel about the subject if you did? Many people don’t like taking these surveys, and many pollsters don’t offer any compensation, which greatly decreases the likelihood of good representation.

The intercept survey

The pop-up appears, and through analytics we know how many people see it and how many click the X. Those who are likely to take it are (as I mentioned before) either the really nice or the really angry. And having people at both extremes does NOT average out to an accurate representation.
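Here’s a toy simulation of why that is (the response model is entirely my own assumption): if the odds of taking the pop-up rise with how strongly someone feels, the responder pool over-represents the extremes even though the underlying population is mostly moderate:

```python
import random

random.seed(0)

# Hypothetical population: opinion scores on some issue, mostly moderate.
population = [random.gauss(0, 1) for _ in range(100_000)]

def takes_survey(opinion: float) -> bool:
    """Assumed response model: strong feelings make clicking far more likely."""
    p_respond = 0.02 + 0.30 * min(abs(opinion) / 2, 1.0)
    return random.random() < p_respond

responders = [x for x in population if takes_survey(x)]

def strong_share(opinions) -> float:
    """Fraction holding a 'strong' opinion (|score| > 1.5)."""
    return sum(abs(x) > 1.5 for x in opinions) / len(opinions)

print(f"strong opinions in the population: {strong_share(population):.0%}")  # ~13%
print(f"strong opinions among responders:  {strong_share(responders):.0%}")  # ~26%
```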

No data is perfect

Every piece of research should have a list of cautions, exceptions, and caveats. No research is perfect. And one survey or set of interviews is never sufficient to fully understand a problem. Beware those who use one chart to describe the state of a nation. The data is always more complicated than that, and people are more complicated than that.

If you learn one thing from this, it is to take every bit of research with a grain of salt, and sometimes a gallon. Because bad data can be worse than no data at all.

Bad data can mislead you into a false sense of confidence in something that might not be true.

Booyah!

-Thomas Fawkes
