Bad Research example: Trump’s Media Survey

This is an example of how to analyze research as related to my blog post Recognizing Bad Social Research or News Articles.

I swear I’ll analyze good examples too! But the bad ones are just such fun. 🙂

The other day I read a survey on Trump’s website that hurt my soul. It is leading, biased, confusing, and faulty.

Let’s start with the good (it won’t take very long).

 

The Good

Some of the questions have these choices:

  • Yes
  • No
  • No opinion

Having “no opinion” as an option is good (but many of the questions needed a more rigorous scale; the choices are so black and white that they don’t allow for a spectrum of emotions. Darn it, I said I would do the good first!)

  • Which television source do you primarily get your news from?

A simple data-gathering question. It needed more options, though.

  • “Do you use a source not listed above?”

Hey! An “other” box, so that people can indicate other news sources. (Pity it doesn’t let you answer the same questions about the news sources you added. It should also be written as “If any, what other news sources do you get your news from?” I’m a stickler for thoroughness.)

 

The Ugly

We’re skipping the bad on this one, folks.

The biggest problem with this survey is black and white absolutism. It features a strong “Us vs. Them” and “If you’re not with us, you’re against us” mentality that is quite alarming. People’s opinions exist in a broad spectrum, and research methods should account for that. They often allow the “neutral” (no opinion) but rarely allow the “other.” This breaks my research rule #2:

2. Did they ask non-biased and unleading questions which allow for neutral and other opinions? Did they observe the subjects in a way that minimally affected their behavior? (Research Methods)

The questions also lump the media (a broad range of people, organizations, and opinions) into one single entity, which makes answering the majority of the questions quite awkward.

 

Recruiting

Another problem is recruiting. They are not following my research rule #1:

1. Did they recruit enough people from all appropriate groups to represent the population? (Recruiting)

If the black and white absolutism doesn’t turn people off, then the length will. How is this getting distributed? A link on a site? A select email list? They aren’t offering any money to finish it, so only those with an extreme vested interest (for or against) will take it. The sample of people who eventually finish this survey is not representative of the American population.

 

Questions

Now for a little nitty-gritty. I tried to find the worst offenders; I could write a page on each question, but I shall refrain.

  • Do you trust [Insert news agency] to report fairly on Trump’s presidency?

This should be a matrix table: “How much do you trust the following news sources?” (Mentioning Trump biases it; if they trust the news source, they trust it to report accurately on Trump’s presidency.)

Fairness is often used instead of accuracy. They are not concerned about the news agency’s accuracy, just whether it talks nice.

  • On which issues does the mainstream media do the worst job of representing Republicans? (Select as many that apply.)

Oh, let me count the ways. It should be “How well does the mainstream media represent Republicans in the following areas:” Even that I don’t like, because it lumps Republicans all together as if they are carbon copies of each other. It also doesn’t allow for people to think the media is doing well. Their data will only tell them how many people were negative, not how many were neutral, positive, or had no opinion!

  • Do you trust the mainstream media to tell the truth about the Republican Party’s positions and actions?

BTW, who the heck is this mainstream media? Is that Facebook? Twitter? Instagram? The TV? What? They start asking these questions without making sure that the respondent understands exactly who they mean by “mainstream media.”

My brain hurts so much right now.

  • Do you believe that the mainstream media does not do their due diligence fact-checking before publishing stories on the Trump administration?

Ah, the double-negative triple pirouette. With double negatives, does selecting “No” mean “No, I do not believe that the mainstream media does not do…”? Oh sheesh. The poor respondent is getting a headache!

  • Do you believe that the media unfairly reported on President Trump’s executive order temporarily restricting people entering our country from nations compromised by radical Islamic terrorism?

That question is very biased. It is attempting to make the respondent feel like if they agree with the media in any way, they want to endanger our nation from the radical Islamic terrorists. It should have been written this way:

  • Do you believe that the media inaccurately reported on President Trump’s executive order temporarily restricting people entering our country from (names of countries)?

It should also link to the text of the order.

  • Were you aware that a poll was released revealing that a majority of Americans actually supported President Trump’s temporary restriction executive order?

Wow. This question is serving as a report of prior research. Can I get a link to that research? And that poll revealed that a majority of Americans POLLED supported etc. I’d want to see their recruiting method. This mini-report breaks my rule #3 of research:

3. Did they report on the data itself or extrapolate it to say more than it said and ramble on with their opinions? (Reporting)

  • Do you believe that political correctness has created biased news coverage on both illegal immigration and radical Islamic terrorism?

They throw around words with very subjective meanings (like “political correctness”). They should be asking separate questions about illegal immigration and radical Islamic terrorism. Also, by putting them in the same question, they imply a connection (some say there is a connection, some say there isn’t; that’s some other research I personally need to look into).

  • Do you believe that contrary to what the media says, raising taxes does not create jobs?

Ok, they’re asking about belief now (good). But it should be written like this:

Do you believe that raising taxes does not create jobs?

Also, huh? Darn. It’s still biased. We’ve got the double-negatives again too.

  • Do you believe that the media wrongly attributes gun violence to Second Amendment rights?

Because if you don’t, you’re wrong! That’s what this is implying. I’m done rewriting this survey, because if they were my client I’d go back to them and ask what their high-level research questions are. What do they really want to know? How much does the public believe the media?

This survey, if it were meant for understanding rather than generating ammunition for political debates, should be written to capture the public’s opinions about the news along with their demographics, and it should be recruited in a way that gets the best representation.

BTW, what the heck does “Our Movement” mean? Republicans? Trump? Conservatives?

Redundancy is a huge problem with this survey. The next series of questions essentially asks “Do you believe the media does evil things to try to stop us holy warriors of truth and justice?” in one way or another.

  • Do you believe that the mainstream media has been too eager to jump to conclusions about rumored stories?

This one is actually decent, though it should also be a “how much” question.

  • Do you believe that if Republicans were obstructing Obama like Democrats are doing to President Trump, the mainstream media would attack Republicans?

More us vs. them.

  • Do you believe that the media uses slurs rather than facts to attack conservative stances on issues like border control, religious liberties, and ObamaCare?

I also don’t appreciate them tying religious liberties to things like border control. Another example of absolutism. Let every person be an individual. Not a cookie cutter. This survey was designed for extremist Republicans to back up extremist Republicans.

 

End Questions

A few questions at the end shift toward how respondents feel about Trump’s method of communicating with the people, but do so in a way that makes him seem like a holy warrior.

  • Do you agree with President Trump’s media strategy to cut through the media’s noise and deliver our message straight to the people?

A good thing to try to find out, but the question should be written more objectively. “The almighty Thor cuts through the media noise to rescue the people!” Oh dear. Flair and drama do not belong in surveys.

  • Do you agree with the President’s decision to break with tradition by giving lesser known reporters and bloggers the chance to ask the White House Press Secretary questions?

How were these selected? Links please. Orient your respondent before you ask them questions.

At the end, it asks for Name, Email, and Zip code. They really should be asking questions about age, gender, ethnicity (location is still good), and socioeconomic status. And if at the end not all populations are represented, conduct more research.

 

Reporting

Can’t really write on reporting yet, but you can tell from the way this survey was worded that the report is going to be quite biased in a lot of ways.

 

Conclusion

I’m very concerned that political research is biased all around. They should be hiring outside agencies to write their surveys, analyze the data, and report back to them. The objective should be to get an accurate view of the people, not have a blunt weapon to argue with (such and such biased research said such and such!).

Hope you found this analysis enlightening. Remember, bad data is worse than no data at all!

BTW, if you have concerns about this being a Republican-focused survey, please send me a Democrat-focused survey. I will gladly analyze that as well. Both parties are guilty of biased research methods.

Booyah.

-Thomas Fawkes


Recognizing Bad Social Research or News Articles

If you don’t have the time to read the whole article, ask yourself these questions every time you read some research or a news article:

  1. Did they recruit enough people from all appropriate groups to represent the population? (Recruiting)

  2. Did they ask non-biased and unleading questions which allow for neutral and other opinions? Did they observe the subjects in a way that minimally affected their behavior? (Research Methods)

  3. Did they report on the data itself or extrapolate it to say more than it said and ramble on with their opinions? (Reporting)

 

Now, for the meat:

We often use research (surveys, interviews, news articles) to evaluate our beliefs (or to prove a point). Much research out there in the world is faulty, biased, or lacking in very vital ways. Not a great way to educate yourself!

Since I’m a researcher by trade, I thought I’d put together a basic guide to analyzing the value of research. This is mainly with practical or social research, rather than scientific research (some principles still apply).

The first thing to remember is that research only says what it says and nothing more. When a report says “40% of Americans hate puppies” it really means “40% of Americans surveyed hate puppies.” Stark contrast there. Think about it.
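To put a number on that contrast: even under ideal random sampling, a reported percentage only generalizes to the population within a margin of error. Here’s a minimal sketch using the standard normal-approximation formula; the 40% figure and the sample size of 1,000 are just the made-up puppy example, not real data.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p
    measured on a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# The made-up puppy statistic: 40% of 1,000 people surveyed
p_hat, n = 0.40, 1000
moe = margin_of_error(p_hat, n)
print(f"Sample says {p_hat:.0%}, population figure is roughly {p_hat:.0%} +/- {moe:.1%}")
# -> roughly 40% +/- 3.0%, and only *if* the sample was truly random
```

And that plus-or-minus 3% only holds if the sample was genuinely random and representative; a biased recruit makes the margin-of-error math meaningless.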

The next important things to remember are:

  • Just because you agree with the research, doesn’t mean they had good research methods.
  • Just because you disagree with the research, doesn’t mean they had poor research methods.
  • Just because they had poor research methods, doesn’t mean the result is wrong (it does mean the research is not a good reason to inform your beliefs).

The rest of this article is divided into 3 sections which reflect the three major phases of social research:

  1. Sources
  2. Research Methods
  3. Reporting

 

1. Sources - Getting the people

How many

When researching a certain population, one needs to get a good representation of all target audience groups (age, socioeconomic status, political affiliation, region; census data and other pre-research help you know who these groups are).

The people that you choose to research are your sample. There’s a lot of math out there as to how many you need in a sample, but the important thing to note is that you need a good number of randomly selected people from each audience group.
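For the curious, the most common back-of-the-envelope version of that math is Cochran’s sample-size formula. Here’s a rough sketch; the 95% confidence level and ±5% margin of error below are illustrative defaults, not a recommendation for any particular study.

```python
import math

def sample_size(margin=0.05, z=1.96, p=0.5, population=None):
    """Cochran's formula for the sample size needed to estimate a proportion.
    p=0.5 is the most conservative guess (it maximizes the required sample);
    pass a finite population size to apply the finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population is not None:
        n0 = n0 / (1 + (n0 - 1) / population)
    return math.ceil(n0)

print(sample_size())                   # ~385 responses: 95% confidence, +/-5% margin
print(sample_size(population=10_000))  # ~370 once the finite-population correction kicks in
```

Keep in mind that a textbook-correct sample size only controls random error; it says nothing about whether the right people were recruited, and you still need enough people within each audience group, not just overall.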

How recruited or chosen

Every recruitment method has its drawbacks.

There are people who won’t take a phone survey, an email survey, or a pop-up survey. Ever. If these people are excluded, you’re skewing your results. People who are likely to ignore a survey are also likely to have certain opinions that need to be in the research.

Money can get more of those people to participate. So, beware of research that is conducted for free; it only gets the opinions of the really nice or the really opinionated.

Also, beware of research that uses only one recruiting method. If they only do phone invites, then they only get people who have phone numbers the researchers have access to (or people who have phones at all, if the phone-less are potentially one of their target groups!).

 

2. Research Methods - Gathering the data

There are two main types of data you can gather: behavioral data and opinion data. Beware of research that tries to learn people’s behavior through questioning rather than observation. People don’t usually have an accurate picture of what they actually do or what they would do (we have distorted memories and high opinions of our hypothetical selves).

People will often say one thing and do another.

Behavioral research is best done through observation. This can be done in person, through video footage, or even with website analytics data (beware: the degree to which people know they’re being watched will change their behavior!).

Opinion-based research is best done through questioning. And OH BOY do people go wrong here! I’ve seen surveys with the equivalent of “Here is this really great thing, how great do you think it is?” with the following three options:

  • Pretty great!
  • Super great!
  • Extremely great!

I’m exaggerating. Slightly. But sheesh! My job day in and day out is to take the biased, leading, confusing, and faulty questions from my clients and translate them into actually usable questions.

If the research doesn’t let you in on the wording of their questions when presenting results, beware.

Questions might be leading, or the answer choices might not represent all possibilities. It’s important that surveys have a neutral option to allow the people who really don’t give a darn to not give a darn!

I was talking to Howard Tayler on Twitter about survey question writing when he offered this as an example:

[Image: Howard Tayler’s example of a badly worded survey question]

Do you see the problems with it? It crams multiple responses into one choice, and a terribly worded one at that. He offered this fix as a funny but equally biased alternative:

[Image: Howard Tayler’s tongue-in-cheek rewrite of the question]

The sad thing is, I could expect an extremist Democrat to actually put this wording in their version of this survey (Howard Tayler is certainly not an extremist, but a writer of science fiction and comedy).

One-on-one interviews (often used by journalists and news sources) answer questions deeply but not broadly; they get into the nitty-gritty of a certain thought, but not into how many people think such a thing.

 

3. Reporting - Interpreting and publishing the data

This is likely the most important, since this is what most of us see when investigating a certain body of research. The important thing to remember (and beware if the reporting doesn’t say this) is that:

“Research only says what it says and nothing more.”

Feel free to repeat that a few times. 😛

How much is taken directly from the data, and how much is opinion?

When a report is filled with statements like “X number of participants indicated Y,” you know that’s at least decent data. But beware when only a few lines of the report are associated with the findings of the data and the rest is paragraphs and paragraphs of opinion, speculation, and broad exaggeration.

 

EXAMPLES

The biggest example in recent memory is the polling on who people were voting for last election season. The people who were likely to fill out or answer the polls (mainly Clinton voters) were not representative of those who showed up to vote (a lot more Trump voters than the polls suggested).

Here are three examples of types of research I’ve encountered recently:

The news article

Mwahaha! The worst of the worst! A journalist will often handpick a few select individuals, often to prove a point or because they anticipate those individuals will agree with what they want to say (not representative of the population). They will then ask biased and leading questions, choose what they want to report (in or out of context), and fill their article with paragraphs of speculation and opinion.

Not a good way to get data! Not all journalists are like this, and most aren’t trying to be faulty, but when accuracy is traded for speed, you have a holey target with nothing in the bullseye.

The phone survey

Ah, the phone survey. Have any of you gotten a call for a political survey? How many of you took it? How did you feel on the subject if you took it? Many don’t like taking these surveys, and many surveys don’t offer any compensation, greatly decreasing the likelihood of good representation.

The intercept survey

The pop-up appears, and through analytics we know how many see it and how many click the X. Those who are likely to click on it are (like I mentioned before) either the really nice or the really angry. And having people at both extremes does NOT average out to an accurate representation.
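A quick simulation sketches why the extremes don’t average out. Everything here is invented for illustration: the satisfaction scores, the population weights, and the click-through probabilities are assumptions, not data from any real intercept survey.

```python
import random
from collections import Counter

random.seed(1)

# Hypothetical population: visitor satisfaction on a 1-5 scale,
# with most people somewhere in the middle (weights are invented).
population = random.choices([1, 2, 3, 4, 5], weights=[5, 15, 40, 30, 10], k=100_000)

# Assumed click-through behavior: the very angry (1) and the very happy (5)
# are far more likely to answer the pop-up than the indifferent middle.
click_prob = {1: 0.30, 2: 0.08, 3: 0.02, 4: 0.05, 5: 0.20}
respondents = [score for score in population if random.random() < click_prob[score]]

def shares(scores):
    counts = Counter(scores)
    return {s: round(counts[s] / len(scores), 2) for s in sorted(counts)}

print("Population opinion shares:", shares(population))
print("Respondent opinion shares:", shares(respondents))
print(f"Response rate: {len(respondents) / len(population):.1%}")
```

In this toy example the neutral middle nearly vanishes from the responses while both extremes balloon, so the survey “hears” a far angrier and far happier audience than actually exists.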

 

No data is perfect

Every piece of research should have a list of cautions, exceptions, and caveats. No research is perfect. And one survey or set of interviews is never sufficient to fully understand a problem. Beware those who use one chart to describe the state of a nation. The data is always more complicated than that, and people are more complicated than that.

If you learn one thing from this, it is to take every bit of research with a grain of salt, and sometimes a gallon. Because bad data can be worse than no data at all.

Bad data can mislead you into a false sense of confidence in something that might not be true.

Booyah!

 

-Thomas Fawkes

Facebook and Google are putting you in a bubble

Someone on the radio the other day (NPR, perhaps?) pointed out that some groups think that the whole country is going to vote for their favorite presidential candidate while polls show that only about 20% of people are considering them.

Why? Because:

Facebook and Google, in their efforts to show content which they think we would like, show us more of the type of content we interact with. Therefore, they are creating pocket bubble cultures where beliefs are never questioned and are constantly reinforced within the group.

This makes people think “The whole world agrees with me! Everyone on Facebook is saying the same thing!”

Not only does this affect what people think most of the world believes, but it makes it more and more difficult to encounter and consider differing viewpoints. This is strongly related to confirmation bias. “Confirmation bias, or Positive Bias, is the tendency to look for evidence that confirms a hypothesis, rather than disconfirming evidence.”

Check out this video real quick and see if you can solve the riddle:

 

Google and Facebook take advantage of this, feeding us articles that we read more, like more, share more. They tailor what we see toward what we’ll interact with. And we’re more likely to seek out and interact with things which agree with us. That’s no way to progress, to learn more.

To conclude, here’s a quote from the Veritasium video above:

“The scientific method is to set out to disprove your theory, and when you can’t disprove it, that’s when you know you’re getting to something really true about our reality.”