Are Online Surveys Making Us Stupid?

by cv harquail on September 14, 2010

All over the web I see stupid “surveys” collecting what is almost always meaningless data. The “results” of these “surveys” are then used to influence readers’ perceptions and to steer people towards (or away from) companies and services.

What I find frustrating, almost to the point of infuriating, is how most of these surveys violate the basic rules of scientific social measurement.

Online Surveys Are Making Us Stupid

[Embedded poll: “How are online survey tools making us stupid?”]

Why Online Surveys are (usually) Stupid

Most online surveys violate basic rules of social scientific measurement. Crimes against science include poorly defined terms, improperly worded questions (items), and careless question ordering, all of which “lead the witness” and bias respondents as they move through the ‘survey’.

But the biggest crime against science, the ‘fatal error’, is improper sampling.

Improper sampling

A survey that hopes to have reputable, reliable, scientific results must be distributed to a random sample of the people you want to understand. The sample has to be randomized so that you can infer from the results of your survey what the patterns might be in the population at large.

Random sampling of the population is the number one requirement for a social scientific survey. Even if every other criterion has been met, without random sampling you can’t be said to have a scientific study.
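To see what randomness actually buys you, here’s a minimal sketch in Python (mine, not from any survey vendor; the population and its 30% satisfaction rate are invented for illustration). With a simple random sample, you can attach a defensible confidence interval to your estimate:

```python
# A toy illustration of why random sampling licenses inference:
# draw a simple random sample and compute a 95% confidence
# interval for a proportion. All numbers are invented.
import math
import random

random.seed(42)

# Hypothetical population: 100,000 customers, 30% satisfied (1) vs. not (0).
population = [1] * 30_000 + [0] * 70_000

# Simple random sample: every member has an equal chance of selection.
sample = random.sample(population, 400)

p_hat = sum(sample) / len(sample)                  # sample proportion
se = math.sqrt(p_hat * (1 - p_hat) / len(sample))  # standard error
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se   # 95% confidence interval

print(f"estimate: {p_hat:.3f}, 95% CI: ({low:.3f}, {high:.3f})")
# Because selection was random, this interval has a known chance of
# covering the true value (0.30). A self-selected sample carries no
# such guarantee, no matter how many people respond.
```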

But only 1 out of 7,281,989 online surveys or polls is administered to a randomized sample. That is, only about 0.00001% of online surveys are even remotely “scientific”.

Most online surveys ignore the importance of randomized sampling.

What happens, instead, is that the people who chance to come to your website (already a specific and biased portion of your population) see the survey and get the invitation to take it.

When presented with the survey online, only certain people are motivated to take it. And who are these people? The folks at either end of the spectrum: those who are very positive or very negative about the phenomenon.

People who complain are the most motivated to reply, people with praise are somewhat less motivated, and people who feel ambivalent or plain old satisfied are rarely motivated to reply at all.

So, the ‘results’ you have are from flamers and fans, not a representative sample. (And you wonder why 49% of people love your product, and 49% of people hate it?)

Sure, you get some ‘data’, but it is likely distorted and unrepresentative of what the bulk of your potential customers or people of interest actually believe.
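To see the flamers-and-fans effect in action, here’s a toy simulation (every opinion weight and response rate below is invented for illustration): a population whose opinions cluster in the middle, surveyed by a poll that only the extremes bother to answer.

```python
# Toy simulation of self-selection bias: opinions run 1 (hate it)
# to 5 (love it), but mostly the angriest and happiest respond.
import random
from collections import Counter

random.seed(1)

# Hypothetical population: most people sit in the middle.
population = random.choices([1, 2, 3, 4, 5],
                            weights=[5, 15, 45, 25, 10], k=100_000)

# Invented response rates: flamers (1) and fans (5) reply most often,
# the ambivalent middle almost never does.
respond_prob = {1: 0.30, 2: 0.05, 3: 0.01, 4: 0.05, 5: 0.25}

respondents = [x for x in population if random.random() < respond_prob[x]]

def share(values):
    """Percentage of responses at each rating, rounded."""
    counts = Counter(values)
    return {k: round(100 * counts[k] / len(values)) for k in sorted(counts)}

print("population: ", share(population))   # centered on 3
print("respondents:", share(respondents))  # piled up at 1 and 5
```

The population is mostly middle-of-the-road, but the ‘results’ come back polarized: the 1s and 5s dominate, which is exactly the flamers-and-fans pattern.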

Worse still, many people mistakenly believe that they can get around the problem of improper sampling.

Recently, I challenged a website that advertised itself as a reliable resource for information comparing a dozen different service providers. Their supposedly objective consumer survey is completed by whoever comes to their site, clicks through to PollDaddy, and answers 10 questions.

I questioned whether they could accurately describe their survey as trustworthy and reliable given their lack of a randomized sampling procedure. Here’s the response I got from their ‘research director’:

Regarding our surveys, they are completed by customers from across the country and collected by an outside polling agency (PollDaddy) to ensure reliability/validity. In order for any survey to be reliable, you have to collect a large number of surveys. Anything over 100 is considered to be a very large sample size and data from such a large sample size is considered to be robust, reliable and scientific! We never use any data from any survey that does not have over 100 respondents.

If I were this company, I’d ask this research director exactly where he learned inferential statistics and survey design. But I digress…

The idea that response size compensates for biased or improper sampling just shows how little many users know about what it takes to field a scientific survey.
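Here’s the same toy setup, extended to test the research director’s ‘anything over 100 is robust’ claim (again, every number is invented). Watch the biased estimate of customer happiness stay wrong no matter how large n gets, while a 100-person random sample lands near the truth:

```python
# Sketch: a bigger biased sample is just a bigger biased sample.
# Bias comes from WHO responds, so it does not shrink as n grows.
import random

random.seed(2)

population = random.choices([1, 2, 3, 4, 5],
                            weights=[5, 15, 45, 25, 10], k=200_000)
respond_prob = {1: 0.30, 2: 0.05, 3: 0.01, 4: 0.05, 5: 0.25}

def pct_happy(values):
    """Share of people rating 4 or 5, as a percentage."""
    return 100 * sum(v >= 4 for v in values) / len(values)

truth = pct_happy(population)  # roughly 35%

# Self-selected pool: only some people choose to respond.
biased_pool = [x for x in population if random.random() < respond_prob[x]]

for n in (100, 1_000, 10_000):
    print(f"biased n={n:>6}: {pct_happy(biased_pool[:n]):.0f}% happy "
          f"(truth: {truth:.0f}%)")

# Meanwhile, a mere 100-person random sample:
print(f"random n=   100: {pct_happy(random.sample(population, 100)):.0f}% happy")
```

The biased samples keep reporting roughly the same wrong number at every n, because the error is baked into who responded, not into how many did.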

Stupid Surveys => Stupid Data => Bad Decisions

An even bigger problem is that people actually make important business decisions based on this bad ‘data’. The website in this example not only charges for a report of its survey results, but also advises potential customers to choose one supplier over another based on those results.

I’m not sure I want to make a $20,000 purchase based on bad data. Would you?

Even scientific surveys can gather bad data.

Even well-designed, scientifically rigorous online surveys are subject to validity and reliability challenges. Recent research on Internet surveys in psychological science has documented important shortcomings in how people take surveys online: completing the survey while drunk, misreading the instructions, clicking on items at random, and skipping important questions altogether. But at least with scientifically rigorous survey design and delivery, you have a fighting chance of getting good data.

Why We Like Stupid Surveys

Blogging experts and customer engagement advocates will argue that polls and surveys are important parts of a company’s online offerings.

— Polls increase reader engagement because they trigger some readers to reflect and then respond. And,

— Polls can be kind of fun: like an online game, you hit enter and get poll results, which feel satisfying even if they are informationally empty. (Try this here, and see what I mean. Wasn’t that fun?)

For bloggers, surveys can be useful in non-scientific ways: you can find out what some readers/respondents think about a particular topic. Especially when there are open-ended questions in the ‘survey’, a place where people can describe in their own words how they feel about something, you can get very interesting anecdotal insights about your product or service.

These insights might help you think differently about what you’re offering, which can be useful even though you don’t know what percentage of people feel this way or how important this feeling is to your overall market.

So, if (a) most online surveys are unscientific, and (b) we like and (c) use them anyway, then (d) what are we left to do?

1. We can be really clear, to ourselves, about what these “results” actually are.

The results we get from unscientific surveys are feedback, not “data”.

2. We can be explicit in how we report them to others, making note that they are not scientific.

For example, survey information can be reported more accurately, as being about:

  • “our readers who cared strongly enough to reply”,
  • “the readers who took time to take our survey”,
  • “readers who shared their opinion”, and so forth.

3. We can invent and use a more accurate label for these tools.

We need a label we can use every time we talk about the tools and what they gather, but a label that is catchier and shorter than

“a marginally useful collection of meaning-diminished numbers that let us talk about a phenomenon as though we know something about it when we don’t.”

Alternatively,

4. We can keep online survey tools out of the hands of amateurs.

We could allow only those with training in survey design, statistics, and data analysis to use survey tools.

It’s important to note that the misuse of online surveys is not exactly the fault of the online survey companies themselves. A few online survey companies and online marketing research firms go to great lengths to offer information to potential users (like tutorials on statistical sampling) to help improve the validity of the surveys that their users will design. But more often than not, the websites advertising these easy-to-(mis)use tools are absolutely unhelpful if you want to design a survey that’s anything more than neatly formatted.

It’s one thing to poll the readers of your site to ask whether they want more book reviews or discussions of different topics. It’s another thing to claim that you provide ‘consumer ratings and preferences’ or to offer others advice on how they should run their businesses, when those ratings and that advice are derived from stupid, unscientific online surveys.

Unscientific surveys are dumbing us down. They are dumbing us down about the things we’re trying to understand with the surveys, and they are dumbing us down about scientific measurement itself.

[Embedded poll: “How can we resist the stupidity of unscientific online surveys?”]

[Embedded poll: “What do you think?”]

Comments

Joe June 1, 2011 at 3:06 pm

I love this post! You have no idea, I recently started working for an organization doing marketing research and my supervisor had no idea what random sampling was and forced me to just send out an online survey to get as many responses as we could possibly get (an attempt at a census of over 60,000 people lol). Well we got a large number of responses, but they are pretty much useless if we want to make any inferences about the overall population!

cv harquail June 1, 2011 at 4:26 pm

Hi Joe-
I know, it seemed like a rant at the time I wrote it, but I continue to see this common ‘mistake’ about online surveys, etc. It wouldn’t be so bad except that people are actually making decisions based not only on faulty research but also on faulty (i.e., unreliable) inference. And, it seems that the idea that a huge n takes away the problems of non-random sampling is, again, just dumb. sigh. cv
