Nonresponse Bias

Many studies have attempted to determine if there is a difference between respondents and nonrespondents. Some researchers have reported that people who respond to surveys answer questions differently than those who do not. Others have found that late responders answer differently than early responders, and that the differences may be due to the different levels of interest in the subject matter. One researcher, who examined a volunteer organization, reported that those more actively involved in the organization were more likely to respond.
Demographic characteristics of nonrespondents have been investigated by many researchers. Most studies have found that nonresponse is associated with low education. However, one researcher reported that demographic characteristics such as age, education, and employment status were the same for respondents and nonrespondents. Another study found that nonrespondents were more often single males.
Most researchers view nonresponse bias as a continuum, ranging from fast responders to slow responders (with nonresponders defining the end of the continuum). In fact, one study used extrapolation to estimate the magnitude of bias created by nonresponse. Another group of researchers argue that nonresponse should not be viewed as a continuum, and that late respondents do not provide a suitable basis for estimating the characteristics of nonrespondents.
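The extrapolation approach mentioned above can be illustrated with a small sketch. Assuming respondents are grouped into successive mailing waves, one simple method fits a straight line through the wave means and projects it one wave beyond the last as a rough proxy for nonrespondents. The function name and all wave data below are hypothetical, invented for illustration:

```python
# Hedged sketch of wave extrapolation: fit a line through successive
# response-wave means and project one wave past the last to estimate
# the nonrespondent mean. All numbers below are invented.

def extrapolate_nonrespondent_mean(wave_means):
    """Least-squares line through (wave number, wave mean) pairs,
    evaluated one wave beyond the last observed wave."""
    n = len(wave_means)
    xs = list(range(1, n + 1))
    x_bar = sum(xs) / n
    y_bar = sum(wave_means) / n
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, wave_means))
             / sum((x - x_bar) ** 2 for x in xs))
    intercept = y_bar - slope * x_bar
    return intercept + slope * (n + 1)  # projected "wave n+1" mean

# Hypothetical mean attitude scores for three mailing waves:
waves = [4.2, 3.9, 3.6]
estimate = extrapolate_nonrespondent_mean(waves)
print(round(estimate, 2))  # projected mean for nonrespondents
```

The gap between this projection and the respondent mean gives a rough magnitude for the nonresponse bias, which is exactly the kind of estimate the continuum view permits and its critics dispute.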
The Order of the Questions

Items on a questionnaire should be grouped into logically coherent sections. Grouping questions that are similar will make the questionnaire easier to complete, and the respondent will feel more comfortable. Questions that use the same response formats, or those that cover a specific topic, should appear together.
Each question should follow comfortably from the previous question. Writing a questionnaire is similar to writing anything else. Transitions between questions should be smooth. Questionnaires that jump from one unrelated topic to another feel disjointed and are not likely to produce high response rates.
Most investigators have found that the order in which questions are presented can affect the way that people respond. One study reported that questions in the latter half of a questionnaire were more likely to be omitted, and contained fewer extreme responses. Some researchers have suggested that it may be necessary to present general questions before specific ones in order to avoid response contamination. Other researchers have reported that when specific questions were asked before general questions, respondents tended to exhibit greater interest in the general questions.
It is not clear whether question order affects response. A few researchers have reported that question order does not affect responses, while others have reported that it does. Generally, it is believed that question-order effects exist in interviews, but not in written surveys.
Anonymity and Confidentiality

An anonymous study is one in which nobody (not even the researcher) can identify who provided data. It is difficult to conduct an anonymous questionnaire through the mail because of the need to follow up with nonresponders. The only way to do a follow-up is to mail another survey or reminder postcard to the entire sample. However, it is possible to guarantee confidentiality, whereby those conducting the study promise not to reveal the information to anyone. For the purpose of follow-up, identifying numbers on questionnaires are generally preferred to using respondents' names. It is important, however, to explain why the number is there and what it will be used for.
Some studies have shown that response rate is affected by the anonymity/confidentiality policy of a study. Others have reported that responses became more distorted when subjects felt threatened that their identities would become known. Others have found that anonymity and confidentiality issues do not affect response rates or responses.
The Length of a Questionnaire

As a general rule, long questionnaires get less response than short questionnaires. However, some studies have shown that the length of a questionnaire does not necessarily affect response. More important than length is question content. Subjects are more likely to respond if they are involved and interested in the research topic. Questions should be meaningful and interesting to the respondent.
Incentives

Many researchers have examined the effect of providing a variety of nonmonetary incentives to subjects. These include token gifts such as small packages of coffee, ball-point pens, postage stamps, key rings, trading stamps, participation in a raffle or lottery, or a donation to a charity in the respondent's name. Generally (although not consistently), nonmonetary incentives have resulted in an increased response. A meta-analysis of 38 studies that used some form of an incentive revealed that monetary and nonmonetary incentives were effective only when enclosed with the survey. The promise of an incentive for a returned questionnaire was not effective in increasing response. The average increase in response rate for monetary and nonmonetary incentives was 19.1 percent and 7.9 percent, respectively.
Most researchers have found that larger monetary incentives generally work better than smaller ones. One researcher proposed a diminishing-return model, where increasing the amount of the incentive would have a decreasing effect on response rate. A meta-analysis of fifteen studies showed that an incentive of 25¢ increased the response rate by an average of 16 percent, and $1 increased the response by 31 percent.
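The diminishing-return pattern is visible in those meta-analysis figures with a little arithmetic: quadrupling the incentive from 25¢ to $1 only roughly doubles the gain, so the yield per cent spent falls. A minimal sketch using the averages quoted above (the variable names are invented):

```python
# Response-rate gain per cent of incentive, using the meta-analysis
# averages quoted in the text: 25 cents -> +16 points, $1 -> +31 points.
incentives_cents = {25: 16.0, 100: 31.0}

for cents, gain in incentives_cents.items():
    yield_per_cent = gain / cents
    print(f"{cents} cents: +{gain} points ({yield_per_cent:.2f} points per cent)")
```

The per-cent yield drops from 0.64 to 0.31 as the incentive grows, which is the diminishing-return behavior the model describes.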
Notification of a Cutoff Date

Several researchers have examined the effect of giving subjects a deadline for responding. While a deadline will usually reduce the time from the mailing until the returns begin arriving, it appears that it does not increase response, and may even reduce it. One possible explanation is that a cutoff date might dissuade procrastinators from completing the questionnaire after the deadline has passed.
Reply Envelopes and Postage

A good questionnaire makes it convenient for the respondent to reply. Mail surveys that include a self-addressed, stamped reply envelope get a better response than those that use business reply envelopes. Some investigators have suggested that people might feel obligated to complete the questionnaire because of the guilt associated with throwing away money--that is, the postage stamp. Others have pointed out that a business reply permit might suggest advertising to some people. Another possibility is that a business reply envelope might be perceived as less personal.
A meta-analysis of 34 studies comparing stamped versus business reply postage showed that stamped reply envelopes had a 9 percent greater aggregate effect than business reply envelopes. Another meta-analysis, of nine studies, found an aggregate effect of 6.2 percent.
The Outgoing Envelope and Postage

Several researchers have examined whether there is a difference in response between first-class postage and bulk rate. A meta-analysis of these studies revealed a small, but significant, aggregate difference of 1.8 percent. Envelopes with bulk mail permits might be perceived as "junk mail", unimportant, or less personal, and this is reflected in lower response rates.
A few researchers have also examined whether metered mail or stamps work better on the outgoing envelope. The results of these studies suggest a small increase in response favoring a stamped envelope. A meta-analysis of these studies revealed that the aggregate difference was slightly less than one percent.
Many researchers have reported increased response rates by using registered, certified, or special delivery mail to send the questionnaire. The wisdom of using these techniques must be weighed against the consequences of angering respondents who make a special trip to the post office, only to find a questionnaire.
It is not clear whether a typed or hand-addressed envelope affects response. One study, conducted at the University of Minnesota, reported that students responded better to hand-addressed postcards, while professors responded better to typed addresses.
This writer could find no studies that examined whether gummed labels have a deleterious effect on response rate, although one might predict that response rates would be lower for gummed labels because they have the appearance of less personalization.
This writer could also find no studies that examined whether the color of the envelope affects response rate. First impressions are important, and the respondent's first impression of the study usually comes from the envelope containing the survey. Therefore, one might predict that a colored envelope would have a positive impact on response because of its uniqueness.
The "Don't Know," "Undecided," and "Neutral" Response Options

Response categories are developed for questions in order to facilitate the process of coding and analysis. Many studies have looked at the effects of presenting a "don't know" option in attitudinal questions. The "don't know" option allows respondents to state that they have no opinion or have not thought about a particular issue.
The physical placement of the "undecided" category (at the midpoint of the scale, or separated from the scale) can change response patterns. Respondents are more likely to choose the "undecided" category when it is off to the side of the scale. There are also different response patterns depending on whether the midpoint is labeled "undecided" or "neutral".
Several researchers have found that the physical location of the middle alternative can make a difference in responses, and that placing the middle option at the last position in the question increases the percentage of respondents who select it by over 9 percent. Frequently, offering respondents a middle alternative in a survey question will make a difference in the conclusions that would be drawn from the data. The middle option of an attitudinal scale attracts a substantial number of respondents who might be unsure of their opinion.
Researchers have also studied the "don't know" option for factual questions. Unlike attitude questions, respondents might legitimately not know the answer to a factual question. Surprisingly, the research suggests that the "don't know" option should be included in factual questions. Questions that exclude the "don't know" option produce a greater volume of data, but questions that include it produce more accurate data. Furthermore, there is generally no difference in response rate depending on the inclusion or exclusion of the "don't know" option. There is still a controversy surrounding the "don't know" response category. Many researchers advocate including a "don't know" response category when there is any possibility that the respondent may not know the answer to a question. The best advice is probably to use a "don't know" option for factual questions, but not for attitude questions.