EVERYTHING YOU WANT TO KNOW ABOUT QUESTIONNAIRES

Copyright 1993 David S. Walonick, Ph.D.

All rights reserved.

Excerpts from Survival Statistics - an applied statistics book for graduate students.


Questionnaires are one of the most popular methods of conducting scholarly research. They provide a convenient way of gathering information from a target population.

This paper will address most of the important issues related to written questionnaires.

Advantages and Disadvantages of Written Questionnaires

Questionnaires are easy to analyze, and most statistical analysis software can easily process them. They are cost effective when compared to face-to-face interviews, mostly because of the costs associated with travel time (Bachrack and Scoble, 1967; Benson, 1946; Hochstim and Athanasopoulos, 1970; Moser and Kalton, 1971; Seitz, 1944). This is especially true for studies involving large sample sizes and large geographic areas (Clausen and Ford, 1947; Goode and Hatt, 1962; Ruckmick, 1930). Written questionnaires become even more cost effective as the number of research questions increases.

Questionnaires are familiar to most people (Berdie, Anderson, and Niebuhr, 1986). Nearly everyone has had some experience completing questionnaires and they generally do not make people apprehensive. They are less intrusive than telephone or face-to-face surveys. When respondents receive a questionnaire in the mail, they are free to complete it on their own time-table (Cahalan, 1951; Jahoda, et al., 1962). Unlike other research methods, the respondent is not interrupted by the research instrument. On the other hand, questionnaires are simply not suited for some people. For example, a written survey to a group of poorly educated people might not work because of reading skill problems. More frequently, some people are turned off by written questionnaires because of misuse (Deutcher, 1956; Norton, 1930).

Written questionnaires reduce interviewer bias because there is uniform question presentation (Jahoda, et al., 1962). Unlike in-person interviewing, there are no verbal or visual clues to influence a respondent to answer in a particular way. Many investigators have reported that interviewer voice inflections and mannerisms can bias responses (Barath and Cannell, 1976; Benson, 1946; Boyd and Westfall, 1965, 1970; Cahalan, 1951; Collins, 1970; Dohrenwend, Colombotos, and Dohrenwend, 1968; Franzen and Lazersfeld, 1945). Written surveys are not subject to this bias because there is no interviewer. On the other hand, the lack of an interviewer limits the researcher's ability to probe responses. Structured questionnaires often lose the "flavor of the response", because respondents often want to qualify their answers (Walonick, 1993). By allowing frequent space for comments, the researcher can partially overcome this disadvantage.

A common criticism of mail surveys is that they often have low response rates (Benson, 1946; Phillips, 1941; Robinson, 1952). Low response is the curse of statistical analysis, and it can dramatically lower confidence in the results. While response rates vary widely from one questionnaire to another, well-designed studies consistently produce high response rates.

When returned questionnaires arrive in the mail, it's natural to assume that the respondent is the same person you sent the questionnaire to. A number of researchers have reported that this may not actually be the case (Clausen and Ford, 1947; Franzen and Lazersfeld, 1945; Moser and Kalton, 1971; Scott, 1961). Many times business questionnaires get handed to other employees for completion. Housewives sometimes respond for their husbands. Kids respond as a prank. For a variety of reasons, the respondent may not be who you think it is. In a summary of five studies sponsored by the British Government, Scott (1961) reports that up to ten percent of the returned questionnaires had been completed by someone other than the intended person.

Response Rate

Response rate is the single most important indicator of how much confidence can be placed in the results of a mail survey. A low response rate can be devastating to the reliability of a study (Benson, 1946; Phillips, 1941; Robinson, 1952). Fortunately, "low response rates are not an inherent shortcoming of mail surveys", and the researcher must do everything possible to maximize response (Berdie, Anderson, and Niebuhr, 1986, p. 17). Much of the research in questionnaire methodology has centered around techniques to maximize response. However, Jones and Lang (1980) point out that increasing the response rate does not necessarily improve the precision of survey results.
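As a simple arithmetic illustration (the numbers and function name below are hypothetical, not drawn from any of the studies cited), response rate is just the proportion of usable returned questionnaires to the number mailed:

```python
# Minimal sketch with hypothetical figures: response rate is the
# percentage of mailed questionnaires that come back usable.
def response_rate(returned, sent):
    """Return the response rate as a percentage of questionnaires sent."""
    return 100.0 * returned / sent

# e.g., 640 usable returns from a mailing of 1,000 questionnaires
rate = response_rate(640, 1000)
print(f"Response rate: {rate:.1f}%")
```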

Following up on Nonrespondents

One of the most powerful tools for increasing response is to use follow-ups or reminders (Scott, 1961; Toops, 1924). "Traditionally, between 5 and 65 percent of those sent questionnaires respond without follow-up reminders. These rates are too low to yield confident results" (Berdie, Anderson, and Niebuhr, 1986, p. 58). The need to follow up on nonrespondents is clear.

Researchers can increase the response from follow-up attempts by including another copy of the questionnaire (Futrell and Lamb, 1981; Goldstein and Kroll, 1957; Orr and Neyman, 1965; Sivan, Epley, and Burns, 1980). The most important consideration is that the investigator "designs the follow-up procedure by taking into consideration the unique characteristics of the people in the sample" (Berdie, Anderson, and Niebuhr, 1986, p. 58). The most successful follow-ups have been achieved by phone calls (Roscoe, Lang, and Sheth, 1975; Sheth and Roscoe, 1975; Speer and Zold, 1971).

Many researchers have examined whether postcard follow-ups are effective in increasing response (Cox, Anderson, and Fulcher, 1974; Hinrichs, 1975; Jones and Lang, 1980; Keane, 1963; Peterson, 1975; Watson, 1965; Wiseman, 1973). The vast majority of these studies show that a follow-up postcard increases response rate, and a meta-analysis by Fox, Crask, and Kim (1988) reveals an aggregate gain of 3.5 percent. The postcard serves as an effective reminder for subjects who have forgotten to complete the survey (Dillman, 1978).
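The aggregate gains reported in these meta-analyses can be illustrated with a simplified pooled-rate calculation. The study counts below are hypothetical, and this pooling is a rough sketch, not Fox, Crask, and Kim's actual statistical procedure:

```python
# Simplified illustration of an "aggregate gain": pool the returns from
# several hypothetical studies, each comparing a postcard follow-up
# condition against a no-follow-up control, then compare pooled rates.
studies = [
    # (returns w/ postcard, sent w/ postcard, returns w/o, sent w/o)
    (312, 500, 290, 500),
    (180, 300, 172, 300),
    (451, 700, 430, 700),
]

with_returns = sum(s[0] for s in studies)
with_sent = sum(s[1] for s in studies)
without_returns = sum(s[2] for s in studies)
without_sent = sum(s[3] for s in studies)

# Difference between pooled response rates, in percentage points
gain = 100.0 * (with_returns / with_sent - without_returns / without_sent)
print(f"Aggregate gain from postcard follow-up: {gain:.1f} percentage points")
```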

Nonresponse Bias

Many studies have attempted to determine if there is a difference between respondents and nonrespondents. Some researchers have reported that people who respond to surveys answer questions differently than those who do not (Benson, Booman, and Clark, 1951; Gough and Hall, 1977). Others have found that late responders answer differently than early responders, and that the differences may be due to the different levels of interest in the subject matter (Bauer, 1947; Brown and Wilkins, 1978; Reid, 1942; Speer and Zold, 1971). One researcher, who examined a volunteer organization, reported that those more actively involved in the organization were more likely to respond (Donald, 1960).

Demographic characteristics of nonrespondents have been investigated by many researchers. Most studies have found that nonresponse is associated with low education (Gannon, Northern, and Carrol, 1971; Robins, 1963; Suchman and McCandless, 1940). However, one researcher (Barton, 1980) reported that demographic characteristics such as age, education, and employment status were the same for respondents and nonrespondents. Another study found that nonrespondents were more often single males (Gannon, Northern, and Carrol, 1971).

Most researchers view nonresponse bias as a continuum, ranging from fast responders to slow responders (with nonresponders defining the end of the continuum). In fact, one study used extrapolation to estimate the magnitude of bias created by nonresponse (Armstrong and Overton, 1977). Another group of researchers argue that nonresponse should not be viewed as a continuum, and "that late respondents do not provide a suitable basis for estimating the characteristics of nonrespondents" (Ellis, Endo, and Armer, 1970).

Goal Definition

Most problems with questionnaires can be traced back to the design phase of the project. Well-defined goals are the best way to assure a good questionnaire design (Bartholomew, 1963; Freed, 1964). One of the best ways to clarify the study goals is to decide how the information will be used (Berdie, Anderson, and Niebuhr, 1986; Oppenheim, 1966; Payne, 1951). Study goals should be committed to writing (Freed, 1964). When the goals of a study can be expressed in a few clear and concise sentences, the design of the questionnaire becomes considerably easier. The questionnaire is developed to directly address the goals of the study.

One important way to assure a successful survey is to include other experts and relevant decision-makers in the questionnaire design process (Walonick, 1993). Their suggestions will improve the questionnaire and they will subsequently have more confidence in the results.

General Layout and Format Considerations

The physical appearance of a written survey may largely determine if the respondent will return it (Levine and Gordon, 1958). Therefore, it is important to use professional production methods for the questionnaire--either desktop publishing or typesetting and keylining (Robinson and Agisim, 1951; Robinson, 1952; Sletto, 1940; Toops, 1937).

Every questionnaire should have a title that is short and meaningful to the respondent (Berdie, Anderson, and Niebuhr, 1986). The rationale is that a questionnaire with a title will be perceived as more credible than one without.

Well-designed questionnaires include clear and concise instructions on how they should be completed. These must be very easy to understand, so use short sentences and basic vocabulary. The questionnaire itself should have the return address printed on it since questionnaires often get separated from the reply envelopes (Berdie, Anderson, and Niebuhr, 1986).

Questionnaires should use simple and direct language (Norton, 1930). The questions must be clearly understood by the respondent, and have the same meaning that the researcher intended (Freed, 1964; Huffman, 1948). The wording of a question should be simple, to the point, and familiar to the target population (Freed, 1964; Moser and Kalton, 1971). Surprisingly, several researchers (Blair et al., 1977; Laurent, 1972) have found that longer questions elicit more information than shorter ones, and that the information tends to be more accurate. However, it is generally accepted that questionnaire items should be simply stated and as brief as possible (Payne, 1951). The rationale is that this will reduce misunderstandings and make the questionnaire appear easier to complete. One way to eliminate misunderstandings is to emphasize crucial words in each item by using bold, italics or underlining (Berdie, Anderson, Niebuhr, 1986).

Uncommon words, jargon, and abbreviations may be included in a questionnaire provided that they are familiar to the population being investigated (Bartholomew, 1963). Slang is often ambiguous, and should be excluded from all questionnaires (Payne, 1951).

Questionnaires should leave adequate space for respondents to make comments. One criticism of questionnaires is their inability to retain the "flavor" of a response. Leaving space for comments will provide valuable information not captured by the response categories. Leaving white space also makes the questionnaire look easier and this might increase response (Berdie, Anderson, and Niebuhr, 1986).

Researchers should design the questionnaire so it holds the respondent's interest. The goal is to make the respondent want to complete the questionnaire. One way to keep a questionnaire interesting is to provide variety in the type of items used. Varying the questioning format will also prevent respondents from falling into "response sets".

If a questionnaire is more than a few pages and is held together by a staple, include some identifying data on each page (such as a respondent ID number). Pages often accidentally separate (Berdie, Anderson, and Niebuhr, 1986).

The Order of the Questions

Questionnaires should begin with a few non-threatening and easy to answer items (Erdos, 1957; Robinson, 1952; Sletto, 1940). If the first items are too difficult or threatening, there is little chance that the person will complete the questionnaire (Levine and Gordon, 1958). People generally look at the first few questions before deciding whether or not to complete the questionnaire. The researcher can encourage response by starting with a few interesting and nonthreatening questions.

Likewise, the most important items should appear in the first half of the questionnaire (Levine and Gordon, 1958). Respondents often send back partially completed questionnaires. By putting the most important items near the beginning, the partially completed questionnaires will still contain important information.

Items on a questionnaire should be grouped into logically coherent sections (Levine and Gordon, 1958; Robinson, 1952; Seitz, 1944). Grouping questions that are similar will make the questionnaire easier to complete, and the respondent will feel more comfortable. Questions that use the same response formats, or those that cover a specific topic, should appear together (Freed, 1964).

Each question should follow comfortably from the previous question. Writing a questionnaire is similar to writing anything else. Transitions between questions should be smooth. Questionnaires that jump from one unrelated topic to another feel disjointed and are not likely to produce high response rates.

Most investigators have found that the order in which questions are presented can affect the way that people respond (Noelle-Neumann, 1970; Schuman and Presser, 1981; Smith, 1982; Sudman, Seymour, and Bradburn, 1974; Tourangeau and Rasinski, 1988). One study reported that questions in the latter half of a questionnaire were more likely to be omitted, and contained fewer extreme responses (Kraut, Wolfson, and Rothenberg, 1975). Carp (1974) suggested that it may be necessary to present general questions before specific ones in order to avoid response contamination. McFarland (1981) reported that when specific questions were asked before general questions, respondents tended to exhibit greater interest in the general questions.

A few researchers, however, have found that question order does not affect responses. Bradburn and Mason (1964) reported that interviews involving self-reports and self-evaluations were unaffected by question order. Clancey and Wachsler (1971) found that responses to questions were similar regardless of where the questions appeared in a questionnaire. Bishop et al. (1988) reported that question-order effects existed in interviews, but not in written surveys. Ayidiya and McClendon (1990) reported mixed results, where some questions were subject to order effects, and other similar questions were not.

Anonymity and Confidentiality

"An anonymous study is one in which nobody (not even the study directors) can identify who provided data on completed questionnaires." (Berdie, Anderson, Niebuhr, 1986, p. 47) It is generally not possible to conduct an anonymous questionnaire through the mail because of the need to follow-up on nonresponders. However, it is possible to guarantee confidentiality, where the those conducting the study promise not to reveal the information to anyone. For the purpose of follow-up, identifying numbers on questionnaires are generally preferred to using respondents' names. It is important, however, to explain why the number is there and what it will be used for.

Some studies have shown that response rate is affected by the anonymity/confidentiality policy of a study (Jones, 1979; Dickson et al., 1977; Epperson and Peck, 1977). Klein, Maher, and Dunnington (1967) reported that responses became more distorted when subjects felt threatened that their identities would become known. Others have found that anonymity/confidentiality issues do not affect response rates or responses (Butler, 1973; Fuller, 1974; Futrell and Swan, 1977; Skinner and Childers, 1980; Watkins, 1978; Wildman, 1977). One researcher reported that the lack of anonymity actually increased response (Fuller, 1974).

The Length of a Questionnaire

As a general rule, long questionnaires get less response than short questionnaires (Brown, 1965; Leslie, 1970). However, some studies have shown that the length of a questionnaire does not necessarily affect response (Berdie, 1973; Champion and Sear, 1979; Childers and Ferrell, 1979; Duncan, 1979; Layne and Thompson, 1981; Mason, Dressel, and Bain, 1961). "Seemingly more important than length is question content." (Berdie, Anderson, and Niebuhr, 1986, p. 53) Subjects are more likely to respond if they are involved and interested in the research topic (Bauer, 1947; Brown and Wilkins, 1978; Reid, 1942; Speer and Zold, 1971). Questions should be meaningful and interesting to the respondent.

Color of the Paper

One study found that the color of the paper (yellow, pink, and white) did not have an effect on response (Sharma and Singh, 1967). Nevertheless, Berdie, Anderson, and Niebuhr (1986) suggest that color might make the survey more appealing. Another early study examined the ink and paper color combinations that provide the greatest legibility (Paterson and Tinker, 1940). The authors suggest three different ink colors for white paper: black, grass green, and lustre blue. The only other recommended combination is black ink on yellow paper.

Some investigators have examined the effect of using a green paper compared to white paper. Two studies (Gullahorn and Gullahorn, 1963; Pressley and Tullar, 1977) reported no significant differences in response rates, while another (Pucel, Nelson and Wheeler, 1971) reported a 9.1 percent difference. A meta-analysis of these studies calculated an average aggregate increase of 2.0 percent when using a green questionnaire (Fox, Crask, and Kim, 1988).

Incentives

Many researchers have examined the effect of providing a variety of nonmonetary incentives to subjects. These include token gifts such as small packages of coffee, ball-point pens, postage stamps, or key rings (Nederhof, 1983; Pucel, Nelson, and Wheeler, 1971; Whitmore, 1976), trading stamps (Brennan, 1958), participation in a raffle or lottery (Knox, 1951; Blythe, 1986), or a donation to a charity in the respondent's name (Furse and Stewart, 1982; Hubbard and Little, 1987; Robertson and Bellenger, 1978). Generally (although not consistently), nonmonetary incentives have resulted in an increased response.

There can be little doubt that monetary incentives increase response. Only a few investigators have reported no increase in response (Dohrenwend, 1970; Landy and Bates, 1973). The overwhelming majority have reported increased response by including monetary incentives (Blumberg, Fuller, and Hare, 1974; Crowley, 1959; Ferber and Sudman, 1974; Friedman and Augustine, 1979; Furse and Stewart, 1982; Furse, Stewart and Rados, 1981; Godwin, 1979; Goodstadt, et al., 1977; Hackler and Bourgette, 1973; Hansen, 1980; Huck and Gleason, 1974; James and Bolstein, 1990; Kimball, 1961; McDaniel and Jackson, 1984; Newman, 1962; Pressley and Tullar, 1977; Robin and Walter, 1976; Watson, 1965; Wiseman, 1973; Wotruba, 1966).

Church (1993) conducted a meta-analysis of 38 prior studies that used some form of an incentive. Monetary and nonmonetary incentives were effective only when enclosed with the survey. The promise of an incentive for a returned questionnaire was not effective in increasing response. The average increase in response rate for monetary and nonmonetary incentives was 19.1 percent and 7.9 percent, respectively.

Many researchers have found that larger monetary incentives generally work better than smaller ones (Armstrong, 1975; Chromy and Horvitz, 1978; Doob, Freedman, and Carlsmith, 1973; Gunn and Rhodes, 1981; James and Bolstein, 1990; Linsky, 1975; Yu and Cooper, 1983). Armstrong (1975) proposed a diminishing return model, where increasing the amount of the incentive would have a decreasing effect on response rate. A meta-analysis performed by Fox, Crask, and Kim (1988) applied Armstrong's diminishing return model to fifteen studies. An incentive of 25 cents increased the response rate by an average of 16 percent, and $1 increased the response by 31 percent.

It is not known whether the effects of incentives disappear after follow-up mailings. Kephart and Bressler (1958) found that a 25-cent incentive significantly increased response; however, the effect disappeared after one follow-up mailing. Another study using a 25-cent incentive (Goodstadt, et al., 1977) reported a significant difference, and the difference continued to be significant even after three follow-up mailings. James and Bolstein (1990) reported that four mailings without an incentive produced a better response than one mailing with an incentive. However, incentives of $1 and $2 with follow-ups produced a significantly better response than the same number of follow-ups with no incentive. Nederhof (1983) used a ball-point pen as an incentive and reported that "the effect of nonmonetary incentives on response rates disappears after the first mailing."

It is not clear whether offering to share the results of the research provides sufficient incentive to affect response. Mullner, Levy, Byre, and Matthews (1982) report that mentioning the offer in a cover letter did not increase response, while two older studies found that it did (Koos, 1928; Wiseman, 1973).

Notification of a Cutoff Date

Several researchers have examined the effect of giving subjects a deadline for responding (Duncan, 1979; Erdos, 1957; Ferriss, 1951; Jones and Lang, 1980; Jones and Linda, 1978; Nevin and Ford, 1976; Houston and Nevin, 1977; Peterson, 1975; Vocino, 1977). While a deadline will usually reduce the time from the mailing until the returns begin arriving, it appears that it does not increase response, and may even reduce the response. One possible explanation is that a cutoff date might dissuade procrastinators from completing the questionnaire after the deadline has passed. A meta-analysis by Fox, Crask and Kim (1988) revealed an aggregate increase in response rate of 1.7 percent, which was not significant.

Reply Envelopes and Postage

A good questionnaire makes it convenient for the respondent to reply. Mail surveys that include a self-addressed stamped reply envelope get better response than business reply envelopes, although they are more expensive since you also pay for the non-respondents (Brook, 1978; Gullahorn and Gullahorn, 1963; Harris and Guffey, 1978; Jones and Linda, 1978; Kimball, 1961; Martin and McConnell, 1970; McCrohan and Lowe, 1981; Peterson, 1975; Price, 1950; Veiga, 1974; Watson, 1965; Wiseman, 1973). Some investigators have suggested that people might feel obligated to complete the questionnaire because of the guilt associated with throwing away money--that is, the postage stamp (Moser, 1971; Scott, 1961). Others have pointed out that using a business reply permit might suggest advertising to some people (Goode and Hatt, 1962). Another possibility is that a business reply envelope might be perceived as less personal (Armstrong and Lusk, 1987).

Armstrong and Lusk (1987) performed a meta-analysis on 34 studies comparing stamped versus business reply postage. They calculated that stamped reply envelopes had a 9 percent greater aggregate effect than business reply envelopes. In a subsequent meta-analysis on nine studies, Fox, Crask, and Kim (1988) reported an aggregate effect of 6.2 percent.

The Outgoing Envelope and Postage

Several researchers have examined whether there is a difference in response between first class postage and bulk rate (Brook, 1978; Gullahorn and Gullahorn, 1963; Kernan, 1971; McCrohan and Lowe, 1981; Watson, 1965). A meta-analysis of these studies (Fox, Crask, and Kim, 1988) revealed a small, but significant, aggregate difference of 1.8 percent. Envelopes with bulk mail permits might be perceived as "junk mail", unimportant, or less personal (Armstrong and Lusk, 1987), and this may be reflected in a lower response rate.

A few researchers have also examined whether metered mail or stamps work better on the outgoing envelope (Dillman, 1972; Kernan, 1971; Peterson, 1975; Vocino, 1977). The results of these studies suggest a small increase in response favoring a stamped envelope. A meta-analysis of these studies (Fox, Crask, and Kim, 1988) revealed that the aggregate difference was slightly less than one percent.

Many researchers have reported increased response rates by using registered, certified, or special delivery mail to send the questionnaire (Champion and Sear, 1969; Eckland, 1965; Eisinger, et al., 1974; Gullahorn and Gullahorn, 1959; House, Gerber, and McMichael, 1977; Tedin and Hofstetter, 1982). The wisdom of using these techniques must be weighed against the consequences of angering respondents that make a special trip to the post office, only to find a questionnaire (Berdie, Anderson, and Niebuhr, 1986; Slocum and Swanson, 1956).

It is not clear whether a typed or hand-addressed envelope would affect response. One study, conducted at the University of Minnesota, reported that students responded better to hand-addressed postcards, while professors responded better to typed addresses (Anderson and Berdie, 1972).

This researcher could find no studies that examined whether gummed labels would have a deleterious effect on response rate, although we might predict that response rate would be less for gummed labels because they have the appearance of less personalization.

This researcher could also find no studies that examined whether the color of the envelope affects response rate. First impressions are important, and the respondent's first impression of the study usually comes from the envelope containing the survey. Therefore, we might predict that color would have a positive impact on response because of its uniqueness.

The "Don't Know", "Undecided", and "Neutral" Response Options

Response categories are developed for questions in order to facilitate the process of coding and analysis. Many studies have looked at the effects of presenting a "don't know" option in attitudinal questions (Bishop, Tuchfarber, and Oldendick, 1986; Bishop, Oldendick, and Tuchfarber, 1983, 1978; Bishop et al., 1980; Schuman and Presser, 1981, 1978). The "don't know" option allows respondents to state that they have no opinion or have not thought about a particular issue (Poe et al., 1988).

Holdaway (1971) found that the physical placement of the "undecided" category (at the midpoint of the scale, or separated from the scale) could change response patterns. Respondents were more likely to choose the "undecided" category when it was off to the side of the scale. This study also reported different response patterns depending on whether the midpoint was labeled "undecided" or "neutral".

Bishop (1987) also found that the physical location of the middle alternative can make a difference in responses, and that placing the middle option at the last position in the question increased the percentage of respondents who selected it by over 9 percent. Bishop states that "offering respondents a middle alternative in a survey question will generally make a significant difference in the conclusions that would be drawn [from the data]." The middle option of an attitudinal scale attracts a substantial number of respondents who might be unsure of their opinion.

Poe et al. (1988) studied the "don't know" option for factual questions. Unlike attitude questions, respondents might legitimately not know the answer to a factual question. Their findings suggest that the "don't know" option should not be included in factual questions. Questions that excluded the "don't know" option produced a greater volume of accurate data. They found no difference in response rate depending on the inclusion or exclusion of the "don't know" option. Poe's findings directly contradict several previous authors who advocate including a "don't know" response category when there is any possibility that the respondent may not know the answer to a question (Bartholomew, 1963; Jahoda, Deutsch, and Cook, 1962; Payne, 1951).

Question Wording

The wording of a question is extremely important. Researchers strive for objectivity in surveys and, therefore, must be careful not to lead the respondent into giving a desired answer. Unfortunately, the effects of question wording are among the least understood areas of questionnaire research.

Many investigators have confirmed that slight changes in the way questions are worded can have a significant impact on how people respond (Arndt and Crane, 1975; Belkin and Lieverman, 1967; Cantril, 1944; Kalton, Collins, and Brook, 1978; Petty, Rennier, and Cacioppo, 1987; Rasinski, 1989; Schuman and Presser, 1981, 1977). Several authors have reported that minor changes in question wording can produce more than a 25 percent difference in people's opinions (Payne, 1951; Rasinski, 1989).

One important area of question wording is the effect of the interrogation and assertion question formats. The interrogation format asks a question directly, while the assertion format asks subjects to indicate their level of agreement or disagreement with a statement. Schuman and Presser (1981) reported no significant differences between the two formats; however, other researchers hypothesized that the interrogation format is more likely to encourage subjects to think about their answers (Burnkrant and Howard, 1984; Petty, Cacioppo, and Heesacker, 1981; Swasy and Munch, 1985; Zillman, 1972). Petty, Rennier and Cacioppo (1987) found that the interrogation format caused greater polarization in subjects' responses, suggesting that it elicited greater cognition than the assertion format.

Other investigators have looked at the effects of modifying adjectives and adverbs (Bradburn and Miles, 1979; Hoyt, 1972; Schaeffer, 1991). Words like usually, often, sometimes, occasionally, seldom, and rarely are "commonly" used in questionnaires, although it is clear that they do not mean the same thing to all people. Simpson (1944), and a replication by Hakel (1968), looked at twenty modifying adjectives and adverbs. These researchers found that the precise meanings of these words varied widely between subjects, and between the two studies. However, the correlation between the two studies with respect to the relative ranking of the words was .99.

John Hoyt (1972) conducted a study on how people interpret quantifying adjectives. The results show that some of the adjectives had high variability and others had low variability. The following adjectives were found to have highly variable meanings: a clear mandate, most, numerous, a substantial majority, a minority of, a large proportion of, a significant number of, many, a considerable number of, and several. Other adjectives produced less variability and generally had shared meaning. These were: lots, almost all, virtually all, nearly all, a majority of, a consensus of, a small number of, not very many of, almost none, a damn few, hardly any, a couple, and a few.

Sponsorship

There have been several studies to determine if the sponsor of a survey might affect response rate (Houston and Nevin, 1977; Jones and Lang, 1980; Jones and Linda, 1978; Peterson, 1975). The overwhelming majority of these studies have clearly demonstrated that university sponsorship is the most effective. A meta-analysis of these studies revealed an aggregate increase in response rate of 8.9 percent (Fox, Crask and Kim, 1988). Dillman (1978) suggested that this may be due to the past benefits that the respondent has received from the university. Another possibility is that a business sponsor implies advertising or sales to potential respondents.

Prenotification Letters

Many researchers have studied prenotification letters to determine if they increase response rate (Ford, 1968; Heaton, 1965; Heberlein and Baumgartner, 1978; Jones and Lang, 1980; Myers and Haug, 1969; Parson and Medford, 1972; Pucel, Nelson and Wheeler, 1971; Stafford, 1966; Walker and Burdick, 1977). A meta-analysis of these studies revealed an aggregate increase in response rate of 7.7 percent (Fox, Crask and Kim, 1988). Dillman (1978) proposed that prenotification letters might help to establish the legitimacy of a survey, thereby contributing to a respondent's trust. Another possibility is that a prenotification letter builds expectation, and reduces the possibility that a potential respondent might disregard the survey when it arrives.

The prenotification letter should address five items (Walonick, 1993):

1. Briefly describe why the study is being done.

2. Identify the sponsors.

3. Explain why the person receiving the pre-letter was chosen.

4. Justify why the respondent should complete the questionnaire.

5. Explain how the results will be used.

Cover Letters

The cover letter is an essential part of the survey. To a large degree, the cover letter will affect whether or not the respondent completes the questionnaire. It is important to maintain a friendly tone and keep it as short as possible (Goode and Hatt, 1962). The importance of the cover letter should not be underestimated. It provides an opportunity to persuade the respondent to complete the survey. If the questionnaire can be completed in less than fifteen minutes, the response rate can be increased by mentioning this in the cover letter (Nixon, 1954).

Flattering the respondent in the cover letter does not seem to affect response (Hendrick, et al., 1972). Altruism or an appeal to the social utility of a study has occasionally been found to increase response (Yu and Cooper, 1983), but more often, it is not an effective motivator (Linsky, 1965; Roberts, McGory, and Forthofer, 1978).

The cover letter should address seven items (Walonick, 1993):

1. Briefly describe why the study is being done.

2. Identify the sponsors.

3. Mention the incentive.

4. Mention inclusion of a stamped, self-addressed return envelope.

5. Encourage prompt response without using deadlines.

6. Describe the confidentiality/anonymity policy.

7. Give the name and phone number of someone they can call.

Personalization in a Cover Letter

There is no definitive answer on whether or not to personalize cover letters. Some researchers (Andreasen, 1970; Houston and Jefferson, 1975) have found that personalized cover letters can be detrimental to response when anonymity or confidentiality is important to the respondent.

The literature regarding personalization is mixed. Some researchers have found that personalized cover letters with hand-written signatures helped response rates (Carpenter, 1974; Cox, Anderson, and Fulcher, 1974; Dillman and Frey, 1974; Fantasia, 1977; Linsky, 1965; Snelling, 1969). Other investigators, however, have reported that personalization has no effect on response (Clausen and Ford, 1947; Forsythe, 1977; Kernan, 1971; Mason, Dressel, and Bain, 1961; Matteson, 1974; Mooren and Rothney, 1956).

Signature on the Cover Letter

The signature on the cover letter has been investigated by several researchers. Ethnic-sounding names and the status of the researcher (professor or graduate student) do not affect response (Friedman and Goldstein, 1975; Horowitz and Sedlacek, 1974). One investigator found that a cover letter signed by the owner of a marina produced a better response than one signed by the sales manager (Labrecque, 1978). The literature is mixed regarding whether a hand-written signature works better than a mimeographed one. Two researchers (Blumenfeld, 1973; Kawash and Aleamoni, 1971) reported that mimeographed signatures worked as well as hand-written ones, while another reported that hand-written signatures produced a better response (Reeder, 1960). Another investigator (Smith, 1977) found that cover letters signed in green ink increased response by over 10 percent.

Postscript on the Cover Letter

It is commonly believed that a hand-written postscript (P.S.) in the cover letter might increase response. One older study (Frazier and Bird, 1958) did find an increase in response; however, two more recent studies (Childers, Pride, and Ferrell, 1980; Pressley, 1979) found no significant difference.

References

Anderson, J., and D. Berdie. 1972. Graduate Assistants at the University of Minnesota. Minneapolis: University of Minnesota Measurement Services Center.

Andreasen, A. 1970. "Personalizing mail questionnaire correspondence." Public Opinion Quarterly 34:273-277.

Armstrong, J. 1975. "Monetary incentives in mail surveys." Public Opinion Quarterly 39:111-116.

Armstrong, J., and T. Overton. 1977. "Estimating nonresponse bias in mail surveys." Journal of Marketing Research 14:396-402.

Armstrong, J., and E. Lusk. 1987. "Return postage in mail surveys: A meta-analysis." Public Opinion Quarterly 51:233-248.

Arndt, J., and E. Crane. 1975. "Response bias, yeasaying, and the double negative." Journal of Marketing Research 12:218-220.

Bachrack, S., and H. Scoble. 1967. "Mail questionnaires efficiency: Controlled reduction of non-response." Public Opinion Quarterly 31:265-271.

Barath, A., and C. Cannell. 1976. "Effect of interviewer's voice intonation." Public Opinion Quarterly 40:370-373.

Bartholomew, W. 1963. Questionnaires in Recreation; Their Preparation and Use. New York: National Recreation Association.

Barton, J. 1980. "Characteristics of respondents and non-respondents to a mail questionnaire." American Journal of Public Health 70:823-825.

Bauer, E. 1947-48. "Response bias in a mail survey." Public Opinion Quarterly 11:594-600.

Belkin, M., and S. Lieverman. 1967. "Effect of question wording on response distribution." Journal of Marketing Research 4:312-313.

Benson, L. 1946. "Mail surveys can be valuable." Public Opinion Quarterly 10:234-241.

Benson, S., W. Booman, and K. Clark. 1951. "A study of interview refusals." Journal of Applied Psychology 35:116-119.

Berdie, D. 1973. "Questionnaire length and response rate." Journal of Applied Psychology 58:278-280.

Bishop, G., R. Oldendick, and A. Tuchfarber. 1978. "Effects of question wording and format on political attitude consistency." Public Opinion Quarterly 42:81-92.

Bishop, G., R. Oldendick, and A. Tuchfarber. 1983. "Effects of filter questions in public opinion surveys." Public Opinion Quarterly 47:528-546.

Bishop, G., R. Oldendick, A. Tuchfarber, and S. Bennett. 1980. "Pseudo-opinions on public affairs." Public Opinion Quarterly 44:198-209.

Bishop, G., A. Tuchfarber, and R. Oldendick. 1986. "Opinions on fictitious issues: The pressure to answer survey questions." Public Opinion Quarterly 50:240-250.

Bishop, G. 1987. "Experiments with the middle response alternative in survey questions." Public Opinion Quarterly 51:220-223.

Blair, E., S. Sudman, N. Bradburn, and C. Stocking. 1977. "How to ask questions about drinking and sex: Response effects in measuring consumer behavior." Journal of Marketing Research 14:316-321.

Blumberg, H., C. Fuller, and A. Hare. 1974. "Response rates in postal surveys." Public Opinion Quarterly 38:113-123.

Blumenfeld, W. 1973. "Effect of appearance of correspondence on response rate to a mail questionnaire survey." Psychological Reports 32:178.

Blythe, B. 1986. "Increasing mailed survey responses with a lottery." Social Work Research and Abstracts 22:18-19.

Boyd, H., and R. Westfall. 1965. "Interviewer bias revisited." Journal of Marketing Research 2:58-63.

Boyd, H., and R. Westfall. 1970. "Interviewer bias once more revisited." Journal of Marketing Research 7:249-253.

Bradburn, N., and W. Mason. 1964. "The effect of question order on response." Journal of Marketing Research 1:57-61.

Bradburn, N., and C. Miles. 1979. "Vague quantifiers." Public Opinion Quarterly 43:92-101.

Brennan, R. 1958. "Trading stamps as an incentive in mail surveys." Journal of Marketing 22:306-307.

Brook, L. 1978. "The effect of different postage combinations on response levels and speed of reply." Journal of the Market Research Society 20:238-244.

Brown, S., and K. Coney. 1977. "Comments on 'Mail survey premiums and response bias'." Journal of Marketing Research 14:385-387.

Brown, T., and B. Wilkins. 1978. "Clues to reasons for nonresponse, and its effect upon variable estimates." Journal of Leisure Research 10:226-231.

Burnkrant, R., and D. Howard. 1984. "Effects of the use of introductory rhetorical questions versus statements on information processing." Journal of Personality and Social Psychology 47:1218-1230.

Butler, R. 1973. "Effects of signed and unsigned questionnaires for both sensitive and nonsensitive items." Journal of Applied Psychology 57:348-349.

Cahalan, D. 1951. "Effectiveness of a mail questionnaire technique in the army." Public Opinion Quarterly 15:575-580.

Cantril, H. 1944. Gauging Public Opinion. Princeton: Princeton University Press.

Carp, F. 1974. "Position effects on interview responses." Journal of Gerontology 29:581-587.

Carpenter, E. 1974-75. "Personalizing mail surveys: A replication and reassessment." Public Opinion Quarterly 38:614-620.

Champion, D., and A. Sear. 1969. "Questionnaire response rate: A methodological analysis." Social Forces 47:335-339.

Childers, T., and O. Ferrell. 1979. "Response rates and perceived questionnaire length in mail surveys." Journal of Marketing Research 16:429-431.

Church, A. 1993. "Estimating the effect of incentives on mail survey response rates: A meta-analysis." Public Opinion Quarterly 57:62-79.

Chromy, J., and D. Horvitz. 1978. "The use of monetary incentives in national assessment household surveys." Journal of the American Statistical Association 73:473-478.

Clancey, K., and R. Wachsler. 1971. "Positional effects in shared cost surveys." Public Opinion Quarterly 35:258-265.

Clausen, J., and R. Ford. 1947. "Controlling bias in mail questionnaires." Journal of the American Statistical Association 42:497-511.

Collins, W. 1970. "Interviewers' verbal idiosyncrasies as a source of bias." Public Opinion Quarterly 34:416-422.

Cox, E., W. Anderson, and D. Fulcher. 1974. "Reappraising mail survey response rates." Journal of Marketing Research 11:413-417.

Crowley, F. 1959. "Compensation of subjects for participation in research." School and Society 87:430-431.

Deutcher, I. 1956. "Physicians' reaction to a mailed questionnaire: A study in 'resistantialism'." Public Opinion Quarterly 20:599-604.

Dickson, J., M. Casey, D. Wyckoff, and W. Wynd. 1977. "Invisible coding of survey questionnaires." Public Opinion Quarterly 41:100-106.

Dillman, D. 1972. "Increasing mail questionnaire response in large samples of the general public." Public Opinion Quarterly 36:254-257.

Dillman, D., and J. Frey. 1974. "Contribution of personalization to mail questionnaire response as an element of a previously tested method." Journal of Applied Psychology 59:297-301.

Dillman, D. 1978. Mail and Telephone Surveys: The Total Design Method. New York: John Wiley and Sons.

Dohrenwend, B. S., J. Colombotos, and B. P. Dohrenwend. 1968. "Social distance and interviewer effects." Public Opinion Quarterly 32:410-422.

Dohrenwend, B.S. 1970-71. "An experimental study of payments to respondents." Public Opinion Quarterly 34:621-624.

Donald, M. 1960. "Implications of nonresponse for the interpretation of mail questionnaire data." Public Opinion Quarterly 24:99-114.

Doob, A., J. Freedman, and M. Carlsmith. 1973. "Effects of sponsor and prepayment on compliance with a mailed request." Journal of Applied Psychology 57:346-347.

Duncan, W. 1979. "Mail questionnaires in survey research: A review of response inducement techniques." Journal of Management 5:39-55.

Eckland, B. 1965. "Effects of prodding to increase mailback returns." Journal of Applied Psychology 49:165-169.

Eisinger, R., W. Janicki, R. Stevenson, and W. Thompson. 1974. "Increasing returns in international mail surveys." Public Opinion Quarterly 38:124-130.

Ellis, R., C. Endo, and M. Armer. 1970. "The use of potential nonrespondents for studying nonresponse bias." Pacific Sociological Review 13:103-109.

Epperson, W., and R. Peck. 1977. "Questionnaire response bias as a function of respondent anonymity." Accident Analysis and Prevention 9:249-256.

Erdos, P. 1957. "How to get higher returns from your mail surveys." Printers Ink 258:30-31.

Fantasia, S. 1977. "Effects of personalized sponsorship of an additional covering letter on return rate and nature of evaluative response." Psychological Reports 41:151-154.

Ferber, R., and S. Sudman. 1974. "Effects of compensation in consumer expenditure studies." Annals of Economic and Social Measurement 3:319-331.

Ferriss, A. 1951. "A note on stimulating response to questionnaires." American Sociological Review 16:247-249.

Ford, N. 1967. "The advance letter in mail surveys." Journal of Marketing Research 4:202-204.

Ford, N. 1968. "Questionnaire appearance and response rates in mail surveys." Journal of Advertising Research 8:43-45.

Forsythe, J. 1977. "Obtaining cooperation in a survey of business executives." Journal of Marketing Research 14:370-373.

Franzen, R., and P. Lazarsfeld. 1945. "Mail questionnaire as a research problem." Journal of Psychology 20:293-320.

Frazier, G., and K. Bird. 1958. "Increasing the response of a mail questionnaire." Journal of Marketing 23:186-187.

Freed, M. 1964. "In quest of better questionnaires." Personnel and Guidance Journal 43:187-188.

Friedman, H., and L. Goldstein. 1975. "Effect of ethnicity of signature on the rate of return and content of a mail questionnaire." Journal of Applied Psychology 60:770-771.

Friedman, H., and A. Augustine. 1979. "The effects of a monetary incentive and the ethnicity of the sponsor's signature on the rate and quality of response to a mail survey." Journal of the Academy of Marketing Science 7:95-101.

Fuller, C. 1974. "Effect of anonymity on return rate and response bias in a mail survey." Journal of Applied Psychology 59:292-296.

Furse, D., D. Stewart, and D. Rados. 1981. "Effects of foot-in-the-door, cash incentives, and follow-ups on survey response." Journal of Marketing Research 18:473-478.

Furse, D., and D. Stewart. 1982 "Monetary incentives versus promised contribution to charity: New evidence on mail survey response." Journal of Marketing Research 19:375-380.

Futrell, C., and J. Swan. 1977. "Anonymity and response by salespeople to a mail questionnaire." Journal of Marketing Research 14:611-616.

Futrell, C., and C. Lamb. 1981. "Effect on mail survey return rates including questionnaires with follow-up letters." Perceptual and Motor Skills 52:11-15.

Futrell, C., and R. Hise. 1982. "The effects of anonymity and a same-day deadline on the response rate to mail surveys." European Research 10:171-175.

Gannon, M., J. Northern, and S. Carrol. 1971. "Characteristics of nonrespondents among workers." Journal of Applied Psychology 55:586-588.

Godwin, K. 1979. "The consequences of large monetary incentives in mail surveys of elites." Public Opinion Quarterly 43:378-387.

Goldstein, H., and B. Kroll. 1957. "Methods of increasing mail responses." Journal of Marketing 22:55-57.

Goode, W., and P. Hatt. 1962. Methods in Social Research. New York: McGraw-Hill.

Goodstadt, M., L. Chung, R. Kronitz, and G. Cook. 1977. "Mail survey response rates: Their manipulation and impact." Journal of Marketing Research 14:391-395.

Gough, H., and W. Hall. 1977. "A comparison of physicians who did not respond to a postal questionnaire." Journal of Applied Psychology 62:777-780.

Gullahorn, J.E., and J.T. Gullahorn. 1959. "Increasing returns from nonrespondents." Public Opinion Quarterly 23:119-121.

Gullahorn, J.E., and J.T. Gullahorn. 1963. "An investigation of the effects of three factors on response to mail questionnaires." Public Opinion Quarterly 27:294-296.

Gunn, W., and I. Rhodes. 1981. "Physician response rates to a telephone survey: Effects of monetary incentive level." Public Opinion Quarterly 45:109-115.

Hakel, M.D. 1968. "How often is often." American Psychologist 23:533-534.

Hackler, J., and P. Bourgette. 1973. "Dollars, dissonance, and survey returns." Public Opinion Quarterly 37:276-281.

Hanson, R. 1980. "A self-perception interpretation of the effect of monetary and nonmonetary incentives on mail survey respondent behavior." Journal of Marketing Research 17:77-83.

Harris, J., and H. Guffey. 1978. "Questionnaire returns: Stamps versus business reply envelopes revisited." Journal of Marketing Research 15:290-293.

Heaton, E. 1965. "Increasing mail questionnaire returns with a preliminary letter." Journal of Advertising Research 5:36-39.

Heberlein, T., and R. Baumgartner. 1978. "Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature." American Sociological Review 43:447-462.

Hendrick, C., R. Borden, M. Giesen, E. Murray, and B. Seyfried. 1972. "Effectiveness of ingratiation tactics in a cover letter on mail questionnaire response." Psychonomic Science 26:349-351.

Henley, J. 1976. "Response rate to mail questionnaires with a return deadline." Public Opinion Quarterly 40:374-375.

Hinrichs, J. 1975. "Factors related to survey response rates." Journal of Applied Psychology 60:249-251.

Hochstim, J., and D. Athanasopoulos. 1970. "Personal follow-up in a mail survey: Its contribution and its cost." Public Opinion Quarterly 34:69-81.

Holdaway, E. 1971. "Different response categories and questionnaire response patterns." Journal of Experimental Education 40:57-60.

Horowitz, J., and W. Sedlacek. 1974. "Initial returns on mail questionnaires: a literature review and research note." Research in Higher Education 2:261-367.

House, J., W. Gerber, and A. McMichael. 1977. "Increasing mail questionnaire response: A controlled replication and extension." Public Opinion Quarterly 41:95-99.

Houston, M., and R. Jefferson. 1975. "The negative effects of personalization on response patterns in mail surveys." Journal of Marketing Research 12:114-117.

Houston, M., and J. Nevin. 1977. "The effects of source and appeal on mail survey response patterns." Journal of Marketing Research 14:374-378.

Hoyt, J. 1972. Do Quantifying Adjectives Mean the Same Thing to All People? Minneapolis: University of Minnesota Agricultural Extension Service.

Hubbard, R., and E. Little. 1988. "Promised contributions to charity and mail survey responses." Public Opinion Quarterly 52:223-230.

Huck, S., and E. Gleason. 1974. "Using monetary inducements to increase response rates from mailed surveys." Journal of Applied Psychology 59:222-225.

Huffman, H. 1948. "Improving the questionnaire as a tool of research." The National Business Education Quarterly 17:15-18 & 55-61.

Jahoda, M., M. Deutsch, and S. Cook. 1962. Research Methods in Social Relations. New York: Holt, Rinehart and Winston.

James, J., and R. Bolstein. 1990. "The effect of monetary incentives and follow-up mailings on the response rate and responsive quality in mail surveys." Public Opinion Quarterly 54:346-361.

Jones, W. 1979. "Generalizing mail survey inducement methods: Population interactions with anonymity and sponsorship." Public Opinion Quarterly 43:102-111.

Jones, W., and J. Lang. 1980. "Sample composition bias and response bias in a mail survey: A comparison of inducement methods." Journal of Marketing Research 17:69-76.

Jones, W., and G. Linda. 1978. "Multiple criteria effects in a mail survey experiment." Journal of Marketing Research 15:280-284.

Kalton, G., M. Collins, and L. Brook. 1978. "Experiments in wording opinion questions." Journal of the Royal Statistical Society (Series C) 27:149-161.

Kawash, M., and L. Aleamoni. 1971. "Effect of personal signature on the initial rate of return of a mailed questionnaire." Journal of Applied Psychology 55:589-592.

Keane, J. 1963. "Low cost, high return mail surveys." Journal of Advertising Research 3:28-30.

Kernan, J. 1971. "Are 'bulk-rate' occupants really unresponsive?" Public Opinion Quarterly 35:420-422.

Kephart, W., and M. Bressler. 1958. "Increasing the responses to mail questionnaires: A research study." Public Opinion Quarterly 22:123-132.

Kimball, A. 1961. "Increasing the rate of return in mail surveys." Journal of Marketing 25:63-64.

Klein, S., J. Mahler, and R. Dunnington. 1967. "Differences between identified and anonymous subjects in responding to an industrial opinion survey." Journal of Applied Psychology 51:152-160.

Knox, J. 1951. "Maximizing responses to mail questionnaires: A new technique." Public Opinion Quarterly 15:366-367.

Koos, L. 1928. The Questionnaire in Education. New York: Macmillan.

Kraut, A., A. Wolfson, and A. Rothenberg. 1975. "Some effects of position on opinion survey items." Journal of Applied Psychology 60:774-776.

Krosnick, J. 1989. "Poll Review: Question wording and reports of survey results." Public Opinion Quarterly 53:107-113.

Labrecque, D. 1978. "A response rate experiment using mail questionnaires." Journal of Marketing 42:82-83.

Landy, F., and F. Bates. 1973. "The non-effect of three variables on mail survey response rate." Journal of Applied Psychology 58:147-148.

Laurent, A. 1972. "Effects of question length on reporting behavior in the survey interview." Journal of the American Statistical Association 67:298-305.

Layne, B., and D. Thompson. 1981. "Questionnaire page length and return rate." Journal of Social Psychology 113:291-292.

Leslie, L. 1970. "Increasing response rates to long questionnaires." The Journal of Educational Research 63:347-350.

Levine, S., and G. Gordon. 1958-59. "Maximizing returns on mail questionnaires." Public Opinion Quarterly 22:568-575.

Linsky, A. 1965. "A factorial experiment in inducing responses to a mail questionnaire." Sociology and Social Research 49:183-189.

Linsky, A. 1975. "Stimulating responses to mailed questionnaires: A review." Public Opinion Quarterly 39:82-101.

Martin, D., and J. McConnell. 1970. "Mail questionnaire response induction: The effect of four variables on the response of a random sample to a difficult questionnaire." Social Science Quarterly 51:409-414.

Mason, W., R. Dressel, and R. Bain. 1961. "An experiment study of factors affecting response to a mail survey of beginning teachers." Public Opinion Quarterly 25:296-299.

Matteson, M. 1974. "Type of transmittal letter and questionnaire color as two variables influencing response rates in a mail survey." Journal of Applied Psychology 59:535-536.

McCrohan, K., and L. Lowe. 1981. "A cost/benefit approach to postage used on mail questionnaires." Journal of Marketing 45:130-133.

McDaniel, S., and R. Jackson. 1984. "Exploring the probabilistic incentive in mail survey research." p.372-375. AMA Educators' Proceedings. Chicago: American Marketing Association.

McFarland, S. 1981. "Effects of question order on survey responses." Public Opinion Quarterly 45:208-215.

Mooren, R., and J. Rothney. 1956. "Personalizing the follow-up study." Personnel and Guidance Journal 34:409-412.

Moser, C., and G. Kalton. 1971. Survey Methods in Social Investigation. London: Heinemann Educational Books Limited.

Mullner, R., P. Levy, C. Byre, and D. Matthews. 1982. "Effects of characteristics of the survey instrument on response rates to a mail survey of community hospitals." Public Health Reports 97:465-469.

Myers, J., and A. Haug. 1969. "How a preliminary letter affects mail survey returns and costs." Journal of Advertising Research 9:37-39.

Nederhof, A. 1983. "The effects of material incentives in mail surveys: two studies." Public Opinion Quarterly 47:103-111.

Nevin, J., and N. Ford. 1976. "Effects of a deadline and a veiled threat on mail survey responses." Journal of Applied Psychology 61:116-118.

Newman, S. 1962. "Differences between early and late respondents to a mailed survey." Journal of Advertising Research 2:37-39.

Nixon, J. 1954. "The mechanics of questionnaire construction." Journal of Educational Research 47:481-487.

Noelle-Neumann, E. 1970. "Wanted: Rules for wording structured questionnaires." Public Opinion Quarterly 34:191-201.

Norton, J. 1930. "The questionnaire." National Education Association Research Bulletin 8:

Oppenheim, A. 1966. Questionnaire Design and Attitude Measurement. New York: Basic Books.

Orr, D., and C. Neyman. 1965. "Considerations, costs and returns in a large-scale follow-up study." The Journal of Educational Research 58:373-378.

Parsons, R., and T. Medford. 1972. "The effect of advance notice in mail surveys of homogeneous groups." Public Opinion Quarterly 36:258-259.

Paterson, D., and M. Tinker. 1940. How to Make Your Type Readable. New York: Harper & Bros.

Payne, S. 1951. The Art of Asking Questions. Princeton: Princeton University Press.

Peterson, R. 1975. "An experimental investigation of mail-survey responses." Journal of Business Research 3:199-209.

Petty, R., and J. Cacioppo. 1979. "Issue involvement can increase or decrease persuasion by enhancing message-relevant cognitive responses." Journal of Personality and Social Psychology 37:1915-1926.

Phillips, M. 1941. "Problems of questionnaire investigation." Research Quarterly 12:528-537.

Pressley, M., and W. Tullar. 1977. "A factor interactive investigation of mail survey response rates from a commercial population." Journal of Marketing Research 14:108-111.

Pressley, M. 1979. "Care needed when selecting response inducements in mail surveys of commercial populations." Journal of the Academy of Marketing Science 6:336-343.

Price, D. 1950. "On the use of stamped return envelopes with mail questionnaires." American Sociological Review 15:672-673.

Pucel, D., H. Nelson, and D. Wheeler. 1971. "Questionnaire follow-up returns as a function of incentives and responder characteristics." Vocational Guidance Quarterly 19:188-193.

Rasinski, K. 1989. "The effect of question wording on public support for government spending." Public Opinion Quarterly 53:388-394.

Reeder, L. 1960. "Mailed questionnaires in longitudinal health studies: The problem of maintaining and maximizing response." Journal of Health and Human Behavior 1:123-129.

Reid, S. 1942. "Respondents and non-respondents to mail questionnaires." Educational Research Bulletin 21:87-96.

Roberts, R., O. McGory, and R. Forthofer. 1978. "Further evidence on using a deadline to stimulate responses to a mail survey." Public Opinion Quarterly 42:407-410.

Robertson, D., and D. Bellenger. 1978. "A new method of increasing mail survey responses: Contributions to charity." Journal of Marketing Research 15:632-633.

Robin, D., and C. Walters. 1976. "The effect on return rate of messages explaining monetary incentives in mail questionnaire studies." Journal of Business Communication 13:49-54.

Robins, L. 1963. "The reluctant respondent." Public Opinion Quarterly 27:276-286.

Robinson, R., and P. Agisim. 1951. "Making mail surveys more reliable." Journal of Marketing 15:415-424.

Robinson, R. 1952. "How to boost returns from mail surveys." Printer's Ink 239:35-37.

Roscoe, A., D. Lang, and J. Sheth. 1975. "Follow-up methods, questionnaire length, and market differences in mail survey." Journal of Marketing 39:20-27.

Ruckmick, C. 1930. "The uses and abuses of the questionnaire procedure." Journal of Applied Psychology 14:32-41.

Schaeffer, N. 1991. "Hardly ever or constantly: Group comparisons using vague quantifiers." Public Opinion Quarterly 55:395-423.

Schuman, H., and S. Presser. 1977. "Question wording as an independent variable in survey analysis." Sociological Methods & Research 6:151-176.

Schuman, H., and S. Presser. 1981. Questions and Answers in Attitude Surveys. New York: Academic Press.

Schuman, H., and S. Presser. 1978. "The assessment of 'no opinion' in attitude surveys." Pp. 241-275 in Sociological Methodology 1979. San Francisco: Jossey-Bass.

Scott, C. 1961. "Research on mail surveys." Journal of the Royal Statistical Society 124:143-205.

Seitz, R. 1944. "How mail surveys may be made to pay." Printer's Ink 209:17-19.

Sharma, S., and Y. Singh. 1967. "Does the colour pull responses?" Manus: A Journal of Scientific Psychology 14:77-79.

Sheth, J., and M. Roscoe. 1975. "Impact of questionnaire length, follow-up methods, and geographical location on response rate to a mail survey." Journal of Applied Psychology 60:252-254.

Sivan, J., D. Epley, and W. Burns. 1980. "Can follow-up response rates to a mail survey be increased by including another copy of the questionnaire?" Psychological Reports 47:103-106.

Skinner, S., and T. Childers. 1980. "Respondent identification in mail surveys." Journal of Advertising Research 57-61.

Sletto, R. 1940. "Pretesting of questionnaires." American Sociological Review 5:193-200.

Simpson, R. 1944. "The specific meaning of certain terms indicating different degrees of frequency." The Quarterly Journal of Speech 30:328-330.

Slocum, W., L. Empey, and H. Swanson. 1956. "Increasing response to questionnaires and structured interviews." American Sociological Review 21:221-225.

Smith, K. 1977. "Signing off in the right color can boost mail survey response." Industrial Marketing 62:59-62.

Smith, T. 1982. Conditional Order Effects. Chicago: National Opinion Research Center.

Snelling, W. 1969. "The impact of a personalized mail questionnaire." The Journal of Educational Research 63:126-129.

Speer, D., and A. Zold. 1971. "An example of self-selection bias in follow-up research." Journal of Clinical Psychology 27:64-68.

Suchman, E., and B. McCandless. 1940. "Who answers questionnaires?" Journal of Applied Psychology 24:758-769.

Sudman, S., and N. Bradburn. 1974. Response Effects in Surveys. Chicago: Aldine.

Swasy, J., and J. Munch. 1985. "Examining the target of receiver elaborations: Rhetorical question effects on source processing and persuasion." Journal of Consumer Research 11:877-886.

Tedin, K., and R. Hofstetter. 1982. "The effect of cost and importance factors on the return rate for single and multiple mailings." Public Opinion Quarterly 46:122-128.

Toops, H. 1924. "Validating the questionnaire method." Journal of Personnel Research 2:153-169.

Tourangeau, R., and K. Rasinski. 1988. "Cognitive processes underlying context effects in attitude measurement." Psychological Bulletin 103:299-314.

Veiga, J. 1974. "Getting the mail questionnaire returned: Some practical research considerations." Journal of Applied Psychology 59:217-218.

Vocino, T. 1977. "Three variables in stimulating responses to mailed questionnaires." Journal of Marketing 41:76-77.

Walker, B., and R. Burdick. 1977. "Advance correspondence and error in mail surveys." Journal of Marketing Research 14:379-382.

Walonick, D. 1993. StatPac Gold IV: Survey & Marketing Research Edition. Minneapolis, MN: StatPac Inc.

Watkins, D. 1978. "Relationship between the desire for anonymity and responses to a questionnaire on satisfaction with university." Psychological Reports 42:259-261.

Watson, J. 1965. "Improving the response rate in mail research." Journal of Advertising Research 5:48-50.

Whitmore, W. 1976. "Mail survey premiums and response bias." Journal of Marketing Research 13:46-50.

Wildman, R. 1977. "Effects of anonymity and social setting on survey responses." Public Opinion Quarterly 41:74-79.

Wiseman, F. 1973. "Factor interaction effects in mail survey response rates." Journal of Marketing Research 10:330-333.

Wotruba, T. 1966. "Monetary inducement and mail questionnaire response." Journal of Marketing Research 3:398-400.

Yu, J., and H. Cooper. 1983. "A quantitative review of research design effects on response rates to questionnaires." Journal of Marketing Research 20:36-44.

Zillmann, D. 1972. "Rhetorical elicitation of agreement in persuasion." Journal of Personality and Social Psychology 21:159-165.

Copyright 2017 StatPac Inc., All Rights Reserved