Steps to Follow When Designing a Survey
© 1995 David S. Walonick
How to define the goals of the research
Many people begin a survey by writing the questionnaire items. This is wrong. Don't do it. If you begin by writing the questionnaire items, you'll lose sight of what's important. Your survey will become a fishing expedition, with a lot of tributaries leading nowhere.
Well-designed surveys always begin by committing your research questions to writing. Research questions are not the same as the questionnaire items. Research questions are global in nature. They are the goals and objectives of the study. Questionnaire items, on the other hand, are designed to help answer the global research questions.
Defining the goals and objectives of a research project is one of the most important steps in the research process. Do not underestimate the importance of this step. Clearly stated goals keep a research project focused. The process of goal definition begins by writing down the broad and general goals of the study. As the process continues, the goals become more clearly defined and the research issues are narrowed.
Here are a few examples of how you might phrase your goals and objectives:

The goal of this study is to describe the characteristics of our customers and their satisfaction with our service.
The objective of this research is to identify the factors that influence employee turnover.
The goals of the study are easily transformed into research questions. Once again, research questions are global and broad, and they are not the same as the questionnaire items. There are basically two kinds of research questions: testable and non-testable. Neither is better than the other, and both have a place in survey research.
"Non-testable" means that the research question cannot be answered by performing a statistical test. The answers to these questions might be important to know, but the decision making criteria does not involve a statistical test.
Examples of non-testable research questions are:

How satisfied are our customers with the service they receive?
What do employees consider to be the most important factors in job satisfaction?

Respondents' answers to these questions can be summarized in descriptive tables, and the results might be extremely valuable to administrators and planners. Business and social science researchers often ask non-testable research questions. The shortcoming of these types of questions is that they do not provide objective cut-off points for decision-makers.
For example, imagine that we've done our survey, and now we need to decide what constitutes satisfactory service. Each of us might give a different answer. There is no exact cutoff point where we would say "yes" our customers are satisfied, or "no" they are not. When we ask questions like this, it's important to establish a decision-making guideline before doing the survey, such as deciding in advance that we will call our customers satisfied if at least 80 percent rate our service 4 or higher on a 5-point scale.
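As a rough illustration, here is a minimal Python sketch of applying such a pre-established guideline. The ratings, the "4 or higher counts as satisfied" rule, and the 80 percent cutoff are all made-up assumptions for the example, not values prescribed by the text:

    # Minimal sketch: applying a decision guideline that was established
    # before the survey was run. All numbers are made-up assumptions.
    ratings = [5, 4, 3, 5, 4, 2, 5, 4, 4, 3]  # 1-5 satisfaction ratings

    cutoff = 0.80  # decided before the survey, not after
    satisfied = sum(1 for r in ratings if r >= 4) / len(ratings)

    verdict = "satisfied" if satisfied >= cutoff else "not satisfied"
    print(f"{verdict} ({satisfied:.0%} rated 4 or higher)")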
How to write testable research questions
In business research, it is perhaps more important to ask questions that involve decision-making criteria. Business research usually seeks to answer one or more testable research questions. Nearly all testable research questions begin with one of the following two phrases:

Is there a significant difference between ...?
Is there a significant relationship between ...?
Examples of testable research questions are:

Is there a significant difference between the salaries of male and female employees?
Is there a significant relationship between age and job satisfaction?

A research hypothesis is a testable statement of opinion. It is created from the research question by replacing the words "Is there" with the words "There is", and also replacing the question mark with a period. The hypotheses for the two sample research questions would be:

There is a significant difference between the salaries of male and female employees.
There is a significant relationship between age and job satisfaction.
It is not possible to test a hypothesis directly. Instead, you must turn the hypothesis into a null hypothesis. The null hypothesis is created from the hypothesis by adding the words "no" or "not" to the statement. For example, the null hypotheses for the two examples would be:

There is no significant difference between the salaries of male and female employees.
There is no significant relationship between age and job satisfaction.
All statistical testing is done on the null hypothesis...never the hypothesis. The result of a statistical test will enable you to either 1) reject the null hypothesis, or 2) fail to reject the null hypothesis. Never use the words "accept the null hypothesis". When you say that you "reject the null hypothesis", it means that you are reasonably certain that the null hypothesis is wrong. When you say that you "fail to reject the null hypothesis", it means that you do not have enough evidence to claim that the null hypothesis is wrong.
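To make the reject / fail-to-reject decision concrete, here is a minimal Python sketch that tests the first null hypothesis above on made-up salary data. The use of scipy and an independent-samples t-test is an assumption for illustration; the text does not prescribe a particular test:

    # Minimal sketch: testing "There is no significant difference between
    # the salaries of male and female employees" with an independent-samples
    # t-test. All salary figures are made-up illustration data.
    import numpy as np
    from scipy import stats

    male_salaries = np.array([52000, 48000, 61000, 55000, 58000, 47000])
    female_salaries = np.array([50000, 45000, 53000, 49000, 51000, 46000])

    t_stat, p_value = stats.ttest_ind(male_salaries, female_salaries)

    alpha = 0.05  # significance level chosen before the study
    if p_value < alpha:
        print(f"p = {p_value:.3f}: reject the null hypothesis")
    else:
        print(f"p = {p_value:.3f}: fail to reject the null hypothesis")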
How to test the validity of a survey

There are many different ways to design the questions for a survey. The problem is: how can we check the validity of the survey? How can we determine if a survey is actually measuring what it's supposed to measure?
There are no statistical tests for validity. When a survey is "validated" it means that the researcher has come to the opinion that the survey is measuring what it was designed to measure, or the researcher has received a statement from another researcher indicating that they believe the instrument is measuring what it was designed to measure. Validity is an opinion; nothing more.
Here is a quote from our statistics book Survival Statistics:

"Most statistics textbooks will tell you that the way to 'debug' a questionnaire is to send the questionnaire to a sample of about 30 potential respondents and evaluate their responses for potential problems. We have found that this rarely yields useful information."
Instead, we recommend the following method to get the kinks out of your survey. It's fast, costs nothing, and you'll get immediate and valuable feedback that can be used to improve your instrument.
How to test the reliability of a survey

Reliability is synonymous with repeatability. A measurement that yields consistent results over time is said to be reliable. When a measurement is prone to random error, it lacks reliability. The reliability of an instrument places an upper limit on its validity: a measurement that lacks reliability will also lack validity. There are three basic methods to test reliability: test-retest, equivalent form, and internal consistency.
A test-retest measure of reliability can be obtained by administering the same instrument to the same group of people at two different points in time. The degree to which both administrations agree is a measure of the reliability of the instrument. This technique for assessing reliability suffers from two possible drawbacks. First, a person may have changed between the first and second measurement. Second, the initial administration of an instrument might in itself induce a person to answer differently on the second administration.
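As a sketch, test-retest reliability can be computed as the correlation between the two administrations. The scores below are made-up illustration data, and scipy is an assumed dependency:

    # Minimal sketch: test-retest reliability as the Pearson correlation
    # between two administrations of the same instrument to the same people.
    import numpy as np
    from scipy import stats

    time1 = np.array([12, 15, 9, 20, 17, 11, 14, 18])   # first administration
    time2 = np.array([13, 14, 10, 19, 18, 10, 15, 17])  # second administration

    r, p = stats.pearsonr(time1, time2)
    print(f"test-retest reliability r = {r:.2f}")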
The second method of determining reliability is called the equivalent-form technique. The researcher creates two different instruments designed to measure identical constructs. The degree of correlation between the instruments is a measure of equivalent-form reliability. The difficulty in using this method is that it may be very difficult (and/or prohibitively expensive) to create a totally equivalent instrument.
The most popular methods of estimating reliability use measures of internal consistency. When an instrument includes a series of questions designed to examine the same construct, the questions can be arbitrarily split into two groups. The correlation between the two subsets of questions is called the split-half reliability. The problem is that this measure of reliability changes depending on how the questions are split. A better statistic, known as Cronbach's alpha, is based on the mean (absolute value) interitem correlation for all possible variable pairs. It provides a conservative estimate of reliability, and generally represents the lower bound to the reliability of a scale of items. For dichotomous nominal data, the KR-20 (Kuder-Richardson) is used instead of Cronbach's alpha.
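As a sketch of both internal-consistency measures, here is a minimal Python example that computes a split-half correlation and Cronbach's alpha (using the common variance-based formula) for a made-up respondents-by-items score matrix:

    # Minimal sketch: split-half reliability and Cronbach's alpha.
    # Rows = respondents, columns = items measuring the same construct;
    # all scores are made-up illustration data.
    import numpy as np
    from scipy import stats

    scores = np.array([
        [4, 5, 4, 4, 5, 4],
        [2, 3, 2, 3, 2, 2],
        [5, 5, 4, 5, 5, 5],
        [3, 2, 3, 3, 2, 3],
        [4, 4, 5, 4, 4, 4],
    ])

    # Split-half: correlate the sums of the odd and even items.
    # Note that the result changes with the choice of split.
    odd_half = scores[:, 0::2].sum(axis=1)
    even_half = scores[:, 1::2].sum(axis=1)
    split_half_r, _ = stats.pearsonr(odd_half, even_half)

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of
    # the total score). For dichotomous (0/1) items, alpha reduces to KR-20.
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    cronbach_alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    print(f"split-half r = {split_half_r:.2f}, alpha = {cronbach_alpha:.2f}")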