Ten Tangible and Practical Tips to Improve Student Participation in Web Surveys

William R. Molasso
Assistant Professor
Counseling, Adult and Higher Education
Northern Illinois University
billym@niu.edu

Posted: November 2005     Student Affairs Online, vol. 6 no. 4 - Summer 2005

The Student Affairs On-Line article You Still Need High Response Rates with Web-Based Surveys (Malaney, 2002) reviewed the importance of obtaining a high rate of response among participants in Web surveys. One of the greatest risks in using Web surveys to collect data for assessment or research is the unknown influence Web-based techniques may have on participants’ response rates (Crawford, Couper, & Lamias, 2001). There is a significant need for new and more robust data collection techniques, for as Krosnick (1999) observed, “response rates for most major national surveys have been falling during the past four decades” (p. 539). Crawford et al. believed that nonresponse represents the main challenge for Web-based surveys. Because of the concern about response rates generally, and with Web-based methods specifically, a number of researchers have offered suggestions for improving response rates in this methodology. However, much of this contemporary literature addresses the larger issues of data collection on the Web; tangible and practical tips for increasing college student response to Web surveys are scarce. Based on my use of Web surveys to complete original research and to conduct program evaluation and assessment, a number of practical methods have emerged that enhance the likelihood that students will respond to a Web survey. Following are ten tangible and practical lessons I have learned for improving Web survey response rates.

 

1.      Immediately Identify Why the Student Is Getting the Email. With the rapid expansion of spam and junk email, students have become very used to quickly deciding whether to read or delete the emails in their in-box. When the student first sees the email in the in-box or reads its first line, it is important that he or she can immediately tell it is not spam. Use obvious cues that straightforwardly set the email apart from what could be junk mail. For example, use the name of your institution in the subject line. Or, if you are focusing on a particular group of students, let them know. Students are more likely to read an email with a first line of “As a fraternity member at XYZ University…” than “You have been randomly selected to participate in…” or, worse, the typical junk email introduction of “You have been pre-selected for a fabulous prize.” The more closely you can connect the email with something with which the student identifies, the less likely the email will end up in the trash bin. Additionally, emails that begin with the name of the participant generally improve response rates (Cook, Heath, & Thompson, 2000). Personalization simply means opening the email with the person’s name (“Dear Billy”) and then proceeding to invite the person to participate in the study.
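To make the mechanics concrete, the following is a minimal sketch of how a personalized mailing might be scripted in Python. The mail server, sender address, subject line, and sample list below are all hypothetical placeholders, and most commercial survey packages will handle this step for you; the point is simply that each message carries the institution’s name in the subject line and the student’s name in the greeting.

    # A minimal sketch of a personalized invitation mailing (hypothetical
    # server, sender, and sample list; adjust to your own institution).
    import smtplib
    from email.mime.text import MIMEText

    SMTP_HOST = "smtp.example.edu"        # placeholder mail server
    FROM_ADDR = "assessment@example.edu"  # placeholder sender address

    sample = [                            # hypothetical sample list
        ("Billy", "billy@example.edu"),
        ("Maria", "maria@example.edu"),
    ]

    with smtplib.SMTP(SMTP_HOST) as server:
        for first_name, address in sample:
            body = (
                f"Dear {first_name},\n\n"
                "As a fraternity member at XYZ University, you are invited "
                "to participate in a brief survey about campus life.\n"
            )
            msg = MIMEText(body)
            # The institution's name in the subject line signals that
            # this is not junk mail.
            msg["Subject"] = "XYZ University Fraternity Life Survey"
            msg["From"] = FROM_ADDR
            msg["To"] = address
            server.send_message(msg)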

 

2.      Survey Length and Number of Pages. It is important to be very conscious of the overall length and the number of different screens or pages of the survey. A basic rule of thumb is to design Web surveys the same way you would paper-and-pencil surveys. Few students would complete a 15-page survey, either online or in writing. Limit the scope of the project to something more manageable for participants, with 4-8 pages being a general upper limit. Use page or screen breaks where they would naturally fall in paper-and-pencil surveys.

 

3.      Test the Web survey on Different Computers. Create very simple layouts for your Web survey. Don’t use flashy pictures or formats, unusual fonts, etc. Remember, users accessing the Web page via dial-up or with older computers and Web browsers may have difficulty downloading and using a survey that includes such elements. Once you have designed your Web survey, test the format on different computers and platforms. Different Web browsers (e.g., Netscape, Internet Explorer, or Safari) may show the same page differently. Variations in screen resolution, monitors, and settings may also change how things appear on different computers. Macs and PCs may show the same page in very distinctive ways. 

 

4.      Do Mini-Projects First. Become familiar with whatever software you are using in a mini-project before using it in a larger undertaking. For example, try an attitudes survey with a small number of students, or develop a quick and easy staff survey about end-of-the-year plans. Use these kinds of mini-projects to become comfortable with the design and function of the Web survey software. You will inevitably learn from your mistakes, and it is better to make the most obvious ones on smaller, less critical projects than on a larger assessment endeavor.

 

5.      Be Honest in How Long It Will Take. Invitations to participate in Web surveys often indicate the survey will take only 3-5 minutes, when it frequently takes 15 minutes or more. In this circumstance, participants may stop in the middle of the survey because it was not what they expected. Had they known it would take 15 to 20 minutes, most of those responders probably still would have taken the survey, but they would have planned for that time commitment and actually completed the whole thing. Be honest in the estimate of how long it will actually take the participant to complete the survey.
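One practical way to arrive at an honest estimate is to time your pilot testers (see Tip 10) and report the typical figure. The short Python sketch below assumes you recorded, or can export, each pilot tester’s start and finish times; the timestamps and format shown are hypothetical.

    # A minimal sketch: estimate completion time from pilot start/finish
    # times (hypothetical data exported from the survey software).
    from datetime import datetime
    from statistics import median

    pilot_sessions = [
        ("2005-10-06 10:02", "2005-10-06 10:14"),
        ("2005-10-06 11:30", "2005-10-06 11:47"),
        ("2005-10-06 13:05", "2005-10-06 13:21"),
    ]

    fmt = "%Y-%m-%d %H:%M"
    minutes = []
    for start, finish in pilot_sessions:
        elapsed = datetime.strptime(finish, fmt) - datetime.strptime(start, fmt)
        minutes.append(elapsed.total_seconds() / 60)

    # The median keeps one distracted tester from skewing the estimate.
    print(f"Typical completion time: about {median(minutes):.0f} minutes")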

 

6.      Use Three Email Invitations Only. Cook et al. (2000) found that the greatest response rates were obtained when three emailed contacts were used, and that additional contacts did not necessarily increase response rates. In addition, contact after the third email could be considered intrusive. Emails beyond three frequently result in a large number of participants replying to the author in a fairly negative tone.

 

7.      Consider Usage Patterns When Scheduling Emails. When scheduling the email invitations, remember to take into account the normal and differing usage patterns of students. In most studies I have completed using Web surveys, participants responded within a few hours of the emailed invitations, or not at all. Almost 97% of all responses for my dissertation study came in a 6-hour window after each of the three emailed invitations was sent. When you create your schedule of email distribution, send the first on a Thursday morning, the second on a Saturday afternoon, and the third on a Monday late morning. Another option would be sending the first email invitation on a Monday afternoon, with reminders to non-responders on Thursday morning and again on Saturday. These schedules account for different usage patterns across the week and the weekend, as well as for users who check email only at particular times of the day. It is also important to consider student time commitments and other influences on the outcomes of the study. Stay well away from the end of the semester, holiday breaks, major events (football game weekends, concerts), and mid-terms. For institutions on semester systems, October through mid-November and mid-January through February are often the ideal times to collect data. Data can be collected at other times, but those periods seem most likely to produce higher student participation.

 

8.      Make Sure You Know to Whom the Invitation Is Sent. A recent clerical error in a Web survey administered on a college campus resulted in the invitation to participate being emailed to all students at the institution, multiple times in just a few days. While mistakes do happen, it is important to be conscious of the choices you have selected in your survey software before you actually send the invitation. Send the email to participants on the sample list only, and send reminder emails only to those participants who have not yet responded. Verifying who will receive the email, when, and how often will minimize the number of negative replies you receive in your own in-box.
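If your survey software does not filter reminders for you, the simple Python sketch below shows the idea: compare the original sample list with the respondent list and contact only the difference. The file names and the one-address-per-line format are hypothetical; export whatever your survey package actually produces.

    # A minimal sketch: build a reminder list of non-responders only
    # (hypothetical file names; one email address per line).
    def load_addresses(path):
        """Read one address per line, ignoring blanks and case."""
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    sample = load_addresses("sample_list.txt")     # everyone invited
    responded = load_addresses("respondents.txt")  # everyone who finished

    reminder_list = sorted(sample - responded)     # non-responders only

    print(f"Reminding {len(reminder_list)} of {len(sample)} students")
    for address in reminder_list:
        print(address)  # or feed this list back into the mailing routine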

 

9.      Provide Some Incentive to Participate. Dillman (2000), a well-respected expert in paper-and-pencil survey design, believes that providing incentives can increase response rates dramatically. Drawing on the theory of social exchange, he argues that giving each respondent a token of appreciation, such as a $1 bill or a pen, can have a remarkable impact on the choice to participate. Unlike with paper-and-pencil surveys, providing a $1 bill or other token of appreciation to every participant is somewhat impractical in Web surveys, as well as rather costly. However, it is possible to enter respondents into a drawing that may entice participation. Try offering a drawing for a $150 gift certificate to a major retailer, the local bookstore, or any store in the local mall. In situations of very limited funding, it may be best to approach a local business for a donation, such as a new DVD player, a membership in a health club, or some other item that will appeal to students. Find out what your students consider “valuable,” and figure out a way to offer that as the prize.
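For the drawing itself, a minimal Python sketch follows; the respondent file and the prize named in the message are hypothetical placeholders.

    # A minimal sketch: draw one winner at random from the respondent list
    # (hypothetical file name; one email address per line).
    import random

    with open("respondents.txt") as f:
        respondents = [line.strip() for line in f if line.strip()]

    # random.choice gives every respondent an equal chance.
    winner = random.choice(respondents)
    print(f"Winner of the $150 gift certificate drawing: {winner}")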

 

10.  Pilot Test. It is critical that you test your Web survey with a small group of people prior to its wider use. Ask other staff members or a small set of students with whom you have regular contact to take the survey and provide feedback to you. Did it work? Did they understand what to do? Were there questions they did not understand or that should be changed? Additionally, test the data analysis portion of the project with the pilot data. Can you download the data and use it? Are you going to get what you thought you would get? As part of the pilot test, also repeat the cross-platform checks described in Tip 3: view the survey in different Web browsers, on different monitors and screen resolutions, and on both Macs and PCs.
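To illustrate the data check, here is a minimal Python sketch that reads a downloaded pilot export and confirms the columns and values look like what you expected. The CSV file name and column names are hypothetical; substitute whatever your survey software actually exports.

    # A minimal sketch: sanity-check the pilot data export (hypothetical
    # file and column names).
    import csv

    expected_columns = ["respondent_id", "q1", "q2", "q3"]

    with open("pilot_export.csv", newline="") as f:
        reader = csv.DictReader(f)
        missing = [c for c in expected_columns
                   if c not in (reader.fieldnames or [])]
        rows = list(reader)

    print(f"Rows downloaded: {len(rows)}")
    print(f"Missing columns: {missing or 'none'}")
    # Spot-check one question to confirm the values are what you expected.
    if rows:
        print("Sample q1 values:", [r.get("q1") for r in rows[:5]])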

 


References

 

Cook, C., Heath, F., & Thompson, R. L. (2000). A meta-analysis of response rates in Web or Internet-based surveys. Educational and Psychological Measurement, 60, 821-836.

Crawford, S. D., Couper, M. P., & Lamias, M. J. (2001). Web surveys: Perceptions of burden. Social Science Computer Review, 19, 146-162.

Dillman, D. A. (2000). Mail and Internet surveys: The tailored design method (2nd ed.). New York: Wiley.

Krosnick, J. A. (1999). Survey research. Annual Review of Psychology, 50, 537-567.

Malaney, G. D. (2002). You still need high response rates with Web-based surveys. Student Affairs On-Line, 3(1). Retrieved September 22, 2005, from http://studentaffairs.com/ejournal/Winter_2002/rates.html

Sills, S. J., & Song, C. (2002). Innovations in survey research. Social Science Computer Review, 20, 22-30.

Upcraft, M. L., & Wortman, T. I. (2000). Web-based data collection and assessment in student affairs. Student Affairs On-Line, 1(3). Retrieved September 22, 2005, from http://studentaffairs.com/ejournal/Fall_2000/art1.html