
Although usability testing can apply to all types of products, for survey research it is best described as a method for measuring how well interviewers and respondents can use a computer-assisted interview, such as a CAPI, CATI, CASI, or Web-based survey, for its intended purpose. It is important to separate usability testing from functionality testing, which focuses only on the proper operation of a computerized instrument (software and hardware), not on the individual using the system. The purpose of usability testing is to determine whether the instrument being used to collect data helps or hinders the user in accomplishing the task it was designed for.

In developing and designing survey instruments, researchers have always strived to make data collection instruments as effective as possible through a variety of testing and evaluation methods applied prior to data collection. Traditionally, cognitive interviewing and other cognitive methods have provided important tools for examining the thought processes that affect the quality of answers survey respondents give. In addition, question appraisal systems provide a structured, standardized instrument review methodology that helps a survey design expert evaluate questions relative to the tasks they require of respondents, specifically with regard to how respondents understand and respond to survey questions. Focus groups can be used to obtain qualitative data that provide insight into attitudes, perceptions, and opinions on a given topic or instrument. Although all of these efforts have long been important to understanding how respondents perceive survey questions and their wording, the increased use of computer-assisted data collection has called for yet another form of instrument testing.

Computerized instruments are generally thought to be easier on respondents and interviewers than paper questionnaires: pre-programmed skip patterns and automated progress through an instrument remove the time it takes to manually follow routing instructions, turn pages, and edit or calculate responses. In practice, however, computerized instruments can be more difficult to use than their paper counterparts because of complicated instructions, self-editing, navigational problems, and general layout. Usability testing can measure the time it takes to complete particular tasks in an instrument and whether these factors contribute to increased respondent burden. Because burden is tied to stress and respondent fatigue, which can contribute to respondent attrition, identifying and reducing sources of burden can help improve response rates. In addition, usability testing can increase the reliability and validity of survey instruments by examining features such as error messages and other feedback, instructions, and the placement of navigational elements ("next" buttons, etc.), and assessing whether they help, confuse, encourage, or discourage respondents. The same examinations can also assist interviewers. Usability testing can also reveal how a computerized instrument affects the burden, emotions, and motivation of interviewers, which in turn can have a positive impact on the quality of the data they collect.
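The task-timing measurement described above can be illustrated with a minimal sketch. The following Python snippet (not from the source; the class and task names are hypothetical, for illustration only) shows one simple way a researcher might log how long a test participant spends on each screen or task in a computerized instrument:

```python
import time


class TaskTimer:
    """Records how long a usability-test participant spends on each task.

    Hypothetical helper for illustration; real usability labs typically
    rely on dedicated logging or session-recording software.
    """

    def __init__(self):
        self.durations = {}  # task name -> elapsed seconds
        self._task = None
        self._start = None

    def start(self, task_name):
        # Mark the beginning of a task (e.g., one instrument screen).
        self._task = task_name
        self._start = time.perf_counter()

    def stop(self):
        # Record and return the elapsed time for the current task.
        elapsed = time.perf_counter() - self._start
        self.durations[self._task] = elapsed
        return elapsed


# Example session with a hypothetical task name
timer = TaskTimer()
timer.start("answer_income_question")
# ... participant works through the screen ...
timer.stop()
```

Comparing such per-task durations across participants, or across alternative instrument designs, is one concrete way the burden differences discussed above can be quantified.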

It is generally agreed that a high-quality usability test should be conducted in a closed laboratory setting. Many researchers conduct usability testing in cognitive laboratories with common features such as one-way mirrors for observation. Testing can be further enhanced through the use of multiple cameras and recording devices: with multiple cameras, researchers can capture both a user's hands on the computer keyboard and the user's face. This is especially useful for examining nonverbal cues, such as facial expressions or body language, that indicate burden or difficulty with a given task. With microphones, researchers can record and analyze any comments users make during testing. Devices such as scan converters, or computers equipped with screen-recording software, are useful for capturing images from a user's screen during the session. Video processors and editing equipment can then take the feeds from all recording sources, synchronize them, and combine them so that the three images can either be viewed in real time or recorded for later viewing, coding, and analysis.

...
