This case concerns the use of brokered mailing lists to identify and correspond with potential respondents in a research study. Although the case describes a mail survey only, the processes of defining the relevant population and then refining the mailing list are just as relevant to Internet email lists obtained from list brokers or from organizations. Likewise, the processes of examining the list and the returned surveys for representativeness of the chosen population, and of dealing with non-response and partial response, are equally applicable to Internet surveys. Surveys of samples drawn from a population of consumers, professionals, and/or types of organizations are necessary to allow important questions of business management to be examined. This case study presents a systematic process for defining and refining lists of potential sample members at the pre-mailing stage and then examines the issues involved in establishing the reliability and generalizability of the returned surveys prior to data analysis.
By the end of this case, students should be able to
- Understand the importance to generalizability of results of proper attention to refining a list of possible subjects
- Understand how to refine a purchased list so that the mailing matches the sample characteristics you want
- Understand how to draw a systematic random sample from a list of possible subjects
Some years ago, I (Shelley R. Tapp) and a student of mine (Mark Arnold) decided to look at the use by arts organizations of direct marketing techniques. We wanted to see whether specific characteristics could affect the success of such organizations in raising the funding that enables them to continue providing services to their communities. The variables that we examined included the type of organization (performing arts company versus museum); typical success in ticket/membership sales; ability to generate local and federal grants; and the organization's size and the training of its chief executive or marketing director. Our hypotheses were, in general, that success breeds success; in other words, that larger arts organizations with directors who were experienced and professionally trained in marketing would be more successful on the performance variables we intended to examine.
We had applied for a competitive research grant in a program offered by our university. We felt that a well-designed study of not-for-profit organizations was more likely to receive funding from a not-for-profit organization. We especially believed that designing a study of not-for-profit organizations improved our odds of winning one of the grants because the professors who would serve as judges for the university would come from a broad spectrum of disciplines. We were also aware of a mistaken but prevalent belief that business school researchers have recourse to much greater research funds than researchers in some other disciplines. So our choice of subject was, at least in part, a reflection of our desire to enhance the probability of being among the awardees. Both researchers also had a sustained love of the arts that continues to this day.
A consideration not directly related to this case study, but nonetheless important, is to be certain you know the issues that a prospective funding organization wants to support, that you choose funding organizations wisely, and that you read grant applications carefully. There are search engines that contain information about funding organizations, and you greatly increase your odds of funding if you find the right one: an organization interested in funding the kind of study you wish to complete. As with this study, your choice of funding organization may greatly influence your choice of population to study. Luckily, many research topics can be examined in many different kinds of organizations.
An article by Mark Arnold and Shelley Tapp (2003) illustrates the issues involved with the use of brokered lists for survey research. Brokered lists are lists obtained by paying a fee to a for-profit organization to construct a list from databases that the organization maintains. The fee is determined by the number of descriptors you wish the firm to use when creating the list. While mail surveys allow researchers to canvass representatives of a large population, increasing both the precision of sample measures and the generalizability of analysis results, systematic procedures must be followed to ensure the randomness of the sample and its representation of the population of interest. Once a researcher has developed and pretested the survey instrument, the following steps illustrate the process of systematic attention to improving the sample frame.
- Decide the framework of the sample. What kind of people/organizations do you want to study?
- Identify relevant characteristics to define participants sought. What are the most important characteristics that describe the desired participants in the study?
- Using the characteristics from step 2, purchase a list from a broker or obtain one by some other method, such as working with free mailing lists from organizations interested in participating in your study.
- Cull from the list any potential responders who do not fit the relevant characteristics defined in step 2.
- Draw a systematic random sample from the remaining potential responders in the list.
- Check the resulting random sample for distributions of the potential responders that would skew the representativeness of the sample, for example, a sample that does not represent the geographic scope of the study as desired.
- Prepare the documents to distribute to the sample members and mail the documents.
- Set a specific date to send reminders to the sample members to encourage the return of the surveys and plan on sending a second mailing after a set time (usually 2–4 weeks).
- Inspect returned surveys for completeness and base the response rate on the number of completed surveys.
- Inspect the completed surveys for representation of the desired population and for the analysis technique you intend to use.
The survey by Arnold and Tapp (2003) used a list purchased from the Dun & Bradstreet Information Database. The authors wished to survey chief executives of arts organizations primarily engaged in the operation of museums and in the performing arts. Using Standard Industrial Classification (SIC) codes (since replaced by the North American Industry Classification System [NAICS]), we described the types of organizations for Dun & Bradstreet to include in the resulting list. The NAICS is a standard set of industry codes that allows greater precision in specifying an industry when searching for published data or information about a specific type of business. The resulting brokered list contained more than 4,000 organizations located nationwide. However, the list also included organizations that were not relevant to the study, for example, artist unions and historical societies. These organizations were painstakingly removed from the brokered list to create the sample frame for the mailing. This process reduced the potential size of the sample to 1,310.
Next, a systematic random sample was drawn from the 1,310 organizations, resulting in a final sample size of 600 firms. A systematic random sample begins with the random choice of a starting point in the list of possible subjects, which can be made using a random number generator. Say the random number generated in this fashion is seven: the researcher picks the seventh addressee on the mailing list as a starting point. The researcher then uses a fixed interval to choose the remaining members of the sample. Assume that interval is four: the researcher begins with the 7th name, then takes the 11th name in the list, then the 15th, and so forth. The size of the interval is determined by the size of the list of prospective organizations and the size of the sample the researcher wishes to draw. Another inspection of the resulting sample confirmed that it was nationally representative of the organizations to be studied.
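The sampling procedure just described can be sketched in a few lines of Python. The frame size (1,310) and sample size (600) come from the study; the organization labels are made up for illustration, and the computed interval here is 2 (1,310 ÷ 600), not the 4 used in the worked example above.

```python
import random

def systematic_sample(frame, sample_size, rng=random):
    """Systematic random sample: random start, then a fixed interval."""
    interval = len(frame) // sample_size      # 1310 // 600 -> interval of 2
    start = rng.randrange(interval)           # random start in [0, interval)
    return [frame[start + i * interval] for i in range(sample_size)]

# Hypothetical sample frame of 1,310 culled organizations
frame = [f"org_{i:04d}" for i in range(1310)]
sample = systematic_sample(frame, 600)
print(len(sample))  # 600 evenly spaced organizations
```

Because every member after the random start is determined by the interval, the whole frame is covered evenly, which is what makes the later geographic-representativeness check meaningful.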
The documents for the study were prepared. These consisted of the questionnaire and a cover letter explaining the purpose of the study and a promise that the data would be used in the aggregate and that respondents’ organizations would not be identified by name in the coding of data or the analysis of that data. The preparation and testing of the questionnaire, although obviously important to the quality of the analysis, is not within the scope of this monograph on the use of brokered lists in surveys.
However, the procedures for validating the returned surveys are very relevant to this monograph. Among the first returned surveys were 46 returned for incorrect addresses, which reduced the effective mailing to 554 organizations. Of those 554 surveys, 217 responses were returned. These surveys were edited and coded, resulting in 13 surveys being classified as unusable for various reasons. The final collection of 204 organizations represented a response rate of roughly 37%.
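The response-rate bookkeeping is simple arithmetic, but it is worth making explicit, since the denominator shrinks with undeliverable addresses while the numerator shrinks with unusable returns (all figures below are from the study):

```python
mailed = 600           # systematic random sample
undeliverable = 46     # returned for incorrect addresses
returned = 217         # responses received
unusable = 13          # discarded during editing and coding

effective_mailing = mailed - undeliverable   # 554
usable = returned - unusable                 # 204
rate = usable / effective_mailing
print(f"{usable}/{effective_mailing} = {rate:.1%}")  # 204/554 = 36.8%
```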
Eventually, after the reminder mailing, the return of questionnaires diminished and then ended. However, we needed to estimate the possibility of systematic differences between responders and non-responders, as such differences could affect the validity of the analysis. Critical variables of the study were compared across the first 50 respondents and the last 50 respondents, the last 50 serving as a surrogate for the non-responding organizations. The variables included performance of the organizations (for example, number of performances/exhibitions produced during the average year) and measures of the organizations' demographics: the type of organization (performing arts company or museum) and average annual funding from the federal government. There were no significant differences between the first 50 responders and the last 50 responders on these variables, verified by quantitative comparisons of means and standard deviations. A final check of zip codes revealed no obvious difference in the geographical distribution of the first and last responders. While this method assumes that the non-responders can be represented by the last responders (Armstrong & Overton, 1977), it did not seem that any strongly significant difference existed that could impair the validity of the study.
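A minimal sketch of this wave-analysis comparison, using only the standard library. The Welch t statistic stands in for whatever comparison of means and standard deviations the original study actually used, and the simulated performance counts are purely illustrative:

```python
import random
from statistics import mean, stdev

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / (va / len(a) + vb / len(b)) ** 0.5

# Illustrative data: annual performances/exhibitions for the first and
# last 50 responders (simulated; the real values came from the surveys)
rng = random.Random(42)
first_50 = [rng.gauss(40, 12) for _ in range(50)]
last_50 = [rng.gauss(40, 12) for _ in range(50)]

t = welch_t(first_50, last_50)
# |t| well under ~2 gives no evidence that late (surrogate non-)responders
# differ from early responders on this variable
```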
Finally, we had to prepare the data for analysis. Missing item analysis (Hair, Anderson, Tatham, & Black, 1998; Jöreskog & Sörbom, 1996) caused a further 53 cases to be discarded because of missing values in one or more of the hypothesized relationships being investigated. This process left data for 151 respondents. To see whether the removal of those 53 cases affected our sample in any significant way, an analysis of variance (ANOVA) was performed on the model variables between the remaining sample cases and the 53 cases removed from the study. The ANOVA found no significant differences between the 151 respondents and the 53 removed cases. This final culling of the surveys prior to analysis resulted in a sample with roughly a 50/50 split between museums and performing arts organizations. These organizations ranged in size from those offering fewer than 10 performances/exhibitions to those producing more than 100 performances/exhibitions per year. Approximately half of the sample was drawn from non-profit private organizations, the rest from a mix of non-profit organizations affiliated with universities and non-profit public organizations.
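With only two groups (the retained cases and the discarded ones), a one-way ANOVA amounts to an F test comparing between-group and within-group variance. A standard-library sketch of the F statistic, with made-up group data standing in for the 151 kept and 53 removed cases:

```python
from statistics import mean

def anova_f(*groups):
    """F statistic for a one-way ANOVA: between- vs. within-group variance."""
    pooled = [x for g in groups for x in g]
    grand = mean(pooled)
    k, n = len(groups), len(pooled)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical model variable (e.g., annual federal funding, in $1,000s)
retained = [52, 48, 61, 45, 57, 50, 49, 55]   # stands in for the 151 kept cases
removed = [51, 47, 60, 46, 56, 53]            # stands in for the 53 discarded cases
f_stat = anova_f(retained, removed)
# A small F relative to the critical value means the culling did not
# significantly change the composition of the sample
```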
Although this case study represents the experience and procedures of collecting data via mail, the same issues present themselves when using the Internet to disseminate questionnaires. You still must decide what characteristics would describe your population and desired sample and then find a list broker for Internet information. You must be able to communicate clearly to the broker the characteristics that the persons/organizations in the resulting list should have. But there will always be a need for culling the list even if only to take a random sample. And even with Internet surveys, the issue of non-response and partial responses needs to be addressed.
The results you get from your analysis of the data will be only as good as the data that you use. Your ability to generalize your results to a larger population will only be as good as the quality of your careful selection of characteristics to use in defining your population and the resulting sample. The reliability of the parameters that are used by your analysis technique of choice is vitally affected by failure to deal with issues in the sample frame that are addressed above. These issues can and do arise, even with brokered lists and even using the Internet to deliver questionnaires.
A well-tested and thoughtfully created questionnaire can only yield good answers to your research questions if that questionnaire gets to someone who truly belongs in your sample. So, even if you are as lucky as we were to win a grant from your university to cover costs of list acquisition and mailing for your research, your purchased list will contain people or organizations that are not appropriate for your study, people or organizations who will not respond, and people who will return questionnaires partially completed. At each stage of the distribution and collection of your surveys, the number of your respondents will need to be adjusted so that you finally achieve a sample that can yield appropriate responses and allow you to test your hypotheses fairly.
At the end of this case study are suggested readings, if you wish to learn more about mail or email survey research. We have also included two articles on more recent studies of how performing arts organizations use Internet techniques (not covered in the original study) to stimulate membership and contributions, in the event that you would like an interesting take on this type of research for your own purposes of study. While non-profit organizations do not market themselves to gain profits and return on investment in the traditional sense, they do need to decide which marketing techniques will best serve the needs of their companies/museums. We included these two articles in case you have some interest in supporting the arts or see that kind of organization as a viable option for study.
Method in Action
Mark and I were pleasantly surprised by the return rate we achieved. It was much higher than we had anticipated, because the only incentives for participation were a report on the findings and the promise that the researchers would donate US$1 for each returned survey to a charitable cause. The addresses of managers interested in receiving a report were maintained separately from the data file to protect the confidentiality of the data contributed by those organizations. The organizations were mailed a report after analysis of the data. Among the factors that influenced our response rate: the questionnaire was easy to understand, made responses easy to record, and truly did take only 10–15 min to complete. Of course, we also included a stamped, self-addressed envelope, another assist in the effort to increase participation in the study.
While we thought our hypotheses were reasonable and well supported by the literature, we were again pleasantly surprised by the degree to which they were supported by the data. Of the 14 hypothesized relationships between variables, 10 were supported at the critical p value of .05. In other words, for those relationships we could reject the null hypothesis with reasonable confidence. In this study, the null hypothesis for each of the 14 relationships was that the independent variable would not affect the dependent variable. Of course, we hoped to see p values of .05 or lower, indicating that the observed relationships were unlikely to have arisen by chance alone. Remember, statisticians are very conservative about the conclusions they draw from data analysis.
The most surprising result was the failure to support the hypothesis that the arts organization's size would predict formalized decision-making within the organization. We certainly received a broad enough range of organizations by size to test this hypothesis, and the relationship was, at the time, supported by long-standing literature indicating size as a significant influence on an organization's adoption of new technologies and on the degree to which it develops formalized planning processes. Size of the organization did not prove to be a significant indicator of formalization of decision-making. However, the size of the organization and the size of its marketing expenditures were significantly related. The best explanation we could come up with was that even large arts organizations may have much smaller executive staffs than for-profits, so coordination and systematic procedures are not as necessary. For-profits often offer, and even produce, products and services in foreign markets, which requires formalized procedures to control efficiency and minimize wasted resources.
Another doctoral student, Kasandra L. Lane, and I are currently interested in researching social media effects across arts organizations of varying size. Certainly, there is quite a bit of more current research into social media and the arts, but the Arnold and Tapp study is still being cited in research on marketing and the arts. Kasandra and I are planning a research project, and we both look forward to reading and thinking about current literature to decide what we should study in this area.
Researchers should look deeply into specific characteristics and variables to develop results from a mail survey that are generalizable to the population of interest. The initial study focused on the arts industry, and the resulting report to interested respondents concentrated on results to help organizations improve marketing success within their communities. In this case study, we focus instead on the procedures that help researchers build a sample of data that is representative of, and generalizable to, the population. The steps given here offer new researchers a guide to building a successful survey study in a specific industry, and they are equally applicable to mail or email lists of potential respondents.
Exercises and Discussion Questions
- What steps should a researcher use to plan a successful mail survey that is generalizable to the population of interest?
- Why would a researcher using a list provided by a list broker need to be so concerned about sample representativeness?
- Which of the response analysis procedures explained in this case would still be necessary if you use the Internet to administer your survey?
- Why should late responders and early responders be analyzed separately?
- How does a researcher use a systematic random sampling technique?