A Personal Journey of Identifying, Developing, and Publishing Survey-Based Research in an Emerging Research Area: Incubation of Early Stage Technology Entrepreneurial Ventures

Case | Published: 2019 | Product: SAGE Research Methods Cases Part 2

Abstract

Conducting research in a newly emerging research area presents unique challenges. The context of this case study is the development of a survey-based research study on the incubation process of early stage technology ventures at a time when this research topic was first emerging. Through this case study you will go on a journey covering selecting a research topic, developing and fine-tuning a research question, streamlining a literature review, adapting measurement scales from tangential literature streams, executing a survey instrument, conducting in-person interviews to add explanatory depth to survey results, using conferences for feedback, and surviving the journal review process even when reviewers disagree. This case begins during doctoral student days and culminates in a journal publication post-graduation. Thus, this case is particularly relevant for ambitious doctoral students who want to move beyond incremental research and pursue interesting and newly emerging topics.

Learning Outcomes

By the end of this case, students should be able to

  • Systematically develop a relevant research question
  • Evaluate and justify where and how to find Early Stage Technology Entrepreneurial Ventures as a survey research sample, including samples of convenience
  • Gain insight into how in-person interviews and surveys can complement web-based surveys
  • Evaluate the value gained from conference and journal review processes
Case Overview

This case is based upon the development of a research paper titled “The Role of Incubator Interactions in Assisting New Ventures,” published in Technovation (Scillitoe & Chakrabarti, 2010). The beginnings of this research paper can be traced back to a larger research agenda associated with my doctoral dissertation. Thus, the case begins during my doctoral dissertation period, discussing how I grappled with finding the right topic (and advisor), formulating research questions, and collecting data, and then evolves into the writing of the research paper as an outcome of the dissertation. The case is divided into the following sections:

  • Is it the Right Research Topic?
  • Finding and Fine-Tuning a Relevant Research Question
  • Putting Together the Research Paper
  • Conference Presentations for Initial Feedback
  • Selecting a Journal and Surviving the Journal Review Process
Is It the Right Research Topic?

It was important to me to embark on a research area that genuinely interested me. Topic selection as a PhD student is often tied to selecting a dissertation advisor, and it can be surprisingly easy to end up working on a research topic that is not of great interest to you. Some potential advising faculty have their own research agendas and projects that students can join, but I thought: are these topics really of interest to me? For PhD students seeking an academic career, you will likely remain working on your dissertation topic for many years, beginning with 2 to 4 years of deep dissertation research that should carry enough data and insights to support multiple papers through tenure, effectively extending the time on the topic to 8 to 10 years. I couldn’t imagine spending 10 years on a research topic that I was not passionate about, could you? The Technovation paper at the center of this case was one of the papers from my PhD dissertation work, published 6 years after graduation while I was still on the tenure track.

I had received four invitations to work with faculty at both Rutgers and the New Jersey Institute of Technology (NJIT) (an option due to a joint agreement between NJIT and Rutgers) on their research areas. These areas included motivation studies, strategic alliances, leadership, and qualitative research on innovation processes. However, my interest was in university technology transfer and innovation, drawing upon my 15 years of work experience in the Chemistry Department at Princeton University combined with an additional 5 years working in the chemistry and biological sciences. I decided to make appointments with three tenured faculty who were doing research in technology and innovation management but who also did quantitative empirical work; conducting a qualitative dissertation seemed more difficult to me. I also wanted a tenured faculty member as an advisor who could better manage my doctoral dissertation committee.

Two of the faculty I interviewed declined to take me on as a research student because they already had several doctoral advisees and were not enthusiastic about my area of interest. I was deeply disappointed. The third, Dr. Chakrabarti, accepted me as a research student, but with a twist. He recommended that I take a new perspective on university technology transfer, a topical area he felt was well studied in the extant literature, and instead redirect my interest to university-linked incubators as a new form of university knowledge creation and transfer through entrepreneurial business development. Being open to feedback and to adjusting my research interest proved valuable. My realigned research topic positioned me within a newly emerging area of the literature. I am thankful I did not underestimate or ignore the experience and foresight of Dr. Chakrabarti.

Finding and Fine-Tuning a Relevant Research Question

The next challenge was to identify a relevant research question. Unlike other students I envied at the time, I was not handed a research question by my advisor but had to go through an evolutionary process that, I now understand, prepared me for my academic career. A good research question addresses a gap in the literature and will lead to well-stated propositions/hypotheses, guide study design and data collection, and produce quality results that can be published in quality journals. The importance of developing a relevant, clear, and well-articulated research question cannot be emphasized enough.

I recall spending a significant amount of time reading lots and lots of research articles related to my topical area (e.g., social capital, networks, university technology transfer, incubators and incubation, entrepreneurship, technology innovation management). I diligently read and recorded relevant information (paper title, date, authors, methods used, findings) for each paper. Within these readings, I was searching for a good research question. What I learned is that you can look in the limitations, implications, and next-steps sections of papers for author-recommended “next steps” in research, but these tend to be incremental in nature and specific to that particular paper. My dissertation advisor became frustrated with me (I recall getting emails in all caps at one point) and suggested I stop reading and start thinking about how the readings connect, where they disconnect, and where they rest upon invalid assumptions. Understanding what the literature IS NOT saying offers an opportunity to fill a gap and create new knowledge. As a result, I started to pose research questions that were less incremental and more interesting. I continued to read more articles, but my readings became more focused and specific as I fine-tuned my research question.

The research question for my Technovation article was initially “How do incubators provide support in the development of new technology-based ventures?” The extant literature at the time showed inconclusive and inconsistent results. Simply being affiliated with an incubator showed inconsistent effects on venture success, compounded by the wide variety of offerings among incubators. More recent work suggested that venture development should be approached as an incubation process, with some early exploratory work suggesting the importance of networking ties. Drawing from my readings in the technology and innovation literature, I was aware that business and technology development processes within tech firms can differ. There was some discussion in the literature about business and technology-related assistance for young ventures, but it mostly concerned business services and intellectual property rights (a very limited scope of technology assistance). From my further analysis, I noticed inconsistencies in how venture development and success were measured, as well as a focus on incubator managers as the source of data. Thinking about the role of networks, I considered learning as a measure of venture development; the organizational learning literature highlights the importance of networks as part of the learning process. Thus, my research question evolved to “How do incubators support the incubation process of technology ventures in regard to both business and technology development, with learning as an early measure of venture development?” I also decided to survey entrepreneurs within incubators as my primary data source, not the incubator managers, as many prior researchers had done. Perhaps entrepreneurs had differing experiences from what was presented by incubator managers? Also, I wanted to do theoretically driven empirical work; prior literature tended to consist of case studies or anecdotal accounts, limiting generalizability.

I then elected to visit some incubators and affiliated entrepreneurs to learn about their experiences, the importance of networks, how network contacts were found and with whom, and how business and technology development was nurtured. I selected local and easily accessible incubators associated with both NJIT and Rutgers. My advisor also secured an appointment with an incubator in Horseheads, New York, which had strong ties to Corning, Inc., to understand corporate influence, and invited me along. Unless you have significant and recent work-related experience on the topic, you will learn a lot through this initial interviewing process. Business professionals often complain that academics lack practical experience, so it is important for academics to keep in touch with practitioners. I also want to note that this early interviewing process was not official data collection for the empirical study; rather, it served to confirm, adjust, and deepen my understanding of the phenomenon of incubation, something not well discussed in the literature. I was able to reach back out to the individuals I spoke with when I began my survey-based data collection.

Putting Together the Research Paper
Literature Review

Once I had a clearly articulated and relevant research question, I streamlined my literature review. Although I had read a wide swath of literature while thinking about this research topic stemming from my dissertation work, not all readings were relevant to this paper and the fine-tuned research question. Journal editors are not impressed by the length of a paper but rather by the conciseness, clarity, and contribution it offers. Some journals impose page limits or give a suggested paper length. I do not recall a page limit when submitting my work to Technovation, but the reviewer comments certainly emphasized clarity and conciseness. A journal rejection is possible if the reviewers do not have a clear understanding of what the paper is about or feel there is simply too much work to be done. The goal is to get a revise and resubmit; very few papers are accepted without revisions.

I covered the seminal works and relevant facets of the incubation phenomenon that I was aware of through my library literature search. Google Scholar and ResearchGate are also viable sources for finding relevant literature today. Of course, the reviewers suggested other sources as part of the revision process, which is why quality reviewers matter. I was able to find a review article on incubator/incubation research (Hackett & Dilts, 2004) that also helped a lot. However, I reviewed each article cited in that review myself; I did not rely on another author’s summary account, because their perspective when writing may have been different from mine in light of my research question.

In addition, my article’s data were sampled in the United States and Finland, so I also discussed these contexts in regard to incubation in the paper, along with relevant descriptive data presented in Tables 1 and 2 as requested by reviewers. The United States and Finland have differing incubation systems so, in hindsight, I understand that these contexts were important and influenced the data analysis found in the paper’s Discussion section. Finland was a sample of convenience and, luckily, I could justify this context in my paper! I discuss how Finland was a sample of convenience in the data collection section below. If you have a sample of convenience, you will need to find a way to justify how that sample is relevant to your research study.

Drawing from good quality, recognized journals helps legitimize your research work and makes conference acceptance and publication easier. I drew from well-respected management, technology management, and entrepreneurship journals. While some of these are niche journals, they were recognized as quality outlets within the niche, as is Technovation itself. A journal rating source such as SCIMAGO or the Australian Business Deans Council (ABDC) list can be used to determine quality if you are unsure. There may also be a published journal article discussing the quality of journals in niche areas such as technology and innovation management (e.g., Linton & Thongpapanl, 2004). Sources used should be double-blind peer-reviewed whenever possible. Books and conference papers are not always blind peer-reviewed, or may only be single-blind peer-reviewed. In addition, conference reviewers are often a mix of seasoned and new academics along with many doctoral students, so the quality of reviews is not always consistent.

Using Valid and Reliable Measures

Measures and data related to early stage technology venture development were difficult to find, and the ventures involved were private, non-public firms, further limiting information availability. These ventures lacked the traditional measures of success used for larger firms such as revenue, profits, financial ratios, number of employees, patents, and other easily quantified measures. In addition, technological innovation is considered a messy, non-linear process that is also difficult to measure. Likewise, learning as an early measure of venture business and technology development is not easily measured but could offer valuable insights to the literature. Due to these measurement challenges, secondary data would not suffice.

Getting information directly from entrepreneurs was required for this paper, so a survey instrument was developed. Survey questions were adapted from existing measurement scales in the literature whenever possible, using Likert-type scaling to quantify responses and enable statistical analysis. This adaptation meant taking scales used for larger firms and from outside the incubation literature and adapting the questions for a small technology venture. For example, the measure for Business Assistance used in the Technovation article was adapted from the User Preference scale of von Hippel (1995). I sought a measure of buyer preferences specific to technological innovations relevant to my study, since most incubators in practice focused on marketing and buyer awareness as business assistance. von Hippel was measuring User Preference in the context of lead users as contributors toward innovation within large innovative firms, and he stated that the technology preferences of lead users were similar to those of other technology users in the market (i.e., buyers). The specific User Preferences von Hippel mentioned included 17 attributes (see von Hippel, 1995, p. 116) specific to his study of CAD system technology. In my study, I adopted three of these attributes (integration with other products or systems, features, and maintenance) that were relevant to entrepreneurial ventures but also applicable to various types of technological innovations. I defined technologies from a broad perspective in my study (advanced materials, aerospace, biotechnology, computer software, environmental technology, medical and surgical equipment, pharmaceuticals, specialty chemicals, telecommunications, online businesses, and health-care related services), consistent with prior literature (e.g., Zahra, Ireland, & Hitt, 2000), and subsequently included a wide variety of technology ventures in my sample. Sampling a wide variety of technology ventures was largely done due to sample size concerns and expected low response rates from busy and difficult-to-find entrepreneurs; focusing on a single technology was considered too limiting to obtain sufficient data for empirical testing. Given my low response rate for this study, this was a wise choice.

Of the 10 variables in my study, five required multi-item scales and were drawn from the existing literature. Each scale used is cited under its variable description in the Measures section of the article. Factor analysis as well as reliability tests were conducted and reported for the scaled measures after data collection. Luckily, because established scales were used, these measures performed well with my data too.
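
For readers who have not run such a reliability check before, below is a minimal sketch in Python (the article itself relied on SPSS) computing one common reliability statistic, Cronbach's alpha, for a multi-item Likert-type scale. The item names and responses are hypothetical, invented purely for illustration, not data from the study.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha for a multi-item scale (one column per item)."""
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 5-point Likert responses for a three-item scale
rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=60)  # shared signal so the items correlate
scale_items = pd.DataFrame({
    "item1": base,
    "item2": np.clip(base + rng.integers(-1, 2, size=60), 1, 5),
    "item3": np.clip(base + rng.integers(-1, 2, size=60), 1, 5),
})
print(f"Cronbach's alpha: {cronbach_alpha(scale_items):.2f}")
```

A common rule of thumb is that an alpha of roughly 0.7 or higher suggests acceptable internal consistency, although conventions vary by field.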

Finalizing the Survey, Collecting the Data, and Analysis

I found some books on survey design to better phrase questions and organize their flow. One trick was to place more sensitive questions, which could turn off a respondent, at the end of the survey, while easier to answer, less sensitive questions were asked at the beginning. As many of my scales were taken and adapted from the extant literature, there was not much to change in the survey questions.

My dissertation advisor had secured a National Science Foundation (NSF) grant and had contacts at Helsinki University of Technology (HUT) (now a part of Aalto University) who secured funds from Tekes (now known as Business Finland) and provided me with office space. These resources allowed me to travel to and work in Helsinki, Finland, resulting in a sample of convenience. I asked Finnish and U.S. research colleagues to read the survey to ensure the instructions and questions were clear and understandable. I did not conduct a pilot study because response rates to the web-based survey were expected to be low. I was concerned that the lack of a pilot study would limit publication potential, but I was transparent about it in the article and it was not an issue.

All survey respondents, whether in person or online, were promised confidentiality and offered an executive summary of the study results if desired. I met with some incubator managers in the United States and Finland, asking them open-ended questions to understand their perspective on venture incubation. I also met with some U.S. and Finnish tech entrepreneurs, asking them the survey questions and follow-up questions about their answers, and I kept copious notes of these meetings. Beyond the survey responses, the additional information gained provided a vital source of explanatory power for the article. Amazingly, getting 30 min of time with entrepreneurs in the United States required some arm twisting by the incubator managers, who were interested in the study results; the U.S. incubator managers themselves were much more giving of their time. Both Finnish entrepreneurs and incubator managers were willing to chat for over an hour to discuss my research and their experiences. I am sure my affiliation with HUT, which helped set up the Finnish interviews, helped, but I believe there was also a cultural difference.

After I conducted the in-person surveys and interviews, I sent out links to the Qualtrics web-based survey to a larger sample. Qualtrics ensures data integrity and confidentiality, and free access was provided by my institution. I reached out to tech incubator managers in the NYC and Helsinki metropolitan areas, as well as those listed in the National Business Incubation Association (now the International Business Innovation Association, InBIA), asking for venture emails or distribution of the survey link to their affiliated ventures. A total of 527 technology ventures were invited to participate in the surveys across 39 incubators in the United States and eight in Finland; Finnish incubators tended to be much larger and fewer in number. In all, 42 valid responses were received, 28 U.S. and 14 Finnish, resulting in an 8% response rate.

Once the data were secured, some coding and factor analysis were required before statistical analysis with the SPSS statistical software package. For example, Nation was a control variable, and respondents simply checked the box of their national location in the survey. This information was coded as a dummy variable (0 = Finland, 1 = USA) in preparation for statistical analysis. A similar process was followed for other check-box questions such as Desire for Business Assistance and Desire for Technical Assistance. Also, for multi-item scales such as Business Assistance and Technology Assistance, factor analysis was performed to identify a single factor for statistical analysis. How each variable was measured, coded, and analyzed for the regression analysis is described under each variable in the article. Using dummy variables and Likert-type scales to quantify the variables made regression analysis possible.
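
To make this data-preparation step concrete, here is a brief sketch in Python rather than SPSS, using invented column names and values (not the study's data): a check-box variable is dummy-coded and a single factor score is extracted from a multi-item scale so that it can enter a regression as one predictor.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical survey extract: a nation check-box plus three Likert items
# assumed to load on a single "business assistance" factor.
df = pd.DataFrame({
    "nation":   ["Finland", "USA", "USA", "Finland", "USA", "Finland"],
    "ba_item1": [4, 5, 3, 2, 5, 3],
    "ba_item2": [4, 4, 3, 2, 5, 2],
    "ba_item3": [5, 4, 2, 1, 4, 3],
})

# Dummy-code the check-box variable (0 = Finland, 1 = USA), as in the article
df["nation_dummy"] = (df["nation"] == "USA").astype(int)

# Extract a single factor score from the multi-item scale
fa = FactorAnalysis(n_components=1, random_state=0)
df["business_assistance"] = fa.fit_transform(
    df[["ba_item1", "ba_item2", "ba_item3"]]
).ravel()

print(df[["nation_dummy", "business_assistance"]])
```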

Two hierarchical multiple regression models, both statistically significant, were run to understand the dynamics of business assistance and technology assistance as two separate processes. Hierarchical multiple regression is a variation of multiple regression in which variables are entered in a fixed order, often in blocks or groups, to see the effects of certain predictors (Gelman & Hill, 2007). As seen in Tables 4 and 5 of the Technovation article, control variables were the first block added, as noted in Models 1 and 4. Counseling ties were then added to the control variables in Models 2 and 5, which showed how much added effect counseling ties had on the model. Alternatively, networking ties were added to the control variables in Models 3 and 6 for similar reasons. Networking and counseling ties were added in separate models to see their individual effects on Business Assistance and Technical Assistance as the dependent variables.
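
As an illustration of the block-entry logic, below is a minimal sketch in Python with statsmodels (the article used SPSS), built on simulated data and invented variable names: the controls are entered first, then counseling ties, and the change in R-squared indicates the added explanatory power of the new block.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated data with invented names standing in for controls,
# counseling ties, and a business-assistance dependent variable
rng = np.random.default_rng(1)
n = 42
df = pd.DataFrame({
    "nation_dummy":    rng.integers(0, 2, n),
    "venture_age":     rng.integers(1, 6, n),
    "counseling_ties": rng.normal(size=n),
})
df["business_assist"] = 0.5 * df["counseling_ties"] + rng.normal(scale=0.8, size=n)

# Block 1: controls only (analogous to a controls-only model)
X1 = sm.add_constant(df[["nation_dummy", "venture_age"]])
m1 = sm.OLS(df["business_assist"], X1).fit()

# Block 2: controls plus counseling ties
X2 = sm.add_constant(df[["nation_dummy", "venture_age", "counseling_ties"]])
m2 = sm.OLS(df["business_assist"], X2).fit()

# The increase in R-squared shows the added effect of the new block
print(f"R2, controls only:        {m1.rsquared:.3f}")
print(f"R2, with counseling ties: {m2.rsquared:.3f} (delta = {m2.rsquared - m1.rsquared:.3f})")
```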

Not every hypothesis in the study was supported, but each result contributed to the larger story told in the article. The differing statistical significance of the variables across the dependent variables told the story of how business and technology assistance involve different network processes. I sent an executive summary of the paper to everyone who requested one. The President and CEO of the Ben Franklin Technology Partners of Southeastern Pennsylvania (BFTP), who helped with sampling within her incubators, said I was the only researcher to follow through with this promise, which opened the door for future collaborations.

Conference Presentations for Initial Feedback

A footnote within my Technovation article states that “an earlier version of this paper was presented at the 2008 Academy of Management Conference.” Conferences serve as an avenue to share knowledge, get initial feedback, and find others interested in the topic. While smaller conferences make networking easier, well-known and respected conferences, such as the AOM (Academy of Management), were important for the reviews, the presentation feedback, and my resume when seeking a tenure-track job and, later, tenure.

Although the AOM presentation for this paper was my eighth research presentation at a major conference, I do want to comment on first-time experiences. During my first few conference presentations, I was not prepared to receive criticism. I also watched a PhD candidate vehemently and coarsely defend his paper against a distinguished professor in the audience who, apparently unbeknownst to the presenter, was well known and widely published in the area; academics tend to know names better than faces. I found that being professional and polite served me well. Further explaining a paper in light of questions raised can be part of a productive dialogue, but welcoming constructive criticism also helped me grow and learn. I have since kept a pen and paper handy at every presentation, including job talks, to take notes, and I always thank the audience for their insights. I tried to have co-authors or academic friends present as moral support during my first few conference presentations, and I continue this practice today as part of a supportive network. In addition, while a doctoral student I met several long-term mentors through conferences who have supported me through the years and provided career mentoring toward securing jobs, tenure, two Fulbright Scholar grants, promotions, and, most recently, an endowed chair.

Selecting a Journal and Surviving the Journal Review Process

I selected Technovation as my target journal due to its impact and citation scores as well as topical fit. I also used several sources published in Technovation in my paper. Journal reviews can take 4 to 6 months, sometimes longer. Sending your paper to a journal that is not a good topical fit can result in unnecessary delays toward publication as you await a response. The timeline for my article was as follows: Initial submission in April 2008, Revision 1 submitted in January 2009, Revision 2 submitted in September 2009, and published in March 2010. The entire process from submission to print took approximately 2 years.

The review process was challenging and, at times, discouraging. I offer some comments from the reviewers that highlight the challenges of the revision process. Below is a comment from Reviewer 1 during the first review, leading to the first revision; after this comment, the reviewer went on to give specific details of what was wrong with the paper:

From our point of view, this paper addresses one of the crucial questions in business incubator (BI)-research: What are the effects of counseling and networking interactions between tenant companies and the incubator management? In particular, the distinction the authors made between direct interactions on the one side, and the intermediary function of BIs, called net-working interactions, is an important one. However, this paper has some severe shortcomings, and in its present form it is not publishable in Technovation.

This same reviewer then stated the following after Revision 1:

This revised version has significant improvements. I feel that the comments made on the initial submission have been considered carefully by the authors. Only few questions/recommendations remain.

However, a new reviewer (Reviewer 3), brought in to review Revision 1, stated the following; thankfully, the editor allowed another revision instead of the recommended rejection:

The paper raises the research question: do different types of interactions (network and direct) enable beneficial technical and business assistance? This question seems both timely and relevant, given the increasing focus among practitioners, politicians and academics on entrepreneurship and new business venturing. Moreover, the question seems also relevant for the journal. However, there are a number of serious problems with the paper. I therefore recommend that the paper is rejected. In the following I outline my reservations, first with respect to the framing of the research question and the use of theory, second with data and operationalization/methodology issues.

My article underwent three rounds of reviews. The first two rounds each involved two blind reviewers (Reviewers 1 and 2, then Reviewers 1 and 3), resulting in two major revisions. After the second major revision, there was a final review by the editorial team before acceptance. Each revision seemed so major and difficult that I was almost convinced it was impossible to do; every part of the paper had to be reworked in some way. Even my academic advisor and co-author thought the likelihood of publication was not high given a second major revision and recommended submitting to another journal instead. I persevered, and the paper was finally accepted. I kept reminding myself that a revision is not a rejection; it means the reviewers and/or editor(s) remain interested and still see value in the paper, and each revision allows the paper to get stronger. Linking back to my earlier comments about explanatory depth, the in-person interviews proved extremely valuable in the revision process, as I was able to provide quotes and insights to support my arguments beyond the hypotheses and survey questions.

I would also like to note that submitting to high quality journals means you get high quality reviewers. These reviewers provide deep and constructive comments that are usually very insightful and relevant. As noted above, conference reviews can be inconsistent in quality. I always recommend sending your paper to a high quality but relevant journal; even a rejection can yield valuable feedback.

Conclusion

This case study shares a personal journey of identifying, developing, and eventually publishing a survey-based research paper in an emerging research area. The case begins with doctoral dissertation work and highlights many of the challenges and experiences encountered for the first time, making it relevant and relatable for doctoral students. In addition, publishing in an emerging research area requires some specific activities that differ from more incremental research, such as articulating a research question not easily identified within the extant literature, streamlining the literature review, adapting survey measures from tangential literature, and gaining explanatory depth from in-person interviews. This case study leads students through the process of finding the right topic, articulating a clear and relevant research question, writing the paper, getting feedback via conferences, and finally surviving the publication process. I hope the journey shared in this case is helpful to those just beginning the same journey.

Exercises and Discussion Questions
  • Would you prefer to work with a doctoral advisor who asks you to work on their research projects or seek a doctoral advisor who supports your interest area? How might you begin these conversations? How would you approach these conversations with openness to feedback?
  • In your own area of interest, what are the areas you might draw from in your literature review? Is your area of interest within or across existing streams of literature?
  • What kinds of notes might you take during your literature review to help you see where the readings might connect, disconnect, and/or are based upon invalid assumptions?
  • If there is not much prior survey research done relevant to your research question, where else might you look for valid and reliable survey scales? Would you prefer to develop your own scales or adapt existing scales from tangential literature?
  • What journals are considered high quality in your field that could be used in your literature review? If your literature review involves journal articles outside your field, how can you judge the quality of these journal articles?
  • How can in-person interviews help add explanatory power to your research papers?
  • What is a sample of convenience? Why must you always consider justifying a sample of convenience in your data collection? Can a sample of convenience negatively affect your ability to publish your findings?
  • Why is it helpful to take notes of all questions and comments at conferences and job talks?
Further Reading
Hausberg, J. P., & Korreck, S. (2018). Business incubators and accelerators: A co-citation analysis-based, systematic literature review. Journal of Technology Transfer. Advance online publication. doi:http://dx.doi.org/10.1007/s10961-018-9651-y
Marlow, S., & McAdam, M. (2013). Incubation or induction? Gendered identity work in the context of technology business incubation. Entrepreneurship Theory and Practice, 39, 791–816. doi:http://dx.doi.org/10.1111/etap.12062
Mian, S., Lamine, W., & Fayolle, A. (2016). Technology business incubation: An overview of the state of knowledge. Technovation, 50–51, 1–12. doi:http://dx.doi.org/10.1016/j.technovation.2016.02.005
Scillitoe, J. L., & Chakrabarti, A. K. (2005). Sources of social capital within technology incubators. In T. R. Anderson, T. U. Daim, & D. F. Kocaoglu (Eds.), Technology management: A unifying discipline for melting the boundaries (pp. 423–432). Portland, OR: IEEE PICMET.
Web Resources

Australian Business Deans Council, http://www.abdc.edu.au/master-journal-list.php

Business Finland, https://www.businessfinland.fi/en/

International Business Innovation Association (InBIA), https://inbia.org/

National Science Foundation, https://www.nsf.gov/

Qualtrics, https://www.qualtrics.com

SCIMAGO, https://www.scimagojr.com

References
Gelman, A., & Hill, J. (2007). Data analysis using regression and multilevel/hierarchical models. Cambridge, UK: Cambridge University Press.
Hackett, S. M., & Dilts, D. M. (2004). A systematic review of business incubation research. Journal of Technology Transfer, 29, 55–82. doi:http://dx.doi.org/10.1023/B:JOTT.0000011181.11952.0f
Linton, J. D., & Thongpapanl, N. (2004). Perspective: Ranking the technology innovation management journals. Journal of Product Innovation Management, 21, 123–139. doi:http://dx.doi.org/10.1111/j.0737-6782.2004.00062.x
Scillitoe, J. L., & Chakrabarti, A. K. (2010). The role of incubator interactions in assisting new ventures. Technovation, 30, 155–167. doi:http://dx.doi.org/10.1016/j.technovation.2009.12.002
von Hippel, E. (1995). The sources of innovation. New York, NY: Oxford University Press.
Zahra, S. A., Ireland, R. D., & Hitt, M. A. (2000). International expansion by new venture firms: International diversity, mode of market entry, technological learning, and performance. Academy of Management Journal, 43, 925–950. doi:http://dx.doi.org/10.5465/1556420
