
Mixed Methods Research

FOUNDATION
By: Cornelia Thierbach, Jannis Hergesell & Nina Baur | Edited by: Paul Atkinson, Sara Delamont, Alexandru Cernat, Joseph W. Sakshaug & Richard A. Williams | Published: 2020 | Length: 10

Abstract

Mixed methods research (MMR) combines at least one qualitative and one quantitative research component within a single study or a series of studies. After embedding MMR in the history of social science methodology and epistemology, this entry reviews the current state of research concerning key decisions researchers have to make in mixed methods studies: (1) When choosing a research design, scholars have to decide upon the purposes of mixing methods. (2) When deciding which data to mix in which way, it is essential not only whether data are qualitative or quantitative but also whether they are verbal or visual as well as research-elicited or process-produced. Researchers have to consider how data are compatible and can be combined along all these dimensions. (3) Mixed methods sampling procedures allow for resolving several trade-offs between qualitative and quantitative research, namely whether generalization should be based on probability theory or social theory and whether a large sample allowing for better generalizations or a smaller sample allowing for in-depth analysis is preferable. However, it is as yet unresolved how to reconcile linear with iterative sampling strategies and how to properly define populations, contexts, and fields. (4) Qualitative and quantitative data analysis strategies can not only complement each other but also provide new insights if they are integrated. (5) When assessing research quality, MMR not only has to meet the criteria of single method research but also needs specific quality criteria for MMR itself—what these criteria should look like is still being intensively discussed.

Introduction

Mixed methods research (MMR) combines at least one qualitative and one quantitative research component within a single study or a series of studies and can be defined as follows:

MMR is the type of research in which a researcher or a team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding, and corroboration (Johnson et al., 2007).

In contrast to MMR, “multi-method research” combines either solely multiple qualitative approaches (e.g., narrative interviews with ethnography) or solely multiple quantitative approaches (e.g., public administrational data with surveys). In quantitative research, “mixed mode research” combines different ways of collecting a specific type of data (e.g., online, telephone, and face-to-face surveys). Some research traditions are usually not classified as mixed methods but in fact are—for example, qualitative comparative analysis (QCA), computational social science, and big data analysis.

In MMR, both qualitative and quantitative research strands are integrated, or mixed, at some point in the research process. MMR thus presumes compatibility—that is, that qualitative and quantitative methods can be combined. MMR also assumes that both qualitative and quantitative methods have specific limitations which make mixing almost mandatory in order to compensate for these shortcomings. Even if one does not take this strong stance, MMR is a suitable approach either when researchers lack suitable data for answering their specific research question or when researchers’ purposes and rationales cannot be achieved by single method research approaches due to the nature of the research question or analyzed phenomenon. The main criterion for the choice or development of a suitable (mixed methods) research strategy is its orientation towards the research question. One of the specific strengths of MMR is that it offers multiple options for tailoring a research design that accurately fits a specific research question. It is therefore unsurprising that since the mid-1990s, MMR has become more and more common in different fields of the social sciences.

History of MMR

Current debates on how to conduct, justify, and assess the quality of MMR are largely shaped by the history of social science methodology in general and MMR in particular. It therefore helps to reflect upon this history in order to understand contemporary methodological discussions.

In this context, it is important to note that in social science research, methods have been mixed since the beginning, early examples being Frederick Engels’s (1845/1887) “The Condition of the Working Class in England,” Robert Staughton Lynd and Helen Merrell Lynd’s (1929/1957) “Middletown,” or Marie Jahoda, Paul F. Lazarsfeld, and Hans Zeisel’s (1933/1971) “Marienthal” study. In addition, many research traditions which officially practice single method research in fact typically mix methods. For example, in survey research, qualitative interviews are often used for pretesting and evaluating survey questions, and most surveys contain open-ended questions. In ethnography, typically at least some types of quantitative data (e.g., public administrational statistics) are mixed with qualitative observational data. However, these studies typically either do not reflect at all on the fact that methods are mixed, or the mixing of methods is legitimized only pragmatically (Kelle, 2017).

A systematic methodological debate on mixed methods only started in the 1970s in the Anglo-Saxon countries. While MMR is negotiated differently in various national and international contexts as well as across disciplines, even today, the debate is strongly influenced by the U.S. debate. For the United States, John W. Creswell and Vicki L. Plano Clark (2011) as well as Creswell in his introduction to Udo Kuckartz (2014) distinguished five overlapping stages in the evolution of MMR:

  1. Formative Period (1960s‒1980s). From the beginning, the mixed methods debate has been shaped and driven by scholars from Anglo-Saxon countries (especially the United States) from specific subfields within the social sciences—for example, health, nursing and educational research, family medicine, psychology and evaluation research, as well as some parts of sociology and management research. In these subfields, specific single methods (namely social experiments and evaluations) have been the dominating research strategies in the 20th century in contrast to, say, large-scale cross-cultural and longitudinal surveys in social inequality research or ethnography in urban and spatial research. Mixed methods debates have been shaped by this beginning, as many mixed methods scholars use the dominating methods in the original subfields as a reference point for building their arguments. This means that even today, mixed methods researchers typically assume that most research is quantitatively driven.
  2. Period of Paradigm Debates (since 1970s). In order to establish MMR as a standalone research tradition, proponents soon started demarcating MMR from single method approaches in an epistemological debate that oversimplified complex philosophical problems and preserved outdated frontlines (Kelle, 2017). Central issues were the epistemological foundations of MMR and whether qualitative and quantitative methods are incommensurable.
  3. Period of Procedural Development (since late 1980s). Since the 1980s, mixed methods researchers also started to reflect on more practical issues such as data collection and analysis, research designs, purposes of applying MMR, how to assess the validity of mixed methods, and ways of generalizing in MMR.
  4. Period of Harmonization and Reflection (since 2003). The publication of Abbas Tashakkori and Charles Teddlie’s Handbook of Mixed Methods in Social & Behavioral Research (2003/2010) can be set as the point when MMR could be counted as an established research strategy. In the course of institutionalization, further handbooks and textbooks were written. The Journal of Mixed Methods Research was founded in 2007, and the Mixed Methods International Association was established in 2013.
  5. Period of Advocacy and Expansion (since 2010s). As a result of successful institutionalization, an international and interdisciplinary MMR community has evolved. The number of MMR studies and publications has grown rapidly, and a network of systematic methodological training and workshops has developed.

Epistemological Foundations of MMR

Since the beginning of the mixed methods movement, its protagonists have introduced MMR as a “third methodological paradigm” in reference to qualitative and quantitative research. This triggered debates about its methodology and its epistemological foundations. In the so-called paradigm wars, mixed methods scholars typically crudely distinguish between “constructivism” and “positivism” (Johnson et al., 2017), neglecting that there are many different types of philosophies of science and associated epistemologies and subschools (Kelle, 2017), such as pragmatism, phenomenology, critical rationalism, critical theory, radical constructivism, relationism, postmodernism, anarchism, epistemological historism, fallibilism, or evolutionary epistemology. Next, mixed methods scholars typically make another oversimplification, assuming that qualitative research is constructivist and quantitative research is positivist.

Based on these assumptions, since the 1970s, the epistemological debate has centered on the question of whether quantitative and qualitative “paradigms” can be combined. The result of this debate is the so-called compatibility thesis, which argues that they indeed can.

Another thread of discussion is whether MMR needs a methodological foundation and if there is one best worldview. In this regard, positions diverge. For example, whereas R. Burke Johnson and colleagues (2017) suggested that pragmatism is well suited for this purpose, Udo Kelle (2017) argued that researchers neither need to commit themselves permanently to an epistemological paradigm nor can they avoid epistemological reflections during the research process.

All in all, most of the epistemological debates on mixed methods are based on oversimplified or even false assumptions. Therefore, despite almost 50 years of debates on the epistemological foundations of MMR, the debate is still at its starting point (Kelle, 2017).

MMR Designs

In contrast to the epistemological debate, the discussion on MMR designs is very advanced. Setting up a research design means operationalizing the research question: Scholars develop a guideline on how to proceed in order to actually answer the research question in research practice. This includes sampling strategies, choice of methods, as well as strategies for data analysis, generalization, and validation. Based on the state of the art of this extensive debate, the following subsections contain the main decisions researchers must make when operationalizing MMR questions and developing research designs.

Purposes of MMR

A general principle of social research is to select the methods that are best suited for answering a research question. If one has a choice between several suitable methods, the preferable method is the one that can answer the question with a minimum of time, money, and other resources. As MMR combines several methods, mixed methods studies almost always take more effort than single method studies. Therefore, two questions arise: Why and when do mixed methods become reasonable, preferable, or even a necessity? For what purpose are methods mixed? The answer to both questions is important in order to decide on an appropriate MMR design. It is thus unsurprising that purposes of MMR have been an important issue of the mixed methods discourse from the very beginning. The methodological groundwork was provided by Jennifer C. Greene, Valerie J. Caracelli, and Wendy F. Graham (1989), who distinguished five purposes of MMR:

  1. Triangulation aims at convergence. If applying multiple methods produces convergent results, validity is increased.
  2. Complementarity searches for elaboration, enhancement, illustration, or clarification of the results from one method with the results from the other method.
  3. Development seeks to use the results from one method to develop or inform the other method concerning the sampling strategy or the construction of instruments.
  4. Initiation aims at the discovery of paradox or conflicting results. Results are analyzed from different perspectives using various methods. In doing so, mixed methods increase the breadth and depth of inquiry results and interpretations.
  5. Expansion aims at extending the breadth and range of a study by using different methods that are most appropriate for different parts of the study.

To this day, Greene and colleagues’ (1989) list of purposes of MMR remains a reference point of the debate and has been continuously supplemented with more concrete rationales (e.g., Bryman, 2006).

How to Construct an MMR Design?

After researchers have decided that mixing is sensible and to what end they want to mix, they can start actually constructing the MMR design. When doing so, researchers have to reflect on various primary and secondary dimensions (Schoonenboom & Johnson, 2017):

While all dimensions are equally important, the secondary dimensions are not specific to MMR but have to be considered in single method research as well. They include the phenomenon, social theory, ideological drive, sampling methods, the degrees to which the research participants as well as the researchers on the research team will be similar or different, the type of implementation setting, the degree to which the methods are similar or different, validity criteria and strategies, as well as study type.

In contrast, the primary dimensions are specific to MMR designs in the sense that they cover issues that only arise if one actually mixes methods. The key primary dimensions are:

  1. Purpose. Researchers should start with the research questions and then consider why they mix methods, that is, if mixing methods actually makes sense and—if so—what the specific purposes of mixing are. Different purposes call for different ways of mixing methods.
  2. Theoretical Drive. Theoretical drive refers to whether the qualitative and quantitative components have “equal status” or whether either the qualitative or the quantitative component is prioritized. In the latter case, the prioritized component is called “core component,” the other component is called “supplemental component.” A core component must be able to stand on its own.
  3. Timing. In research practice, research takes time (e.g., several weeks, months, or even years). Timing consists of two subdimensions and addresses the issue of how the qualitative and quantitative components are linked to each other over time in the course of the research process:
    1. Simultaneity. This deals with whether research components are conducted at the same time (“parallel design” or “concurrent design”) or whether research components are conducted one after another (“sequential design”).
    2. Dependence. Research components are considered “dependent,” if the implementation of the second component depends on the results of the first component. In this case, the research design needs to be sequential, as the second component can only be started after the first one has been completed. In contrast, components are “independent” if the implementation does not require results of one another. In this case, the research design can be either parallel or sequential.
  4. Point of Integration or Point of Interface. According to the definition of mixed methods, qualitative and quantitative components must be brought together at some point. At this point of interface, components are mixed or more precisely, carefully integrated. Researchers need to determine where the points of integration will be (e.g., during conceptualization, data collection, data analysis, generalization, or when writing up research) and how the results will be integrated. While this is one of the most important decisions in designing MMR, many issues are still unresolved in the methodological debate. Even though in theory, many points of interface would be possible, due to lack of systematic methodological guidance, in research practice, there are common ways of integrating the components: For dependent sequential research designs, researchers often apply the “analytical point of integration”—that is, they first conduct the research of one component and analyze it. If the first component was qualitative, they integrate by “quantitizing.” If the first component was quantitative, they integrate by “qualitizing.” For all other MMR designs, researchers typically choose the “result point of integration”—that is, they first write up the results of one component and then add the results of the second one.
  5. Typological Versus Interactive Design Approach. The basic question here is, should researchers select a design from a typology or design a customized one? In opposition to a typological design approach, an interactive design approach views the development of an MMR design as an interactive process. During the research process, the components are continuously compared to each other and adapted to each other. Especially complex designs often require an interactive approach in which the research design is adapted and modified as needed.
  6. Planned Versus Emergent Design. Researchers have to decide if they want to plan and develop the whole research design prior to conducting the actual study or if the research design continuously adapts to the issues arising while the study is conducted and thus slowly emerges.
  7. Complexity. Complexity is viewed in various ways. For example, a design can be considered complex if there is more than one point of integration, or complexity can refer not only to the number of components but also to how and to what extent they depend on each other. How complex the research design needs to be depends on what is needed in order to answer the research questions.

Deciding for a Specific MMR Design

The answers to the aforementioned questions are the basis for operationalization—that is, for actually constructing a specific MMR design. When doing this, researchers not only have to decide on the number of components—there must be at least one qualitative and one quantitative component but there may be more—but they also have to address how they plan to conduct the typical phases of a research project, namely planning, sampling, data collection, data analysis and interpretation, and evaluation of the results. It is characteristic of mixed methods studies that certain phases are run through several times (e.g., once for the qualitative component and once for the quantitative component) and that the components must be integrated or mixed at some point.

In the course of discussion on MMR designs, researchers also developed notation systems in order to simplify the representation of MMR designs in academic writing. A commonly used notation system is suggested by Janice M. Morse (1991/2006), in which qualitative research components are indicated with “qual” or “QUAL” and quantitative research components with “quan” or “QUAN.” Capital letters signal priority or core components; lower case letters stand for supplemental components. While the concurrent implementation of both or multiple methods is indicated by a plus sign (“+”), a sequential implementation of components is symbolized with an arrow (“→”). Using this notation system, a sequential mixed methods design with a quantitative core component would be notated as follows: qual → QUAN.

When needed, researchers can develop their own, complex mixed methods design. However, MMR has also provided a number of basic research designs which are suitable for most purposes (Kuckartz, 2014):

  1. Parallel mixed designs (also termed “convergent mixed designs” or “concurrent mixed designs”). Each mixed methods study starts with the planning phase. When conducting a parallel design, after the planning phase, the qualitative and quantitative components are carried out separately and simultaneously. For sampling, data collection and data analysis, each subproject adheres to the respective standards of the corresponding single method research strands. Results, too, are compiled separately for each component in the research reports. Only after this, results of both subprojects are related to each other. Parallel design studies can aim at triangulation or complementarity. Depending on the study’s theoretical drive, parallel design studies are notated as QUAL + quan, qual + QUAN, or QUAL + QUAN.
  2. Sequential mixed designs. In sequential designs, too, the subprojects are carried out independently but (in contrast to parallel designs) one after another. Results of the first substudy influence the subsequent substudy—that is, its conception or its implementation. In addition, sampling strategies of the subsequent phase can be informed by the first phase. Therefore, sequential designs are not suitable for triangulation. Two subtypes of sequential designs can be distinguished:
    1. Explanatory mixed designs. The explanatory design starts with a quantitative component which is followed by a qualitative component. It aims at better understanding the quantitative part of the study by using qualitative methods for getting in-depth knowledge on some unresolved aspects of the research question, thus increasing the study’s overall validity. Depending on the study’s theoretical drive, the study can be notated with QUAN → qual, quan → QUAL, and QUAN → QUAL.
    2. Exploratory mixed designs. The exploratory design starts with a qualitative component which is followed by a quantitative component and aims at generalizing the results of the qualitative study by using quantitative methods. Depending on the study’s theoretical drive, the notation can be QUAL → quan, qual → QUAN, and QUAL → QUAN.
  3. Transfer mixed designs (also termed “conversion mixed designs” or “transformative mixed designs”) are characterized by the idea that one data type is converted into the other during data analysis by either “quantitizing” qualitative data into quantitative data or “qualitizing” quantitative data into qualitative data. This can mean that either during data collection, only one data type is used, which is later analyzed by using both qualitative and quantitative methods, or that both qualitative and quantitative data are collected, which during data analysis are integrated by using either mostly qualitative methods (qualitizing) or quantitative methods (quantitizing). The respective other method is mainly used for illustration or serves as background information. There is no specific notation for transfer designs.
  4. Complex mixed designs. If the previous research designs do not suffice to answer the research question, more complex research designs can be developed which also have more complex notation systems. Popular complex designs are three-phase or multiphase designs, embedded designs, fully integrated designs, hybrid designs, or designs with multiple points of inference. Designs in evaluation studies are particularly often complex designs.
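The “quantitizing” step characteristic of transfer designs can be illustrated with a minimal sketch. The following Python example is not drawn from the mixed methods literature itself; all theme names, cases, and the helper function quantitize are hypothetical. It counts how often each qualitatively coded theme appears per interviewee, yielding a case-by-variable count matrix that could then be analyzed statistically.

```python
# Minimal sketch of "quantitizing" in a transfer mixed design:
# qualitative codes assigned during analysis are converted into a
# quantitative case-by-variable count matrix. All names are hypothetical.

coded_interviews = {
    "interviewee_1": ["humor", "exercise", "family_time", "humor"],
    "interviewee_2": ["talking_with_coworkers", "exercise"],
    "interviewee_3": ["family_time"],
}

themes = ["humor", "exercise", "family_time", "talking_with_coworkers"]

def quantitize(coded, theme_list):
    """Count how often each theme was coded for each case."""
    return {
        case: {theme: codes.count(theme) for theme in theme_list}
        for case, codes in coded.items()
    }

matrix = quantitize(coded_interviews, themes)
# matrix["interviewee_1"] → {"humor": 2, "exercise": 1,
#                            "family_time": 1, "talking_with_coworkers": 0}
```

The resulting matrix could, for instance, be correlated with standardized survey variables; the reverse direction (qualitizing) would instead group numeric values into interpretable categories.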

Data for Mixed Methods

Researchers must carefully evaluate what kind of data they need in order to get answers to their research questions and how to obtain them. In all research traditions, data collection is entwined with sampling (see Mixed Methods Sampling section) and consists of several components which need to be considered: obtaining permissions (e.g., in health research, an approval by ethics commissions or internal review boards is often necessary), collecting data, recording data, administering, and archiving the data (Creswell & Plano Clark, 2011). MMR mixes qualitative with quantitative data, which is why data collection needs to be done alongside both research strands and their standards.

Within-strategies of mixed methods data collection involve collecting both qualitative and quantitative information in the same data collection process (e.g., asking both closed- and open-ended questions in a survey). Between-strategies of mixed methods data collection involve collecting qualitative and quantitative data separately (Teddlie & Tashakkori, 2009). In research practice, the most common between-strategy is mixing surveys and qualitative interviews (Teddlie & Tashakkori, 2009), which is why the difference between qualitative and quantitative data as well as how they can be mixed is best explained using surveys and qualitative interviews as examples.

Mixing Qualitative Interviews and Surveys

For quantitative data, data collection is highly standardized in order to make data as objective as possible. For example, in surveys, every interviewee is asked exactly the same questions in the same order, and interviewees may only choose from a given set of answers. The wording, number, and order of both questions and answers are carefully planned and tested in advance. The interview situation is standardized, too, and has the character of an interrogation. This allows for collecting comparable data on many people. However, in comparison to qualitative interviews, the amount of information collected on a single person is limited, and interviewees get squeezed into a schematic frame, which becomes a problem, for example, if researchers forget to ask an important question, if the interviewee’s life situation does not fit the given frame, or if interviewees have difficulties with verbal expression.

In contrast, qualitative interviews (e.g., narrative interviews, guided interviews, expert interviews, focused interviews) are more comparable to an everyday conversation: Interviewers may adapt the questions asked, their wording, and their order to the interview situation. They may add, delete, or reformulate questions, and interviewees do not select answers from a given set but choose how and in what length they want to answer. As qualitative interviews are typically much longer than surveys and answers are usually given as full sentences, qualitative interviews provide much more in-depth information, and data can be analyzed in different ways. For example, one can analyze not only the literal words said but also hidden meanings or the interaction between interviewer and interviewees. As more data are collected on a person and as data are less structured, each interview has to be analyzed separately using suitable data analysis techniques. While standardized data can be analyzed using statistics and therefore the total time needed for data analysis remains constant regardless of how many persons have been interviewed, there are no economies of scale in qualitative research—as data analysis takes almost the same time for each single interview, the total time needed for data analysis increases with the number of interviews. Therefore, not only data collection but also data analysis usually takes much longer for qualitative interviews than for surveys, limiting the number of interviews that can be collected in a single study.

All in all, there is a trade-off between a high number of cases (quantitative data), which makes it easier to generalize, and in-depth information (qualitative data), which helps avoid misinterpretation. A major aim of mixing is getting the best of both worlds.

The key to successfully mixing qualitative interviews with surveys is to carefully plan in advance when and how to mix data, and many of these ways of mixing have been part of the standard repertoire of single method research for a long time. For example, if one draws a random sample for a survey, one can draw a subsample of persons who are additionally interviewed qualitatively. Likewise, at the end of a qualitative interview, one can collect some sociodemographic and other standardized data in order to assess later if and how the qualitative sample is skewed and to which strata of the population the interview partners belong.
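The subsampling strategy just described (drawing a random qualitative subsample from a survey sample) can be sketched in a few lines of Python. The respondent IDs, sample sizes, and seed below are hypothetical and serve only to make the procedure concrete.

```python
import random

# Hypothetical survey sample: IDs of 500 survey respondents
survey_respondents = [f"respondent_{i:03d}" for i in range(1, 501)]

# Fixed seed so the subsample can be reproduced and documented
rng = random.Random(42)

# Random subsample of persons to be additionally interviewed qualitatively
qualitative_subsample = rng.sample(survey_respondents, k=15)

print(len(qualitative_subsample))  # 15 interview partners
```

Fixing the seed is a practical research choice: it makes the random selection documentable and reproducible in the research report.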

In an exploratory mixed design, researchers can use the qualitative interviews for formulating survey questions and/or they can use the survey for generalizing results from the qualitative interviews. For example, Jane B. Lemaire and Jean E. Wallace (2010) used qualitative interviews to identify Canadian physicians’ coping strategies with work-related stress (e.g., talking with coworkers, humor, physical exercise, spending time with family). These major themes were used to construct survey items. The survey allowed not only for generalization but also to correlate these coping strategies with other factors related to physicians’ wellness.

In an explanatory mixed design, researchers can ask respondents at the end of the survey if they would be willing to participate in a qualitative interview. This allows for using the full survey information for selecting the interview partners later and/or for specifically asking questions that have remained unanswered or unclear in the survey or seem especially interesting after the survey has been analyzed. For instance, in a study on cancer, Sharlene Hesse-Biber (2018) used the results of an online survey both for drawing a subsample of persons for semistructured telephone interviews and for developing the interview questions. Integrated results show gender differences in reasons for getting tested and in surgical decisions.
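The selection step in such an explanatory design can be sketched as follows: survey results are filtered to respondents willing to be interviewed, and contrasting cases are then picked for follow-up interviews. All respondent IDs, the stress_score variable, and the scores are hypothetical.

```python
# Sketch of purposive case selection in an explanatory mixed design:
# hypothetical survey results are used to choose contrasting cases
# (highest and lowest scores) for follow-up qualitative interviews.

survey_results = {
    "respondent_001": {"stress_score": 9, "willing_to_interview": True},
    "respondent_002": {"stress_score": 2, "willing_to_interview": True},
    "respondent_003": {"stress_score": 8, "willing_to_interview": False},
    "respondent_004": {"stress_score": 1, "willing_to_interview": True},
}

# Only respondents who agreed to a follow-up interview are eligible
willing = {rid: r for rid, r in survey_results.items()
           if r["willing_to_interview"]}

# Sort eligible respondents by their survey score, ascending
by_score = sorted(willing, key=lambda rid: willing[rid]["stress_score"])

# Contrast the lowest- and highest-scoring cases
low_cases, high_cases = by_score[:1], by_score[-1:]
# low_cases == ["respondent_004"], high_cases == ["respondent_001"]
```

Selecting extreme or contrasting cases is only one possible criterion; the same filtering logic could target unclear answer patterns or any other survey variable of interest.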

Mixing Other Data Types

In MMR, it is often implicitly assumed that mixing qualitative and quantitative data means mixing qualitative interviews with surveys. In consequence, MMR has largely overlooked that there are numerous other data types and that there are usually good reasons for using these alternative data types, for example, because interviews are not possible or not suitable for answering the research question (Baur, 2011; Baur & Hering, 2017).

When thinking about these other data types, two dimensions are important for both data collection and data analysis, and like qualitative interviews and surveys, all these data types can be either qualitative or quantitative:

  1. Verbal Versus Visual Data. Qualitative interviews and surveys both provide verbal information. Verbal data can reveal part of a person’s “inner world,” such as why they choose to act in a specific way, what meaning and motives they attribute to their actions, their opinions, attitudes, willingness to act, or plans for the future. Interviews can also help obtain information that is important to people’s lives but does not necessarily manifest itself in social action (e.g., religious affiliation). They can give access to past events, as long as people remember them, including unrepeatable and once-in-history events (e.g., 9/11, the 2007–2009 financial crisis), to rare forms of interaction (e.g., corporal punishment, voting behavior), or to situations where researchers are likely to be denied access (e.g., intimate interactions, business meetings, interactions between homeless people). In contrast to interviews, many data provide either visual-only information (e.g., photographs, artwork, technical artefacts, buildings) or combined visual and verbal information (e.g., observational data, ethnography, videos, films). Visual data cover very different kinds of information than verbal data. For example, people’s clothing, body image (e.g., skin color, weight, height), body language, facial expressions, gestures, way of moving, or other physical reactions (e.g., halting breath, blushing, paleness) can reveal their social status, rank and prestige, or aspects of personality that could not be asked about directly. Researchers can also learn about people’s relation to the “outer world”—that is, how they interact with the physical and social world, including spontaneous behavior, social interactions over time, and behavior they are either unaware of or do not want to talk about.
Visual data can also help to collect information when researchers do not speak the research subjects’ language or when the research subjects cannot express themselves verbally at all or can do so only to a limited extent (e.g., very small children, persons with specific disabilities, or chronic diseases such as dementia or muteness; Baur & Hering, 2017).
  2. Research-Elicited Versus Process-Produced Data. Qualitative interviews, surveys, and ethnography are examples of so-called research-elicited data—that is, data that are produced by researchers solely for research purposes. This means that researchers (at least in theory) can control every step of the research process and therefore also the types of errors that occur. In contrast, process-produced data (also termed “process-generated data”) are side products of social processes. Classic examples of process-produced qualitative data are art, literature, newspaper articles, biographical documents, technical artefacts, and architecture. Classic examples of process-produced quantitative data (also termed “mass data” or “big data”) are public administrational data, which are produced by governments, public administrations, companies, and other organizations in order to conduct their everyday business. Digital data, too, are typically side products of social processes and therefore count as process-produced data. Many process-produced data are mixed from the start, for example, most archival data and digital data—a typical example being social media data, which often contain both quantitative data (e.g., log files and users’ sociodemographic data) and qualitative data (e.g., pictures and verbal communication between users). In contrast to research-elicited data, process-produced data have the advantage of being nonreactive; and for many research questions (e.g., in economic sociology or in historical sociology), they are the only data type available. However, as they are not produced for research purposes, researchers cannot control the research process or the types of errors that may occur during data collection—researchers can only assess how the data are biased before analyzing them (Baur, 2011).

As MMR has mostly focused on mixing qualitative interviews with surveys (i.e., on mixing verbal research-elicited data), the debate on how to mix other types of data is still in its early stages. The scarce existing research on mixing other types of data suggests that it is rather unproblematic to mix qualitative and quantitative data of the same data type (e.g., structured observation with ethnography, administrational documents with administrational mass data): Because the data are structurally similar, qualitative data can be quantified or quantitative data can be qualified. For these cases, it seems that the rules developed for mixing interviews with surveys can mostly be transferred to the other data types.

Problems start when the qualitative and quantitative components of the study are also of different data types, as is the case when, for example, ethnography (qualitative visual data) and surveys (quantitative verbal data) are mixed: These data provide complementary information. If the information provided by the data converges, then validity is increased. However, if results diverge, then there is no way of deciding, on the basis of the data alone, which results are better. Instead, researchers have to use social theory to decide which substudy is the core component and gets precedence when deciding on the overall result (Baur & Hering, 2017).

Mixed Methods Sampling

Sampling in general is a very complicated issue because there are not only different sampling logics but sampling is also linked both to data collection (in the sense that not all ways of sampling are possible or make sense for all ways of data collection) and to the way one can make inferences, generalize results, and/or transfer them to other contexts. The fact that sampling designs justify whether, how, and which generalizations researchers make from their data is called “interpretive consistency” (Onwuegbuzie & Collins, 2017). In addition, there is often a trade-off between generalizability and validity. While this is true even for single method research, sampling becomes all the more complicated in MMR. At the same time, the literature on mixed methods sampling is scant (Onwuegbuzie & Collins, 2017). To understand the unresolved issues in mixed methods sampling, it is necessary to grasp the different logics of single method sampling and how they are linked to data collection and generalizability, which is why they are reviewed here before discussing the current state of research on mixed methods sampling.

Single Methods Sampling and Ways of Generalizing

Ideally, researchers would analyze the full population (also known as the whole field). However, for most social science research questions, the social phenomenon of interest is extensive in time, space, and number of cases, so that it is simply impossible to cover all past, current, and future incidents of the phenomenon, especially as all research is limited by time and resources. Therefore, even when researchers work in teams, they can never investigate social reality in its entirety, but only parts of it, which is why usually a sample of the population or field is drawn.

In this context, most (single and mixed methods) methodologists agree that convenience samples should be avoided, if any other options are available: Because researchers do not follow a specific sampling rationale, but simply sample cases that are easy to contact or to reach, it is usually impossible to generalize results from these samples. Examples are snowball sampling (for qualitative research) as well as most big data, digital data, public administrational data, and other process-generated data (for quantitative research).

Instead, social scientists have developed many different sampling strategies which differ in their rationales for sampling, their strategies for generalization, and the number of cases needed for a “good” sample—that is, a sample that actually allows for inferences according to the respective sampling strategy. For the reflection on open issues concerning mixed methods sampling, the differentiation between random sampling, purposeful sampling, and theoretical sampling is important.

Random sampling is most commonly used in quantitative research (e.g., survey research, experimental research), as there is an elective affinity between random sampling and quantitative data collection (because the latter provides numerical data anyway, making it easier to process the data statistically) and between random sampling and quantitative data analysis (because this sampling procedure uses probability theory and inferential statistics both for drawing the sample and for generalization, which makes it easy to combine with descriptive statistical data analysis).

When drawing a random sample, researchers first have to define a population (or field) by demarcating it substantially, spatially, and temporally. Then, using probability theory, researchers calculate the ideal number of cases and randomly select them from the population. Experimental research is a variation of this sampling strategy insofar as, instead of the cases being randomly selected from the population, cases that are part of the population are randomly assigned to the experimental groups.

After having selected the cases randomly, scholars continue by collecting data on the selected cases and then analyzing them, typically using descriptive statistics. Finally, the results of descriptive statistical analysis are generalized to the population with the help of inferential statistics (also called “inductive statistics”). Researchers express the level of security they place in their inference by significance levels (in statistical tests) or confidence levels (in confidence intervals). There are several important features of random sampling which are rarely discussed in single method research but are important in the context of mixed methods sampling:

  1. Probability Theory Versus Social Theory. Statistical inference is by no means “objective” in the sense that its results are not subject to interpretation. Rather, there are various statistical theories on what “probability” means and how the results of inductive statistics themselves are to be interpreted (Ziegler, 2017). Using inferential statistics simply means that researchers trust probability theory more than social theory for developing a sampling rationale—but they still use theory. The logic of probability theory is increasingly threatened by rising nonresponse rates and the increasing difficulty of defining populations: In the past, a “population” was usually the “people living in a nation state.” Due to transnationalization and digitalization, it is becoming increasingly unclear what “populations” are or could be (Baur et al., 2017).
  2. Necessity of Large Sample Size. Random sampling needs a relatively large sample size—typically a minimum sample size of n ≥ 30—because otherwise the random errors are too high for drawing any sensible conclusions from the data. This makes random sampling unfeasible for many strands of qualitative research (e.g., social science hermeneutics, biographical research), simply because data collection and data analysis would exceed the project’s resources. A sample size of n ≥ 30 is also often not possible because there are simply not enough cases in social reality, such as with rare events (e.g., the Covid-19 crisis) or macrolevel phenomena (e.g., the European Union only has 27 member states).
  3. Quantity Versus Quality. Collecting data on a large number of cases usually comes at the price that less information is gathered per case and that researchers have neither the time nor the data for in-depth analysis (Baur et al., 2017). Such in-depth analyses are sorely needed if the nature of the research phenomenon is yet unknown, in cross-cultural research, or during times of rapid social change (Kelle, 2017).
  4. Sampling Populations and Causality. Many researchers, especially quantitative researchers, aim at causal analysis. Causal analysis is intrinsically linked with sampling via the demarcation of the population because it is a prerequisite for causal analysis that all cases come from the same population and that both the population’s and the cases’ properties are comparable, stable, and unchangeable. As Charles C. Ragin (2000) underlined, all variables used to define the population are automatically kept constant and can therefore no longer be used for causal analysis due to lack of variation. The more variables are used for defining the population, the less the results can be transferred to contexts other than the study population. This effect is especially large in experimental research: In experiments, a large number of variables are usually kept constant in order to control them experimentally and thus to clarify the causal relationships as precisely as possible by maximizing internal validity. However, this decreases external validity (i.e., generalizability).
  5. Linearity. Random sampling implies a linear research process, in which sampling must always take place before data collection and analysis, and once the random sample has been drawn, the sampling strategy may not be changed (e.g., by replacing selected cases with other cases) because otherwise inferential statistics do not work. This becomes a problem if it turns out that researchers’ hypotheses and measuring instruments fail (wholly or partly) to capture the relevant aspects of the social phenomenon in question or if the standardized instruments need adjustment (Baur et al., 2017).
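Point 2 in the list above—the need for a relatively large sample—can be illustrated with a minimal numerical sketch: under simple random sampling, the standard error of a sample mean shrinks only with the square root of the sample size, so small samples leave random errors that are too large for sensible inferences. The figures below are purely illustrative and assume a known population standard deviation.

```python
import math

def standard_error(sigma, n):
    """Standard error of the sample mean under simple random sampling."""
    return sigma / math.sqrt(n)

# Illustrative only: a variable with population standard deviation 1.
for n in (10, 30, 100, 1000):
    print(f"n = {n:4d}: standard error = {standard_error(1.0, n):.3f}")
```

Note that quadrupling the sample size only halves the standard error, which is one reason why random sampling quickly becomes resource-intensive.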

To address the first three problems, social science methodology has developed purposeful sampling, which is also suitable for small numbers of cases and therefore allows for more in-depth analysis. There are many different ways of sampling purposefully, which can be subsumed under several broad categories (Onwuegbuzie & Collins, 2017): Scholars can select a single case (see Yin, 2018, for an overview of criteria for selecting single cases). When sampling several cases (see Creswell & Poth, 2016, for an overview), researchers can either follow John Stuart Mill’s logic of a “most similar cases design” (i.e., focus on cases of the same group in order to assess their typical characteristics) or a “most different cases design” (i.e., maximize the variation between the cases in order to allow for comparison), or they can combine the two logics (which allows for analytically switching between them). Quota sampling is a version of purposeful sampling which combines most similar and most different case designs.

Like random sampling, purposeful sampling uses theory as a prerequisite for sampling. However, purposeful sampling uses social theory instead of probability theory and therefore strategies of generalization are not based on statistical reasoning but on other rationales (Baur et al., 2018). There are several different rationales for generalization using a purposeful sample. Amongst them are naturalistic generalizations (which make use of thick description), moderatum generalizations, analytic generalizations, and case-to-case transfer (Onwuegbuzie & Collins, 2017). The latter strategies use a comparative analysis strategy by either using the variation created between the cases during sampling or by systematically conducting internal and external case comparisons in the tradition of case studies (Yin, 2018).

The aforementioned sampling strategies are still linear, which results in the same problem random sampling has: If new issues arise during research or if researchers notice that they asked the wrong questions, it is hard to make adjustments. Therefore, grounded theory has suggested sampling iteratively (Corbin & Strauss, 2015): In theoretical sampling, sampling, data collection, and data analysis are alternated. This allows for continuously adjusting the research design to the question and thus maximizing theoretical output while minimizing the amount of field work needed for this output. Theoretical sampling also allows for handling the problem that the definition of the cases and population limits the possibility of causal analysis: How the population should be demarcated and what effects this has can be empirically analyzed as part of the iterative research strategy.

While purposeful sampling and theoretical sampling allow for tackling many of the weaknesses of random sampling, they, too, have their limits: While in-depth analysis of a small number of cases increases interpretative accuracy and, if combined with an appropriate sampling strategy, allows for the generalization of theories, researchers still do not know if they have analyzed a common or rare social phenomenon. Case selection may have been one-sided or distorted, and there is a danger of neglecting counterevidence to the researchers’ favorite hypotheses. For these kinds of inferences, either an analysis of the full population or random samples are needed (Baur et al., 2017).

Mixed Methods Sampling

Since interpretative accuracy and generalizability are equally important, there is a fundamental trade-off between these two strategies. MMR promises to overcome these limitations of single method research by maximizing both in-depth analysis and generalizability (Baur et al., 2017). To succeed in this, the mixed methods sampling strategy is vital. Therefore, Charles Teddlie and Fen Yu (2007) developed an eight-point guideline for mixed methods sampling and identified four prototypes of mixed methods sampling: basic mixed methods sampling, sequential sampling, parallel sampling, and multilevel mixed methods sampling. Fuzzy set analysis, which is usually discussed as a sampling strategy of QCA (Ragin, 2000) but is in fact a mixed methods sampling strategy, can be added.

Mixed methods sampling is not a single, isolated research step but rather runs through the entire research process and must be reflected during all research phases—that is, conceptualization, planning, implementation, and dissemination. During all these stages, researchers need to consider six aspects concerning sampling and case selection: emtic orientation, probabilistic orientation, abductive orientation, intrinsic versus instrumental orientation, particularistic versus universalistic orientation, and epistemological precision (Onwuegbuzie & Collins, 2017).

However, a closer look at the current practice of mixed methods sampling reveals that typically random samples are combined with purposeful samples. These sampling procedures allow for combining generalization based on probability theory with generalization based on social theory and for analyzing both large samples and smaller subsamples, thus balancing the trade-off between quality and quantity and allowing for both generalizability and in-depth analysis. However, both in research practice and in methodological discourse, the other issues, namely how to handle the trade-off between defining the population and causal analysis and how to combine linear with iterative sampling logics, are rarely addressed. This in turn typically enforces linear-quantitative logics on some qualitative research traditions, thus not only preventing these qualitative approaches from unfolding their full potential when used in MMR but also hindering MMR itself from unfolding its full potential. A debate on how to tackle these issues has started in recent years.

Mixed Methods Data Analysis

For a long time, mixed methods data analysis was conducted in an intuitive (common sense) way, and only since 2005 has a systematic debate on mixed methods data analysis developed, covering not only how to integrate data but also which software packages to use.

Separate Versus Integrative Data Analysis

In research practice, researchers often analyze data separately for each research strand (O’Cathain et al., 2008)—that is, quantitative data are analyzed statistically, and qualitative data are analyzed with qualitative data analysis procedures such as biographical methods, social-science hermeneutics, grounded theory, or qualitative content analysis. Within each research strand, the rules for data analysis for this specific single method apply. Results are only compared with each other when writing up the report. This procedure has the advantage that researchers can practice a division of labor in a team and therefore do not need to be equally competent in both methods.

However, in order to fully use the analytical potential of a mixed methods study, scholars need to conduct an “integrative mixed methods data analysis strategy,” which analyzes not only the qualitative and quantitative strands but also their interaction (O’Cathain et al., 2008). Such approaches permit complementary views, increase the credibility of results, and can reveal aspects in the data that cannot be analyzed by using single method techniques only (Vogl, 2019).

At the current state of debate, MMR provides several strategies for integrative data analysis (Bazeley, 2018; Kuckartz & Rädiker, 2019). The choice of a specific analysis strategy depends on three factors: the purpose of the mixed methods study, the sampling strategy, and the research design. Concerning the research design, the points of integration are especially important, resulting in three major strategies of data integration: result-based, data-based, and sequence-oriented integration (Kuckartz & Rädiker, 2019).

Result-Based Integration

Result-based integration (also called “data linkage”) is typical for parallel designs but can also be used for sequential designs. It starts off with separately analyzing the qualitative and quantitative data and then linking the results by association, comparison, or relational analysis in order to obtain a complementary account of the research question or reveal differences across subgroups. While the original data remain separate, in contrast to separate analysis, results are not only compared verbally in the final report (Vogl, 2019). Instead, the written preliminary reports, graphs, or tables that exist for the qualitative and quantitative parts of the study are loaded into a QDA software package (e.g., MAXQDA, NVivo), then analyzed as if they were data themselves and visually represented in so-called joint displays. Two main integration strategies and corresponding joint displays are particularly suitable for result-based integration (Kuckartz & Rädiker, 2019):

  1. Hyperlinks. Researchers link the parts of the single method reports that correspond in content or are otherwise of interest by so-called hyperlinks. Hyperlinks allow for switching from one part of the single method reports to another by clicking on the hyperlink. They are especially helpful when writing an integrative final report because results of both parts of the study can be discussed together and also in a contrasting manner.
  2. Side-by-Side-Display (also called “interaction profile” or “interaction matrix”). Instead of simply linking the different single methods results, researchers thematically code them in accordance with the research questions. In doing so, differences between the data sets are systematically explored during data analysis and represented in a so-called side-by-side-display. Researchers conduct a table-based comparison of the results by listing the themes in the rows and results in the columns (one column for each subproject) of a matrix. In addition, scholars can insert a third column into the matrix in which they evaluate the qualitative and quantitative results as being convergent or divergent. This procedure is especially useful when the mixed methods study aims at triangulation. Alternatively or additionally, this matrix can be converted into plain text.
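The side-by-side display described above can be sketched as a simple matrix; the themes, results, and assessments below are hypothetical and serve only to show the layout (in practice, QDA software such as MAXQDA generates such displays):

```python
# Hypothetical side-by-side display: themes in rows, one column per
# subproject, plus a column assessing convergence of the results.
rows = [
    ("trust in staff",
     "interviewees stress the importance of personal contact",
     "78% report being satisfied with staff",
     "convergent"),
    ("waiting times",
     "long waits are described as a major burden",
     "mean wait of 12 minutes is rated acceptable",
     "divergent"),
]

header = ("Theme", "Qualitative result", "Quantitative result", "Assessment")
for line in (header, *rows):
    print(" | ".join(str(cell).ljust(40) for cell in line))
```

The divergent row is precisely where, as discussed earlier, researchers must fall back on social theory to decide which substudy gets precedence.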

Data-Based Integration

Data-based integration strategies (also termed “data consolidation,” “transformative analysis,” or “conversion”) are typical for transfer designs. They start integration not after but before single method data analysis by transforming one type of data into the other type of data and creating a single data set, which is then used for data analysis. Therefore, researchers have data on the same cases in both research components. The overall purpose is to reduce the multidimensionality of data, compare qualitative and quantitative data, create consolidated codes and variables, and allow for joint displays of results (Vogl, 2019). Conversion can go either way (Kuckartz & Rädiker, 2019):

  1. Quantitizing (Vogl, 2017) is a common practice in content analysis and describes the transformation of qualitative data into quantitative data, for example, verbal statements from qualitative interviews are transformed into numerical variables and then integrated into the quantitative data set. Afterwards, the transformed numerical data are analyzed statistically.
  2. Qualitizing (Onwuegbuzie & Leech, 2019) describes the transformation of quantitative data into visual or verbal statements. A basic procedure of qualitizing is labelling variables and values. A more complex form is the description of factors identified by factor analysis or clusters generated by cluster analysis. In doing so, results may be presented in a verbal manner as well as written up in a report. Likewise, researchers may explore the extreme cases of a quantitative study qualitatively, use quantitative groups for arranging qualitative topics, or use qualitative topics or typologies for arranging statistical data.
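Quantitizing (point 1 above) can be sketched in a few lines; the cases and codes are hypothetical: verbal statements that were coded during qualitative analysis are converted into a dichotomous variable that can be merged with the quantitative data set.

```python
# Hypothetical interview codes assigned during qualitative analysis.
interview_codes = {
    "case01": ["distrust", "cost concerns"],
    "case02": ["trust"],
    "case03": ["distrust"],
}

# Quantitizing: 1 if the code "distrust" was assigned to the case, else 0.
distrust_dummy = {case: int("distrust" in codes)
                  for case, codes in interview_codes.items()}
print(distrust_dummy)  # {'case01': 1, 'case02': 0, 'case03': 1}
```

The resulting variable can then be analyzed statistically alongside the survey variables; qualitizing works in the opposite direction, for example by verbally labelling the groups produced by a cluster analysis.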

In addition to the analysis methods already mentioned, consolidated data allow for so-called advanced crossover mixed methods analyses (Onwuegbuzie & Hitchcock, 2015; Hitchcock & Onwuegbuzie, 2020).

Sequence-Oriented Integration

In contrast to data-based integration, sequence-oriented integration is only possible for sequential MMR designs. It resembles data-based integration but goes a step further as it links different points of integration within the study, namely data collection and data analysis during the transition between the two parts of the study. As the first substudy influences or determines the conception and data collection of the second substudy, data can be matched more thoroughly because data of the first phase can affect the overall results (e.g., by influencing the instruments or sampling strategies of the second phase). Also, data of a supplementary component can be embedded into the core component, which is practiced, for example, in randomized control trials.

Assessing the Quality of MMR

Assessing the quality of a study—so-called validation—involves evaluating the rigor of the study’s methodological procedures and ensuring that results are accurate and robust. In MMR, validation is a complex endeavor: Firstly, each component of a mixed methods study has to meet the quality criteria of the respective research tradition. Secondly, the mixed methods study itself has to be validated, which requires additional criteria (Plano Clark & Ivankova, 2016).

Similar to data analysis, validation has moved into the focus of the mixed methods debate since 2005 because researchers, journal editors, and funding agencies increasingly demand both the use of quality criteria and that these criteria be made explicit in publications (O’Cathain et al., 2008). The following points are at the center of the current discussion (Fàbregues & Molina-Azorín, 2017; Plano Clark & Ivankova, 2016; Heyvaert et al., 2013):

  1. Terminology. Should MMR use existing terms originating from the qualitative or quantitative strands or should new terms unique to MMR be created?
  2. Desirability of quality standards. Do quality standards restrict more than they help, or are they indispensable?
  3. Type of quality criteria. Do the quality criteria for single method research suffice, or are additional quality criteria for assessing the quality of special features of MMR needed?
  4. Consensus. How important is it that the mixed methods community as a whole agrees on quality criteria and what these common criteria would look like (e.g., core criteria)?

Using Mixed Methods for Overcoming the Quality Deficits of Single Method Research

MMR can reveal weaknesses of single method approaches and present ways to overcome them, thus enhancing the quality of inferences or research instruments. The purposes of MMR (e.g., complementarity, development, expansion, initiation, triangulation) and integration strategies (result-based, data-based, and sequential integration) indicate how one research strand informs the respective other.

Assessing the Quality of MMR Itself

While MMR can be used to increase the quality of single method research, this does not answer the question of how to assess the quality of MMR itself. At the current state of the debate, various quality frameworks exist which differ in the criteria they suggest to assess mixed methods studies’ quality (Heyvaert et al., 2013).

For example, the legitimation model concentrates on assessing the way in which researchers draw meta-inferences (i.e., inferences drawn when combining both research strands) and identifies nine criteria: sample integration legitimation, inside–outside legitimation, weakness minimization legitimation, sequential legitimation, conversion legitimation, paradigmatic mixing legitimation, commensurability legitimation, multiple validities legitimation, and political legitimation (Onwuegbuzie & Johnson, 2006).

The integrative framework of inference quality suggests four criteria of design quality (design suitability, design fidelity, within-design consistency, analytic adequacy) and examines interpretive rigor—that is, the process of making inferences by applying six criteria: interpretive consistency, theoretical consistency, interpretive agreement, interpretive distinctiveness, integrative efficacy, and interpretive correspondence (Teddlie & Tashakkori, 2009).

Final Remarks

As discussed in this entry, MMR is rooted in the 19th century, has become increasingly differentiated since the 1960s, and has been an established strand of social science methodology since the 1990s. While the specific benefits of MMR (in compensating and overcoming the respective shortcomings of qualitative and quantitative approaches by combining them as well as tailoring research designs to specific research subjects) are undisputed, the methodological debates on qualitative, quantitative, and mixed methods proceed independently and remain strangely unconnected. One reason might be that the epistemological debate on the commonalities and differences between these research strands has been characterized by oversimplifications and false assumptions and thus—despite being very old—is still in its early stages.

What can be said about MMR itself is that there are different good-practice ways of mixing methods. To attain the full potential of mixed methods, researchers need to consider systematically the methodological specifics of mixed methods along the whole research process and reflect on their decisions in relation to the overall research strategy. There are several key steps in the research process:

  1. Researchers have to choose an appropriate research design and in this context decide upon the purposes of mixed methods.
  2. When deciding which data to mix in which way, it is essential not only whether data are qualitative or quantitative but also whether they are verbal or visual as well as research-elicited or process-produced. Researchers have to consider how data are compatible and can be combined along all these dimensions.
  3. Mixed methods sampling procedures allow for resolving several trade-offs between qualitative and quantitative research, namely whether generalization should be based on probability theory or social theory and whether a large sample allowing for better generalizations or a smaller sample allowing for in-depth analysis is preferable. However, it is yet unresolved how to reconcile linear with iterative sampling strategies and how to properly define populations, contexts, and fields.
  4. Concerning data analysis, MMR has shown that qualitative and quantitative analysis strategies not only can complement each other but also can provide new insights, if they are integrated.
  5. Concerning the quality of MMR, MMR has to meet the criteria of single methods research and also needs specific quality criteria for MMR itself—what these should look like is still being intensively discussed.

The debate on mixed methods methodology is much more advanced than many single method researchers are aware of. Still, much needs to be done in the next years, especially in the fields of mixed methods epistemology, sampling, data collection, and data analysis. In addition, the discussions on the three methodological paradigms are in need of being properly integrated instead of continuing the long tradition of ignoring each other.

Further Reading

Baur, N., Kelle, U., & Kuckartz, U. (Eds.). (2017). Mixed methods. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(2). Springer.

Bazeley, P. (2018). Integrating analyses in mixed methods research. SAGE.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research. SAGE.

Hesse-Biber, S. N., & Johnson, R. B. (Eds.). (2015). The Oxford handbook of multimethod and mixed methods research inquiry. Oxford University Press.

Plano Clark, V. L., & Ivankova, N. V. (2016). Mixed methods research. SAGE.

Tashakkori, A., & Teddlie, C. (Eds.). (2010). SAGE handbook of mixed methods in social & behavioral research. SAGE. (Original work published 2003)

References

Baur, N. (2011). Mixing process-generated data in market sociology. Quality & Quantity, 45(6), 1233–1251. https://doi.org/10.1007/s11135-009-9288-x

Baur, N., & Hering, L. (2017). Die Kombination von ethnografischer Beobachtung und standardisierter Befragung. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 387–414. https://doi.org/10.1007/s11577-017-0468-8

Baur, N., Kelle, U., & Kuckartz, U. (2017). Mixed Methods—Stand der Debatte und aktuelle Problemlagen. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 1–37. https://doi.org/10.1007/s11577-017-0450-5

Baur, N., Knoblauch, H., Akremi, L., & Traue, B. (2018). Qualitativ—quantitativ—interpretativ. In L. Akremi, N. Baur, H. Knoblauch, & B. Traue (Eds.), Handbuch Interpretativ forschen (pp. 246–284). Beltz Juventa.

Bazeley, P. (2018). Integrating analyses in mixed methods research. SAGE.

Bryman, A. (2006). Integrating quantitative and qualitative research: How is it done? Qualitative Research, 6(1), 97–113. https://doi.org/10.1177/1468794106058877

Corbin, J., & Strauss, A. (2015). Basics of qualitative research. SAGE.

Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research. SAGE.

Creswell, J. W., & Poth, C. N. (2016). Qualitative inquiry and research design. SAGE.

Engels, F. (1887). The condition of the working class in England (Original work published 1845). https://www.marxists.org/archive/marx/works/1845/condition-working-class/index.htm

Fàbregues, S., & Molina-Azorín, J. F. (2017). Addressing quality in mixed methods research. Quality & Quantity, 51(6), 28472863. https://doi.org/10.1007/s11135-016-0449-4

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11(3), 255. https://doi.org/10.2307/1163620

Hesse-Biber, S. (2018). Gender differences in psychosocial and medical outcomes stemming from testing positive for the BRCA1/2 genetic mutation for breast cancer. Journal of Mixed Methods Research, 12(3), 280–304. https://doi.org/10.1177/1558689816655257

Heyvaert, M., Hannes, K., Maes, B., & Onghena, P. (2013). Critical appraisal of mixed methods studies. Journal of Mixed Methods Research, 7(4), 302–327. https://doi.org/10.1177/1558689813479449

Hitchcock, J. H., & Onwuegbuzie, A. J. (2020). Developing mixed methods crossover analysis approaches. Journal of Mixed Methods Research, 14(1), 63–83. https://doi.org/10.1177/1558689819841782

Jahoda, M., Lazarsfeld, P. F., & Zeisel, H. (1971). Marienthal. Aldine Atherton. (Original work published 1933)

Johnson, R. B., Onwuegbuzie, A. J., & Turner, L. A. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133. https://doi.org/10.1177/1558689806298224

Johnson, R. B., de Waal, C., Stefurak, T., & Hildebrand, D. L. (2017). Understanding the philosophical positions of classical and neopragmatists for mixed methods research. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 63–86. https://doi.org/10.1007/s11577-017-0452-3

Kelle, U. (2017). Die Integration qualitativer und quantitativer Forschung—theoretische Grundlagen von “Mixed Methods”. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 39–61. https://doi.org/10.1007/s11577-017-0451-4

Kuckartz, U. (2014). Mixed Methods. Springer. https://doi.org/10.1007/978-3-531-93267-5

Kuckartz, U., & Rädiker, S. (2019). Analyzing qualitative data with MAXQDA. Springer. https://doi.org/10.1007/978-3-030-15671-8

Lemaire, J. B., & Wallace, J. E. (2010). Not all coping strategies are created equal. BMC Health Services Research, 10, 208. https://doi.org/10.1186/1472-6963-10-208

Lynd, R. S., & Lynd, H. M. (1929/1957). Middletown. Harcourt.

Morse, J. M. (2006). Approaches to qualitative-quantitative methodological triangulation. In A. Bryman (Ed.), Mixed methods (Vol. 2, pp. 317–324). SAGE. (Original work published 1991)

O’Cathain, A., Murphy, E., & Nicholl, J. (2008). The quality of mixed methods studies in health services research. Journal of Health Services Research & Policy, 13(2), 92–98. https://doi.org/10.1258/jhsrp.2007.007074

Onwuegbuzie, A. J., & Collins, K. M. T. (2017). The role of sampling in mixed methods research. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 133–156. https://doi.org/10.1007/s11577-017-0455-0

Onwuegbuzie, A. J., & Hitchcock, J. H. (2015). Advanced mixed analysis approaches. In S. N. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry (pp. 275–295). Oxford University Press.

Onwuegbuzie, A. J., & Johnson, R. B. (2006). The validity issues in mixed research. Research in the Schools, 13, 48–63.

Onwuegbuzie, A. J., & Leech, N. L. (2019). On qualitizing. International Journal of Multiple Research Approaches, 11(2), 98–131. https://doi.org/10.29034/ijmra.v11n2editorial2

Plano Clark, V. L., & Ivankova, N. V. (2016). Mixed methods research. SAGE.

Ragin, C. C. (2000). Fuzzy-set social science. University of Chicago Press.

Schoonenboom, J., & Johnson, R. B. (2017). How to construct a mixed methods research design. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 107–131. https://doi.org/10.1007/s11577-017-0454-1

Tashakkori, A., & Teddlie, C. (Eds.). (2010). SAGE handbook of mixed methods in social & behavioral research. SAGE. (Original work published 2003)

Teddlie, C., & Tashakkori, A. (2009). Foundations of mixed methods research. SAGE.

Teddlie, C., & Yu, F. (2007). Mixed methods sampling. Journal of Mixed Methods Research, 1(1), 77–100. https://doi.org/10.1177/1558689806292430

Vogl, S. (2017). Quantifizierung. Kölner Zeitschrift für Soziologie und Sozialpsychologie, 69(S2), 287–312. https://doi.org/10.1007/s11577-017-0461-2

Vogl, S. (2019). Integrating and consolidating data in mixed methods data analysis. Journal of Mixed Methods Research, 13(4), 536–554. https://doi.org/10.1177/1558689818796364

Yin, R. K. (2018). Case study research and applications. SAGE.

Ziegler, M. (2017). Induktive Statistik und soziologische Theorie. [Inductive statistics and sociological theory]. Beltz Juventa.
