Mixed Methods Design
Mixed methods is a research orientation that possesses its own purposes and techniques. It integrates techniques from the quantitative and qualitative paradigms to tackle research questions that are best addressed by mixing these two traditional approaches. As long ago as 40 years, scholars noted that quantitative and qualitative research were not antithetical and that every research process, through practical necessity, should include aspects of both quantitative and qualitative methodology. To achieve more useful and meaningful results in any study, it is essential to consider the actual needs and purposes of the research problem when determining the methods to be implemented. The literature on mixed methods design is vast, with contributions from scholars in myriad disciplines across the social sciences, and this entry is grounded in their work. The entry provides a historical overview of mixed methods as a paradigm for research, establishes differences between quantitative and qualitative designs, shows how qualitative and quantitative methods can be integrated to address different types of research questions, and illustrates some implications of using mixed methods. Though still new as an approach to research, mixed methods design is expected soon to dominate the social and behavioral sciences.
The objective of social science research is to understand the complexity of human behavior and experience. The task of the researcher, whose role is to describe and explain this complexity, is limited by his or her methodological repertoire. As tradition shows, different methods often are best applied to different kinds of research. Having the opportunity to apply various methods to a single research question can broaden the dimensions and scope of that research and perhaps lead to a more precise and holistic perspective of human behavior and experience. Research is not knowledge itself, but a process in which knowledge is constructed through step-by-step data gathering.
Data are gathered most typically through two distinct classical approaches—qualitative and quantitative. The use of both these approaches for a single study, although sometimes controversial, is becoming more widespread in social science. Methods are really “design” components that include the following: (a) the relationship between the researcher and research “subjects,” (b) details of the experimental environment (place, time, etc.), (c) sampling and data collection methods, (d) data analysis strategies, and (e) knowledge dissemination. The design of a study thus leads to the choice of method strategy. The framework for a study, then, depends on the phenomenon being studied, with the participants and relevant theories informing the research design. Most study designs today need to include both quantitative and qualitative methods for gathering effective data and can thereby incorporate a more expansive set of assumptions and a broader worldview.
Mixing methods (or multiple-methods design) is generally acknowledged as more pertinent to modern research than using a single approach. Used alone, quantitative and qualitative methods may rely more heavily on single data collection methods: A quantitative study may rely on surveys for collecting data, for example, whereas a qualitative study may rely on observations or open-ended questions. However, each of these approaches may also use multiple data collection methods. Mixed methods design “triangulates” the two types of methods. When they are used within a single research study, different types of data are combined to answer the research question—a defining feature of mixed methods. Such combination is already common in many major designs; in the social sciences, for example, interviews and participant observation form a large part of research and are often combined with other data (e.g., biological markers).
Even though the integration of these two research models is often considered fairly novel (emerging significantly in the 1960s), the practice has a long history. Researchers have often combined the methods, if only for particular portions of their investigations. Mixed methods research was in fact more common in earlier periods, when methods were less specialized and compartmentalized and there was less orthodoxy in method selection; researchers observed and cross-tabulated, recognizing that each methodology alone could be inadequate. Synthesis of the two classic approaches to data gathering and interpretation does not necessarily mean that they are wholly combined or made uniform. Often they need to be employed separately within a single research design so as not to corrupt either process.
Several factors are important to consider when using mixed methods. Mixed methods researchers agree that there are resonances between the two paradigms that encourage their joint use, yet the distinctions between the two methods cannot always be reconciled; indeed, this tension can produce more meaningful interactions and thus new results. The combination of qualitative and quantitative methods must be accomplished productively so that the integrity of each approach is not violated: Methodological congruence needs to be maintained so that data collection and analytical strategies remain consistent and are not jeopardized. The two seemingly antithetical research approaches can be productively combined in a pragmatic, interactive, and integrative design model; the two “classical” methods can complement each other and make a study more successful and resourceful by eliminating the distortion that comes from strict adherence to a single formal theory.
Qualitative and Quantitative Data
Qualitative and quantitative distinctions are grounded in two contrasting approaches to categorizing and explaining data. Different paradigms produce and use different types of data. Early studies distinguished the two methods according to the kind of data collected, whether textual or numerical. The classic qualitative approach includes study of real-life settings, focus on participants' context, inductive generation of theory, open-ended data collection, analytical strategies based on textual data, and use of narrative forms of analysis and presentation. Basically, the qualitative method refers to a research paradigm that addresses interpretation and socially constructed realities. The classic quantitative approach encompasses hypothesis formulation based on precedence, experiment, control groups and variables, comparative analysis, sampling, standardization of data collection, statistics, and the concept of causality. Quantitative design refers to a research paradigm that hypothesizes relationships between variables in an objective way.
Quantitative methods are related to deductivist approaches, positivism, data variance, and factual causation; qualitative methods are related to inductive approaches, constructivism, and textual information. In general, quantitative design relies on comparisons of measurements and frequencies across categories and on correlations between variables, whereas qualitative design concentrates on events within a context, relying on meaning and process. When the two are used together, data can be transformed: “Qualitized” data are data collected using quantitative methods that are converted into narratives and analyzed qualitatively, and “quantitized” data are data collected using qualitative methods that are converted into numerical codes and analyzed statistically. Many research problems are not linear. Purpose drives the research questions, but the course of the study may change as it progresses, possibly leading to different questions and the need to alter the method design. As in any rigorous research, mixed methods allows the research question and purpose to lead the design.
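To make the idea of quantitizing concrete, the brief Python sketch below converts hypothetical interview theme codes into 0/1 indicators and frequency counts that could then be analyzed statistically; the participant identifiers, theme labels, and indicator-matrix approach are illustrative assumptions, not a prescribed procedure.

```python
# A minimal, hypothetical sketch of "quantitizing": qualitative theme codes
# assigned to interview transcripts are converted into 0/1 indicators and
# frequency counts that can then be summarized statistically.
from collections import Counter

# Hypothetical theme codes assigned to each participant's interview.
interview_codes = {
    "P01": ["isolation", "cost_barrier"],
    "P02": ["family_support"],
    "P03": ["isolation", "family_support", "cost_barrier"],
    "P04": ["cost_barrier"],
}

# Quantitize: represent each participant as a row of 0/1 theme indicators.
themes = sorted({code for codes in interview_codes.values() for code in codes})
indicator_rows = {
    pid: [1 if theme in codes else 0 for theme in themes]
    for pid, codes in interview_codes.items()
}

# Simple statistical summary: how often each theme occurs across participants.
theme_counts = Counter(code for codes in interview_codes.values() for code in codes)

print("Themes:", themes)
for pid, row in indicator_rows.items():
    print(pid, row)
print("Theme frequencies:", dict(theme_counts))
```

Qualitizing works in the opposite direction, for example by grouping survey respondents into narrative profiles that are then interpreted qualitatively.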
Historical Overview
In the Handbook of Qualitative Research, Norman K. Denzin and Yvonna S. Lincoln identified four historical periods in social science research. Their classification shows an evolution from strict quantitative methodology, through a gradual implementation and acceptance of qualitative methods, to a merging of the two: (1) traditional (quantitative), 1900 to 1950; (2) modernist, 1950 to 1970; (3) ascendance of constructivism, 1970 to 1990; and (4) pragmatism and the “compatibility thesis” (discussed later), 1990 to the present.
Quantitative methodology, and its paradigm, positivism, dominated methodological orientation during the first half of the 20th century. This “traditional” period, although primarily focused on quantitative methods, did include some mixed method approaches without directly acknowledging the use of qualitative data: Studies often made extensive use of interviews and researcher observations, as in the studies that gave rise to the Hawthorne effect. In the natural sciences, such as biology, paleontology, and geology, goals and methods that typically would be considered qualitative (naturalistic settings, inductive approaches, narrative description, and focus on context and single cases) have been integrated with those regarded as quantitative (experimental manipulation, controls and variables, hypothesis testing, theory verification, and measurement and analysis of samples) for more than a century.
After World War II, positivism began to be discredited, giving way to its intellectual successor, postpositivism. Postpositivism (still largely in the domain of the quantitative method) asserts that research data are influenced by the values of the researchers, the theories they use, and the researchers' individually constructed realities. During this period, some of the first explicit mixed method designs began to emerge. Although there was not yet a distinct category of mixed methods, numerous studies began to employ components of its design, especially in the human sciences. Data obtained from participant observation (qualitative information), for example, were often used to explain quantitative results from a field experiment.
The subsequent “modernist” period, or “Golden Age” (1950–1970), was thus marked by two trends: positivism losing its stronghold and research methods beginning to incorporate “multimethods.” The discrediting of positivism resulted in methods more radical than those of postpositivism. From 1970 to 1985—a period some scholars call the “qualitative revolution”—qualitative researchers became more vocal in their criticisms of purely quantitative approaches and proposed new methods associated with constructivism, which began to gain wider acceptance. From 1970 to 1990, qualitative methods, along with mixed method syntheses, became more prominent. In the 1970s, the combination of data sources and multiple methods was becoming more fashionable, and new paradigms, such as interpretivism and naturalism, were gaining precedence and validity.
A period known as the paradigm wars followed, fought in defense of paradigm “purity.” Different philosophical camps held that quantitative and qualitative methods could not be combined; such “blending” would corrupt accurate scientific research. Compatibility between quantitative and qualitative methods was, according to the proponents of quantitative methods, impossible because of the distinctness of the underlying paradigms; researchers who combined the methods were doomed to fail because of the inherent differences in the underlying systems. Qualitative researchers characterized such “purist” traditions as resting on “received” paradigms (paradigms preexisting a study that are automatically accepted as givens), and they argued against the prejudices and restrictions of positivism and postpositivism. They also maintained that mixed methods were already being employed in numerous studies.
The period of pragmatism and compatibility (1990 to the present), as defined by Denzin and Lincoln, constitutes the establishment of mixed methods as a separate field. Mixed methodologists represent neither the traditional (quantitative) nor the “revolutionary” (qualitative) camp. To validate this new field, mixed methodologists had to show a link between epistemology and method and demonstrate that quantitative and qualitative methods were compatible. One of the main concerns in mixing methods was to determine whether it was also viable to mix paradigms—a question that concerns the interface, in practice, between epistemology (historically learned assumptions) and methodology. A new paradigm, pragmatism, effectively combines these two approaches and allows researchers to implement them in a complementary way.
Pragmatism addresses the philosophical aspect of a paradigm by concentrating on what works. Under pragmatism, paradigms do not represent the primary organizing principle for mixed methods practice. Believing that paradigms are socially constructed, malleable assumptions that change through history, pragmatists make design decisions based on what is practical, contextually compatible, and consequential. Decisions about methodology are not based solely on congruence with established philosophical assumptions but on a methodology's ability to advance the particular research questions within a specified context. Because of the complexity of most contexts under research, pragmatists maintain a dual focus on sense making and value making. Pragmatic research decisions, grounded in the actual context being studied, lead to a logical design of inquiry that has been termed fitness for purpose; mixed methodologies are the result. Pragmatism demonstrates that singular paradigm beliefs are not intrinsically connected to specific methodologies; rather, methods and techniques are developed from multiple paradigms.
Researchers began to believe that the concept of a single best paradigm was a relic of the past and that multiple, diverse perspectives were critical to addressing the complexity of a pluralistic society. They proposed what they defined as the dialectical stance: Opposing views (paradigms) are valid and provide for more realistic interaction. Multiple paradigms, then, are considered a foundation for mixed methods research. Researchers, therefore, need to determine which paradigms are best for a particular mixed methods design for a specific study.
Currently, researchers in social and behavioral studies generally fall into three groups: quantitatively oriented researchers, primarily interested in numerical data and statistical analyses; qualitatively oriented researchers, primarily interested in analysis of narrative data; and mixed methodologists, who work with both quantitative and qualitative data. The differences between the groups (particularly between quantitatively and qualitatively oriented researchers) have often been characterized as the paradigm wars. These three movements continue to evolve simultaneously, and all three are practiced concurrently. Mixed methodology is in its adolescence as scholars work out how best to integrate different methods.
Integrated Design Models
Abbas Tashakkori and Charles Teddlie have referred to three categories of multiple-method designs: multimethod research, mixed methods research, and mixed model research. The terms multimethod and mixed methods are often confused, but they refer to different processes. In multimethod studies, research questions are addressed using both quantitative and qualitative procedures, but the process is applied principally to quantitative studies. This approach is most often implemented in an interrelated series of projects whose research questions are theoretically driven. Multimethod research is essentially complete in itself and uses simultaneous and sequential designs.
Mixed methods studies, the primary concern of this entry, encompass both mixed methods and mixed model designs. This type of research implements qualitative and quantitative data collection and analysis techniques either in parallel phases or sequentially. Mixed methods designs (combined methods) are distinguished from mixed model designs (quantitative and qualitative methods combined in all phases of the research). In mixed methods design, the “mixing” occurs in the type of questions asked and in the inferences that evolve; mixed model research mixes the approaches in all stages of the study (questions, methods, data collection, analysis, and inferences).
The predominant approach to mixing methods encompasses two basic types of design: component and integrated. In component designs, the methods remain distinct and are used for discrete aspects of the research. Integrated designs involve substantial integration of the methods. Although typologies help researchers organize the actual use of both methods, the use of typologies as an organizing tool reflects a lingering linear conception that speaks more to the duality of quantitative and qualitative methods than to the recognition and implementation of multiple paradigms. Design components (based on objectives, frameworks, questions, and validity strategies), when organized by typology, are perceived as separate entities rather than as interactive parts of a whole. This kind of typology illustrates a pluralism that “combines” methods without actually integrating them.
Triangulation and Validity
Triangulation is a method that combines different theoretical perspectives within a single study. The term is borrowed from surveying, where an unknown point is determined from two or more known points; as applied to mixed methods, it refers to collecting data from different sources, which improves the validity of results. In The Research Act, Denzin argued that a hypothesis explored under various methods is more valid than one tested under only one method. Triangulation of methods, in which differing processes are implemented, maximizes the validity of the research: Convergence of results from different measurements enhances validity and verification. Critics have countered that using different methods, possibly with a faulty common framework, could increase error in the results. Triangulation may not always increase validity, but it does increase consistency in methodology: Conflicting empirical results are not inherently damaging but render a more holistic picture.
Triangulation allows for the exploration of both theoretical and empirical observation (inductive and deductive), two distinct types of knowledge that can be implemented as a methodological “map” and are logically connected. A researcher can structure a logical study, and the tools needed for organizing and analyzing data, only if the theoretical framework is established prior to empirical observations. Triangulation often leads to a situation in which different findings do not converge or complement each other. Divergence of results, however, may lead to additional valid explanations of the study. Divergence, in this case, can be reflective of a logical reconciliation of quantitative and qualitative methods. It can lead to a productive process in which initial concepts need to be modified and adapted to differing study results.
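To make the idea of convergence concrete, the brief Python sketch below compares a quantitative survey measure with a quantitized interview indicator for the same hypothetical participants; the data, the correlation statistic, and the 0.5 threshold are illustrative assumptions, and in practice divergent results would be interpreted rather than discarded.

```python
# A minimal, hypothetical sketch of checking convergence between two data
# sources in a triangulated design: a quantitative survey score is compared
# with a quantitized interview indicator, and rough agreement between the
# two sources is read as evidence of convergence. All names and values are
# illustrative, not drawn from any actual study.
from statistics import correlation  # available in Python 3.10+

# Hypothetical quantitative survey scores (1-5 satisfaction scale).
survey_score = {"P01": 2, "P02": 5, "P03": 3, "P04": 4}

# Hypothetical quantitized interview indicator: 1 if the participant's
# interview was coded as expressing satisfaction, 0 otherwise.
interview_positive = {"P01": 0, "P02": 1, "P03": 0, "P04": 1}

participants = sorted(survey_score)
x = [survey_score[p] for p in participants]
y = [interview_positive[p] for p in participants]

r = correlation(x, y)
print(f"Correlation between survey and interview indicator: r = {r:.2f}")
if r > 0.5:
    print("The two sources broadly converge.")
else:
    print("The sources diverge; the divergence itself warrants interpretation.")
```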
Recently, two new approaches to mixing methods have been introduced: an interactive approach, in which the design components are integrated and mutually influence one another, and a conceptual approach, based on an analysis of the fundamental differences between quantitative and qualitative research. The interactive approach, as employed in architecture, engineering, and art, is neither linear nor cyclic; it is a schematic approach in which design components and data shape one another in an ongoing way. This design model is a tool for analyzing the research question rather than a template for creating a particular study type. This more qualitative approach to mixed methods design emphasizes particularity, context, comprehensiveness, and the process by which a particular combination of qualitative and quantitative components develops in practice, in contrast to the categorization and comparison of data typical of the pure quantitative approach.
Implications for Mixed Methods
As the body of research on the role of the environment and its impact on the individual has developed, the status and acceptance of mixed methods research in many applied disciplines have grown rapidly. This acceptance has been influenced by the historical development of these disciplines and by a desire to move away from the traditional paradigms of positivism and postpositivism. The key contributions of mixed methods have been to the understanding of individual factors that contribute to social outcomes, the study of social determinants of medical and social problems, the study of service utilization and delivery, and translational research into meaningful practice.
Mixed methods research may bridge postmodern critiques of scientific inquiry and the growing interest in qualitative research. It provides an opportunity to test research questions, hypotheses, and theory while acknowledging the phenomena of human experience. Quantitative methods support the ability to generalize findings to the broader population; however, quantitative approaches that are well regarded by researchers may not be comprehensible or useful to laypeople. Qualitative approaches can help contextualize problems in narrative form and thus can be more meaningful to laypeople. Mixing the two methods offers researchers the potential to understand and contextualize problems and to develop interventions.
Mixed methods have been used to examine and implement a wide range of research topics, including instrument design, validation of constructs, relationships among constructs, and theory development or disconfirmation. Mixed methods are rooted, for example, in the framework of feminist approaches, in which the study of participants' lives and their personal interpretations of those lives has implications for research. In terms of data analysis, content analysis allows researchers both to confirm hypotheses and to gather qualitative data from study participants through different methods (e.g., grounded theory, phenomenological, or narrative approaches). Triangulation is especially valuable in mixed methods research.
While there are certainly advantages to employing mixed methods in research, their use also presents significant challenges. Perhaps the most significant issue to consider is the amount of time associated with the design and implementation of mixed methods. In addition to time restrictions, costs or barriers to obtaining funding to carry out mixed methods research are a consideration.
Conclusion
Rather than choosing one paradigm or method over another, researchers often use multiple and mixed methods. Implementing these newer combinations of methods better accommodates the modern complexities of social behavior and the changing perceptions of reality and knowledge, and better serves the purposes and frameworks of new studies in social science research. The classic quantitative and qualitative models alone cannot encompass the interplay between theoretical and empirical knowledge. Simply put, combining methods makes common sense and serves the purposes of complex analyses. Methodological strategies are tools for inquiry that corroborate a particular perspective. The strength of mixed methods is that research can evolve comprehensively and adapt to empirical changes, going beyond the traditional dualism of quantitative and qualitative methods and redefining and reflecting the nature of social reality.
Paradigms are social constructions, culturally and historically embedded as discourse practices, and contain their own set of assumptions. As social constructions, paradigms are changeable and dynamic. The complexity and pluralism of our contemporary world require rejecting investigative constraints of singular methods and implementing more diverse and integrative methods that can better address research questions and evolving social constructions. Knowledge and information change with time and mirror evolving social perceptions and needs. Newer paradigms and belief systems can help transcend and expand old dualisms and contribute to redefining the nature of social reality and knowledge.
Scholars generally agree that it is possible to use qualitative and quantitative methods to answer objective–value and subjective–constructivist questions, to include both inductive–exploratory and deductive–confirmatory questions in a single study, to mix different orientations, and to integrate qualitative and quantitative data in one or more stages of research, and that many research questions can only be answered with a mixed methods design. Traditional approaches meant aligning oneself with either quantitative or qualitative methods; modern scholars believe that if research is to go forward, this dichotomy needs to be fully reconciled.