Missing Data Analysis
Missing data refers to a common problem that researchers face in all fields. In the context of testing and survey research, this phenomenon can occur because participants run out of time, do not know the correct answer, or simply do not want to answer the question, or even because of problems with the measurement instrument itself. In the context of longitudinal research, missing data may result from study attrition. Missing data may also result from simple problems with data entry and management. In short, across research paradigms, missing data is a potential problem.
The researcher must decide how to deal with missing data prior to using statistical analyses. Many (though not all) such analyses require each variable to be complete prior to its inclusion. If it is not dealt with appropriately, the presence of missing data can lead to biased statistical tests and parameter estimates. Luckily, there are myriad approaches for dealing with missing data that can help to reduce the problems associated with this occurrence. The selection of an appropriate technique for dealing with missing data is, however, largely dependent on the type of missing data that is present. This entry briefly reviews the types of missing data, then introduces some of the main methods for dealing with this potential problem, and concludes with some limitations of dealing with missing data.
Types of Missing Data
Missing data is typically described as coming from one of three sources: missing completely at random (MCAR), missing at random (MAR), and missing not at random (MNAR). MCAR occurs when there is no systematic cause of a data value being missing. For example, an MCAR item response was left blank by the respondent completely by accident. With MAR, the missing data is not truly random in nature, but the variable associated with the missingness has been measured by the researcher. For example, if males are more likely to leave an item on a survey unanswered, and the researcher has collected data on the gender of the respondents, then the missing values would be considered MAR. Finally, MNAR occurs when the missingness is directly linked to the missing value itself. MNAR data would occur if an examinee taking a test were to leave an item unanswered because the examinee did not know the correct answer. Each of the methods discussed in the following section may be appropriate for certain types of missing data but not for others. Specifically, this entry elaborates on traditional missing data methods, maximum likelihood estimation, multiple imputation (MI), and methods for MNAR data.
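To make the three mechanisms concrete, the following sketch simulates each one in Python; the variable names, sample size, and missingness rates are all hypothetical choices for illustration.

```python
# Simulated illustration of MCAR, MAR, and MNAR missingness.
# All names, rates, and cutoffs are hypothetical.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
gender = rng.integers(0, 2, n)     # observed covariate (0/1)
score = rng.normal(50, 10, n)      # variable that will contain holes

# MCAR: every value has the same 10% chance of being deleted,
# unrelated to anything in the data.
mcar = score.copy()
mcar[rng.random(n) < 0.10] = np.nan

# MAR: missingness depends only on an observed variable (gender);
# one group is three times as likely to skip the item.
mar = score.copy()
p_miss = np.where(gender == 1, 0.15, 0.05)
mar[rng.random(n) < p_miss] = np.nan

# MNAR: missingness depends on the unobserved value itself;
# low scorers are the ones who leave the item blank.
mnar = score.copy()
mnar[(score < 40) & (rng.random(n) < 0.5)] = np.nan

for name, v in [("MCAR", mcar), ("MAR", mar), ("MNAR", mnar)]:
    print(name, "proportion missing:", round(float(np.isnan(v).mean()), 3))
```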
Missing Data Methods
Traditional methods for dealing with missing data are still commonly used throughout research, though in many cases, they have not proven to be very effective. These methods include deletion methods, single imputation methods, averaging items, and last observation. Deletion methods are commonly used and, while never optimal, are less harmful for MCAR data than for the other types described previously. When deletion methods are used, the cases with missing data are simply eliminated from the data set and not used in the analyses. However, the type of deletion method will dictate when missing data is removed. Perhaps the most popular and convenient method is listwise deletion (LD). When LD is used, all cases that have missing data are removed from the data set. This results in a data set with only those cases that are 100% complete. Data analyses are then conducted on this subset of complete data.
Another deletion method that is commonly used is pairwise deletion. Unlike LD, pairwise deletion does not eliminate every case with missing data to create a single complete data set. Instead, it removes a case only if the analysis being run requires the variable on which that case is missing. For example, a researcher may use pairwise deletion when estimating a correlation matrix. A case with a missing value on one variable is excluded only from the correlations involving that variable but is included in the calculation of the other correlation coefficients. In contrast, with LD the individual would be removed from all correlation calculations, even though the individual's responses were missing for only one of the variables. Although deletion methods are popular, they have the disadvantage of discarding data, which can in turn lead to inaccurate parameter estimation when the missing data is not MCAR, and to low statistical power for all types of missing data. The contrast between the two deletion approaches is illustrated in the sketch that follows.
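A brief sketch of the contrast, using pandas (whose corr() computes each coefficient from pairwise-complete observations); the small data frame is hypothetical.

```python
# Listwise vs. pairwise deletion on a hypothetical data frame.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "x": [1.0, 2.0, np.nan, 4.0, 5.0],
    "y": [2.1, np.nan, 3.3, 4.2, 5.1],
    "z": [1.5, 2.4, 3.1, 4.8, np.nan],
})

# Listwise deletion: keep only the 100% complete cases, then analyze.
complete = df.dropna()
print(complete.corr())   # every coefficient computed from the same 2 rows

# Pairwise deletion: pandas' corr() drops cases pair by pair, so each
# coefficient uses every row that is observed on that particular pair.
print(df.corr())
```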
Another traditional approach for dealing with missing data is single imputation. Unlike deletion methods, single imputation methods do not remove cases with missing data. Instead, a replacement value is generated for each missing data point. The data analysis of interest (e.g., regression) is then conducted on this revised data set that includes data for all observations. However, like deletion methods, most single imputation methods still produce biased parameter estimates for MAR data and even for MCAR data. Single imputation methods also underestimate sampling error, because the replaced values are treated as real data rather than being distinguished as imputed. Even with these disadvantages, there are a variety of single imputation approaches to choose from, which vary in how the one replacement value is generated, and some perform better than others. These methods include arithmetic mean imputation, regression imputation, stochastic regression imputation, hot-deck imputation, and similar response pattern imputation. With the exception of stochastic regression imputation, these methods require the missing data to be MCAR. Stochastic regression imputation, in contrast, can be used for MCAR and MAR data without producing biased estimates. For this reason, stochastic regression imputation is one of the best single imputation methods.
Stochastic regression imputation is unique among single imputation methods, though it does share some traits with regression imputation. Both approaches rely on a regression model to impute the missing data. What makes stochastic regression imputation unique is that it adds a random value to the prediction from the regression model. By adding this random number to the values imputed from the regression model, stochastic regression imputation alters the imputed values from

$$\hat{y}_i = \beta_0 + \beta_1 x_i,$$

where $\hat{y}_i$ is the replacement value, $\beta_0$ is the intercept, and $\beta_1$ is the slope from the regression of the incomplete variable on the predictor $x_i$, to

$$\hat{y}_i = \beta_0 + \beta_1 x_i + z_i,$$

where $z_i$ is a random value generated from the normal distribution with a mean of 0 and variance equal to the variance of the residuals from the regression model. This additional random component acknowledges that the imputation is merely an estimate of what the actual value would have been and that the imputation itself is almost certainly not exactly correct.
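A minimal sketch of this procedure, assuming a single fully observed predictor x; the function name and setup are hypothetical rather than any package's implementation.

```python
# Stochastic regression imputation for a single incomplete variable y,
# assuming the predictor x is fully observed. A sketch, not a package API.
import numpy as np

def stochastic_regression_impute(x, y, rng=None):
    """Fill NaNs in y with regression predictions plus normal noise."""
    rng = rng if rng is not None else np.random.default_rng()
    obs = ~np.isnan(y)
    b1, b0 = np.polyfit(x[obs], y[obs], 1)   # fit y = b0 + b1*x on complete cases
    resid_var = np.var(y[obs] - (b0 + b1 * x[obs]), ddof=2)
    y_filled = y.copy()
    miss = ~obs
    # Prediction plus a draw from N(0, residual variance): the z_i term.
    y_filled[miss] = b0 + b1 * x[miss] + rng.normal(0.0, np.sqrt(resid_var), miss.sum())
    return y_filled
```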
The final two traditional methods differ from those previously discussed in that they are used for specific types of research. First, averaging the available values for the variable in question is common, particularly when an instrument computes a scale score from multiple items that measure a single construct. When missing data occurs in this scenario, the method averages the items the participant did respond to and scales that average up to create the scale score. For example, if a participant responded to only 18 of the 20 items, the participant's scale score would be computed by averaging the 18 answered items and then multiplying that average by the total number of items (e.g., 20).
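A short sketch of this proration, assuming a hypothetical 20-item instrument with two skipped items.

```python
# Prorated scale score: average the answered items, scale to 20 items.
# The response vector (two skipped items) is hypothetical.
import numpy as np

responses = np.array([4, 3, np.nan, 5, 4, 4, 3, np.nan, 5, 4,
                      4, 5, 3, 4, 4, 5, 3, 4, 4, 5], dtype=float)

n_items = responses.size                 # 20 items on the instrument
answered = ~np.isnan(responses)          # 18 items were answered here
scale_score = responses[answered].mean() * n_items
print(round(scale_score, 2))             # mean of the 18 items scaled to 20
```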
Last observation carried forward is specific to longitudinally designed research: the last observed value is used to fill the missing time points. For example, if a participant drops out of the study in the 8th week of a 10-week study, the participant's data from the 7th week are used for the remaining 3 weeks.
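A minimal sketch of this carrying forward with pandas, assuming a hypothetical one-row-per-participant, one-column-per-week layout.

```python
# Last observation carried forward with pandas; the layout is hypothetical.
import numpy as np
import pandas as pd

weekly = pd.DataFrame(
    {"week_7": [3.1], "week_8": [np.nan], "week_9": [np.nan], "week_10": [np.nan]},
    index=["participant_A"],
)

locf = weekly.ffill(axis=1)   # copies the week-7 value into weeks 8-10
print(locf)
```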
Although traditional methods have been used, and are still used today, they can lead to biased estimates. Therefore, traditional methods should be avoided when full information maximum likelihood (FIML) estimation or MI can be used. FIML and MI are both excellent approaches for either MCAR or MAR data. In such cases, biased estimates are not produced, and statistical power is maximized, because all available information in the observed data is used. FIML is a popular method for estimating parameters in latent variable models and in regression. Essentially, FIML estimates the parameter values for the model, filtering out an observation only from those computations in which its missing value would be used; when an observation is not missing a data point used in a given parameter's estimation, it is included. For example, if observation A is missing a value for variable x, but not for y or z, then parameter estimation involving x will not include observation A, but estimation involving y or z will include it.
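To illustrate the casewise filtering idea, the following sketch, a toy illustration rather than any package's implementation, estimates the means and covariance of two jointly normal variables by maximizing a log-likelihood to which each case contributes only its observed components; the simulated data and starting values are hypothetical.

```python
# Toy FIML for the mean vector and covariance of two jointly normal
# variables: each case contributes the likelihood of only its observed
# components. Simulated data and start values are hypothetical.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=200)
data[rng.random(200) < 0.2, 1] = np.nan      # delete ~20% of the y values

def neg_loglik(theta):
    mu = theta[:2]
    # Cholesky parameterization keeps the covariance positive definite.
    L = np.array([[theta[2], 0.0], [theta[3], theta[4]]])
    cov = L @ L.T
    ll = 0.0
    for row in data:
        obs = ~np.isnan(row)                 # which components were observed
        ll += multivariate_normal.logpdf(
            row[obs], mean=mu[obs], cov=cov[np.ix_(obs, obs)])
    return -ll

start = np.array([0.0, 0.0, 1.0, 0.0, 1.0])  # mu_x, mu_y, Cholesky entries
fit = minimize(neg_loglik, start, method="Nelder-Mead")
print(fit.x[:2])                             # FIML estimates of the two means
```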
MI fundamentally differs from FIML as it is a data imputation approach. However, instead of a single imputed value being generated for a missing data point, multiple values are generated. MI replaces the missing data by using available information from all variables. Imputed values are generated for each missing value and a random value is added to each, much as with stochastic regression imputation. This is done m times, and the analysis of interest is then conducted using each of the resulting data sets. By creating multiple data sets, MI acknowledges the uncertainty inherent in the imputed values. MI can incorporate information from all available variables into the imputation process, providing more accurate imputations.
One popular MI method is joint modeling multiple imputation. It works by first making an assumption about the probability model underlying the data (e.g., multivariate normal, multinomial). Next, parameter estimates are calculated from the Bayesian posterior distribution created using the Markov Chain Monte Carlo method of data augmentation, based on the probability model, the observed data, and a prior distribution. Imputed values are then drawn from the resulting posterior distribution. This process is repeated m times to create complete data sets, each of which is used in the analysis of interest (e.g., regression), with the results then combined. There are also several relatively new imputation methods, including multivariate imputation by chained equations (MICE), random forest imputation, and extensions of MICE that incorporate recursive partitioning. A chained-equations-style workflow is sketched below.
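A hedged sketch of such a workflow using scikit-learn's IterativeImputer (with sample_posterior=True, each of the m imputations is drawn from a posterior, so the completed data sets differ); the data and the analysis, estimating the mean of the incomplete column, are hypothetical, and the m results are pooled with Rubin's rules.

```python
# m imputations via scikit-learn's IterativeImputer, pooled with
# Rubin's rules. The data and the analysis (mean of column 0) are
# hypothetical; sample_posterior=True makes each imputation a draw.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(1)
X = rng.multivariate_normal([0.0, 0.0, 0.0], np.eye(3) + 0.5, size=300)
X[rng.random(300) < 0.25, 0] = np.nan        # column 0 is incomplete

m = 20
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imputer.fit_transform(X)
    col = completed[:, 0]
    estimates.append(col.mean())
    variances.append(col.var(ddof=1) / len(col))   # variance of the mean

# Rubin's rules: pooled estimate, within- and between-imputation variance.
qbar = np.mean(estimates)
W = np.mean(variances)
B = np.var(estimates, ddof=1)
total_var = W + (1 + 1 / m) * B
print(qbar, np.sqrt(total_var))   # pooled mean and its standard error
```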
Each of the more advanced methods previously described can be used with MCAR and MAR data. More recently, methods have been described for use with MNAR data, though these are less popular in practice than MCAR- and MAR-based methods. This relative lack of popularity is largely due to the assumptions underlying the MNAR methods and the lack of ways to check those assumptions; violations can result in severely biased parameter estimates. Thus, extreme caution must be used when attempting to apply MNAR methods. Given these limitations, MNAR methods are not elaborated on here; for interested readers, two MNAR models to consider are the selection model and the pattern mixture model. More research should continue to be devoted to MNAR data, which may ultimately lead to more reliable missing data methods for MNAR data in the future.
Limitations
Each of the methods described in this entry has been used, and continues to be used, throughout all fields of research. However, not all of them are appropriate in all (or sometimes any) situations. Thus, the researcher must weigh several considerations before selecting a method for dealing with missing data. First, the type of missing data that is present should be considered. As previously mentioned, some missing data methods are more appropriate for certain types of missing data (i.e., MAR, MCAR, and MNAR) than others. Because of this, it is important for researchers to consider the mechanism underlying the missing data in their data set and to select only from those methods suited to that missing data type. Another issue to consider is the availability of the missing data methods in the software being used. Depending on the technique, statistical software such as Amos, EQS, LISREL, Mplus, NORM, SAS, SPSS, or R may be required.
See also Maximum Likelihood Estimation; Structural Equation Modeling