Maximum Likelihood Estimation

Abstract

Maximum likelihood (ML) estimation is the foundational platform for modern empirical research. The methodology provides organizing principles for combining observed data with underlying theory to understand the workings of the natural and social environment in the face of uncertainty about the origins and interrelations of those data. Alternatives to the ML estimator (MLE) are typically proposed in comparison to, or as modifications of, the central methodology. This entry develops the topic of ML estimation from the viewpoints of classical statistics and modern econometrics. It begins by building an understanding of the methodology, starting from what is meant by the likelihood function and a description of estimation based on the principle of ML. It then develops the theory of the MLE. The MLE has a set of properties, including consistency and efficiency, that establish its standing among competing classes of estimators. These are the basic results that motivate MLE as a method of estimation. The entry then examines inference and hypothesis testing in the ML framework—how to compute standard errors and how to accommodate sampling variability in estimation and testing. It concludes with modern extensions that broaden the framework: robust estimation and inference, latent heterogeneity in panel data, and quasi-ML. Some practical aspects of ML estimation, such as optimization and maximum simulated likelihood, are treated in passing. Examples are woven through the development. This entry introduces the theory, language, and practicalities of the methodology.
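
To make the principle concrete, here is a minimal sketch of ML estimation by numerical optimization, assuming Python with NumPy and SciPy. The i.i.d. normal model, the simulated data, and all names below are illustrative choices, not drawn from the entry itself.

```python
import numpy as np
from scipy import optimize, stats

# Illustrative example: ML estimation of the mean and standard deviation
# of an i.i.d. normal sample by maximizing the log-likelihood numerically.
rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=500)   # simulated data

def neg_log_likelihood(theta, y):
    """Negative log-likelihood of y under N(mu, sigma^2)."""
    mu, log_sigma = theta                      # optimize log(sigma) so sigma > 0
    return -np.sum(stats.norm.logpdf(y, loc=mu, scale=np.exp(log_sigma)))

res = optimize.minimize(neg_log_likelihood, x0=np.zeros(2), args=(y,))
mu_hat, sigma_hat = res.x[0], np.exp(res.x[1])

# Asymptotic standard errors of (mu, log sigma) from the inverse of the
# observed information; BFGS reports an approximation to that inverse Hessian.
se = np.sqrt(np.diag(res.hess_inv))
print(mu_hat, sigma_hat, se)
```

For this model the numerical maximizers coincide with the closed-form MLEs—the sample mean and the (1/n) sample standard deviation—so the optimizer serves only to illustrate the general recipe used when no closed form exists.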
