Posterior Distribution

In Bayesian analysis, the posterior distribution, or posterior, is the distribution of a set of unknown parameters, latent variables, or otherwise missing variables of interest, conditional on the current data. The posterior distribution uses the current data to update previous knowledge, called a prior, about those parameters. A posterior distribution, p(θ|x), is derived using Bayes’s theorem

p(θ|x) = p(x|θ)p(θ) / p(x) = p(x|θ)p(θ) / ∫ p(x|θ)p(θ) dθ,

where θ is the unknown parameter(s) and x is the current data. The probability of the data given the parameter, p(x|θ), is the likelihood, L(θ|x). The prior distribution, p(θ), is user specified to represent prior knowledge about the unknown parameter(s). The last piece of Bayes’s theorem, the marginal distribution of the data, p(x), is computed by integrating the product of the likelihood and the prior over θ. The distribution of the posterior is determined by the ...
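
As a concrete illustration of the computation above, the following is a minimal sketch that approximates a posterior on a grid. The data (7 successes in 10 Bernoulli trials), the Beta(2, 2) prior, and all variable names are assumptions chosen for illustration, not part of the entry.

```python
import numpy as np
from scipy.stats import beta, binom

# Hypothetical data: 7 successes in 10 Bernoulli trials (illustrative assumption).
n, k = 10, 7

# Discretize theta so the integral in the denominator can be approximated numerically.
theta = np.linspace(0.001, 0.999, 999)

prior = beta.pdf(theta, 2, 2)         # p(theta): an assumed Beta(2, 2) prior
likelihood = binom.pmf(k, n, theta)   # p(x | theta): binomial likelihood of the data
numerator = likelihood * prior        # p(x | theta) p(theta)

# p(x) = integral of p(x | theta) p(theta) d theta, approximated by the trapezoidal rule
marginal = np.trapz(numerator, theta)

posterior = numerator / marginal      # p(theta | x), evaluated on the grid

# The posterior should integrate to (approximately) 1
print(np.trapz(posterior, theta))
```

In this conjugate case, the grid approximation can be checked against the exact Beta(k + 2, n − k + 2) posterior, which is one common way to validate such a computation.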
