
Posterior Distribution

Encyclopedia
Published: 2018

In Bayesian analysis, the posterior distribution, or posterior, is the distribution of a set of unknown parameters, latent variables, or otherwise missing variables of interest, conditional on the current data. The posterior distribution uses the current data to update previous knowledge, called the prior, about those parameters. A posterior distribution, p(θ|x), is derived using Bayes's theorem

p(θ|x) = p(x|θ) p(θ) / p(x) = p(x|θ) p(θ) / ∫ p(x|θ) p(θ) dθ,
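As an illustrative sketch (not part of the original entry), Bayes's theorem can be evaluated numerically on a grid of parameter values; the coin-flip data and uniform prior below are assumptions chosen for the example:

```python
import numpy as np

# Hypothetical example: posterior for a coin's heads probability theta,
# given 7 heads in 10 flips, computed on a grid via Bayes's theorem.
theta = np.linspace(0.0, 1.0, 1001)       # grid over the parameter
dtheta = theta[1] - theta[0]

prior = np.ones_like(theta)               # uniform prior p(theta)
likelihood = theta**7 * (1.0 - theta)**3  # p(x|theta) for 7 heads, 3 tails

unnormalized = likelihood * prior             # numerator p(x|theta) p(theta)
marginal = unnormalized.sum() * dtheta        # Riemann sum for the integral in the denominator
posterior = unnormalized / marginal           # p(theta|x)

# The posterior integrates to 1 and peaks at the maximum-likelihood value 0.7.
```

Dividing by the summed (discretized) marginal is what makes the result a proper probability density over θ.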

where θ is the unknown parameter(s) and x is the current data. The probability of the data given the parameters, p(x|θ), is the likelihood, L(θ|x). The prior distribution, p(θ), is specified by the user to represent prior knowledge about the unknown parameter(s). The last piece of Bayes's theorem, the marginal distribution of the data, p(x), is computed by integrating the product of the likelihood and the prior over θ. The distribution of the posterior is determined by the ...
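One standard case in which the marginal integral in the denominator never has to be computed explicitly is a conjugate prior. The Beta–Bernoulli pairing sketched below is a textbook example; the specific counts and prior parameters are assumptions for illustration:

```python
# Conjugate update: a Beta(a, b) prior with Bernoulli/binomial data
# yields a Beta(a + heads, b + tails) posterior in closed form,
# so p(x) never has to be evaluated directly.
a, b = 2.0, 2.0      # Beta(2, 2) prior: weak belief the coin is fair
heads, tails = 7, 3  # observed data x

a_post, b_post = a + heads, b + tails   # Beta(9, 5) posterior
posterior_mean = a_post / (a_post + b_post)

print(posterior_mean)  # 9/14, pulled from the data's 0.7 toward the prior mean 0.5
```

The posterior mean is a compromise between the prior mean and the observed frequency, and the pull toward the prior shrinks as more data accumulate.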

