# Posterior Distribution

Encyclopedia entry
Published: 2018


In Bayesian analysis, the posterior distribution, or posterior, is the distribution of a set of unknown parameters, latent variables, or otherwise missing variables of interest, conditional on the current data. The posterior distribution uses the current data to update prior knowledge, encoded in a prior distribution, about those parameters. A posterior distribution, p(θ|x), is derived using Bayes's theorem:

$$p(\theta \mid x) = \frac{p(x \mid \theta)\,p(\theta)}{p(x)} = \frac{p(x \mid \theta)\,p(\theta)}{\int p(x \mid \theta)\,p(\theta)\,d\theta},$$

where θ is the unknown parameter(s) and x is the current data. The probability of the data given the parameters, p(x|θ), viewed as a function of θ, is the likelihood L(θ|x). The prior distribution, p(θ), is user specified to represent prior knowledge about the unknown parameter(s). The last piece of Bayes's theorem, the marginal distribution of the data, p(x), is computed by integrating the product of the likelihood and the prior over θ. The distribution of the posterior is determined by the ...
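The update rule above can be sketched numerically. The following is a minimal illustration (not from the original entry) using a grid approximation: a Beta(2, 2) prior on a coin's heads probability θ, an assumed data set of 7 heads in 10 flips, and a Riemann sum in place of the integral in the denominator. By conjugacy, the exact posterior here is Beta(9, 5), which lets us check the result.

```python
import numpy as np

# Grid over the parameter theta (the open interval (0, 1)).
theta = np.linspace(0.001, 0.999, 999)
dtheta = theta[1] - theta[0]

# Prior p(theta): Beta(2, 2), normalized numerically on the grid.
prior = theta ** (2 - 1) * (1 - theta) ** (2 - 1)
prior /= np.sum(prior) * dtheta

# Likelihood L(theta | x) for assumed data x: 7 heads in 10 flips.
heads, flips = 7, 10
likelihood = theta ** heads * (1 - theta) ** (flips - heads)

# Marginal p(x) = integral of p(x | theta) p(theta) d(theta),
# approximated by a Riemann sum over the grid.
marginal = np.sum(likelihood * prior) * dtheta

# Bayes's theorem: posterior = likelihood * prior / marginal.
posterior = likelihood * prior / marginal

# Sanity checks: the posterior integrates to 1, and its mean should
# approximate the exact conjugate result, Beta(9, 5), whose mean is 9/14.
total = np.sum(posterior) * dtheta
post_mean = np.sum(theta * posterior) * dtheta
print(total, post_mean)
```

The grid approach makes the role of each term in Bayes's theorem explicit; in practice, conjugate updates or sampling methods (e.g., MCMC) replace the brute-force integration when θ is multidimensional.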

