Full Bayesian Learning
Intuition: Bayesian updating of a probability distribution over the Hypothesis Space.
- $H$ is a random variable whose values $h_1, \dots, h_n$ are the hypotheses, each with prior probability $P(h_i)$
- $D$ is a random variable for which each observation $d_j$ is a possible outcome
- $\mathbf{d} = (d_1, \dots, d_N)$ is the training Dataset of an Inductive Learning problem
Given the data we can calculate for each hypothesis its Posterior Probability with Bayes Rule and normalization, $P(h_i \mid \mathbf{d}) = \alpha \, P(\mathbf{d} \mid h_i) \, P(h_i)$, where we call $P(\mathbf{d} \mid h_i)$ the Likelihood of the data under the hypothesis and $P(h_i)$ the hypothesis Prior.
Prediction: To predict an unknown quantity $X$, average the predictions of all hypotheses, weighted by their posteriors: $P(X \mid \mathbf{d}) = \sum_i P(X \mid h_i) \, P(h_i \mid \mathbf{d})$
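As a concrete sketch of the update and prediction formulas above (the three-hypothesis coin model, its parameter values, and the prior are made-up illustrations, not part of the notes):

```python
# Sketch: full Bayesian learning over a tiny hypothesis space.
# Assumed setup: a coin whose heads-probability is one of three values.
hypotheses = [0.25, 0.5, 0.75]   # heads-probability under each h_i
prior      = [0.25, 0.5, 0.25]   # P(h_i)

def likelihood(data, p_heads):
    # P(d | h_i) for i.i.d. flips; data is a string like "HHT"
    lik = 1.0
    for flip in data:
        lik *= p_heads if flip == "H" else 1 - p_heads
    return lik

def posterior(data):
    # Bayes Rule with normalization: P(h_i | d) = alpha * P(d | h_i) * P(h_i)
    unnorm = [likelihood(data, h) * p for h, p in zip(hypotheses, prior)]
    alpha = 1 / sum(unnorm)
    return [alpha * u for u in unnorm]

def predict_heads(data):
    # P(X = H | d) = sum_i P(X = H | h_i) * P(h_i | d)
    return sum(h * p for h, p in zip(hypotheses, posterior(data)))

print(posterior("HHH"))       # posterior mass shifts toward the 0.75 coin
print(predict_heads("HHH"))   # prediction averages over ALL hypotheses
```

Note that the prediction is a weighted average over every hypothesis, which is exactly the sum that becomes intractable for large hypothesis spaces.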
Problem: We are summing over the entire Hypothesis Space by looking at all $h_i$. This is intractable for most large hypothesis spaces.
Solution: We approximate the probability above by getting rid of the sum over the entire Hypothesis Space and instead looking only at a single, most probable hypothesis.
- Maximum A Posteriori Learning
- Minimum Description Length Learning, an alternative representation of MAP
- Maximum Likelihood Estimation
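The MAP shortcut can be sketched against the full Bayesian prediction (the same made-up three-hypothesis coin model as before; hypothesis values and prior are illustrative assumptions):

```python
# Sketch: MAP prediction vs. full Bayesian prediction.
hypotheses = [0.25, 0.5, 0.75]   # assumed heads-probability under each h_i
prior      = [0.25, 0.5, 0.25]   # assumed P(h_i)

def posterior(data):
    # P(h_i | d) proportional to P(d | h_i) * P(h_i), then normalized
    unnorm = []
    for h, p in zip(hypotheses, prior):
        lik = 1.0
        for flip in data:
            lik *= h if flip == "H" else 1 - h
        unnorm.append(lik * p)
    z = sum(unnorm)
    return [u / z for u in unnorm]

def predict_full(data):
    # Full Bayesian: sum over ALL hypotheses, weighted by posterior
    return sum(h * p for h, p in zip(hypotheses, posterior(data)))

def predict_map(data):
    # MAP: keep only h_MAP = argmax_i P(h_i | d), drop the sum entirely
    post = posterior(data)
    return hypotheses[post.index(max(post))]

print(predict_full("HHH"))   # averaged over the whole hypothesis space
print(predict_map("HHH"))    # single most probable hypothesis only
```

The MAP prediction needs one argmax instead of a sum over the whole space, at the cost of ignoring the posterior uncertainty that the full Bayesian average still carries.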