Conditional Probability

Introduction
In probability theory, conditional probability measures the chance that an event occurs given that (or under the assumption that) another event has already occurred.

Bayes' Theorem
Named after Thomas Bayes, Bayes' Theorem computes a conditional probability from known, related probabilities. Specifically, the theorem states that the probability of event A occurring given that event B has already occurred equals the probability of event B given A, multiplied by the probability of A, divided by the probability of B.

$$P(A|B)=\frac{P(B|A)\,P(A)}{P(B)} $$ where A and B are events and $$P(B)\neq 0 $$.
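As a numeric sketch of the formula, the snippet below applies Bayes' Theorem to a hypothetical diagnostic test; all of the probabilities (prevalence, sensitivity, false-positive rate) are illustrative assumptions, not values from the text.

```python
# Hypothetical example: A = "patient has the disease", B = "test is positive".
# All numbers below are illustrative assumptions.
p_a = 0.01              # P(A): prior prevalence of the disease
p_b_given_a = 0.95      # P(B|A): test sensitivity
p_b_given_not_a = 0.05  # P(B|not A): false-positive rate

# Law of total probability: P(B) = P(B|A)P(A) + P(B|not A)P(not A)
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)

# Bayes' Theorem: P(A|B) = P(B|A)P(A) / P(B)
p_a_given_b = p_b_given_a * p_a / p_b
print(round(p_a_given_b, 3))  # a positive test still leaves P(A|B) well under 1
```

Note that even with an accurate test, the low prior P(A) keeps the posterior P(A|B) modest, which is exactly the kind of effect the theorem quantifies.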

Proof of Theorem
The proof starts from the general definition of conditional probability: the probability that both events A and B occur, divided by the probability of the conditioning event B. In other words, the intersection of events A and B is rescaled by the probability of the known event.

$$P(A|B)=\frac{P(A\cap B)}{P(B)} $$ where A and B are events and $$P(B)\neq 0 $$.

$$P(B|A)=\frac{P(A\cap B)}{P(A)} $$ where A and B are events and $$P(A)\neq 0 $$.

Therefore,

$$P(A\cap B)=P(A|B)\,P(B) $$ where A and B are events and $$P(B)\neq 0 $$.

$$P(A\cap B)=P(B|A)\,P(A) $$ where A and B are events and $$P(A)\neq 0 $$.

Substituting the second expression for $$P(A\cap B) $$ into the definition $$P(A|B)=\frac{P(A\cap B)}{P(B)} $$ yields Bayes' Theorem.
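The identities above can be checked concretely on a small, fully enumerable sample space. The example below (a fair six-sided die, with events chosen purely for illustration) verifies that the definition of conditional probability and the Bayes form agree exactly, using exact rational arithmetic.

```python
from fractions import Fraction

# Illustrative check on a fair six-sided die (events chosen for this sketch):
# A = "roll is even", B = "roll is at least 4".
omega = set(range(1, 7))
A = {2, 4, 6}
B = {4, 5, 6}

def p(event):
    """Probability of an event under the uniform distribution on omega."""
    return Fraction(len(event & omega), len(omega))

p_a, p_b = p(A), p(B)
p_a_and_b = p(A & B)  # intersection {4, 6}

# Definition of conditional probability: P(A|B) = P(A ∩ B) / P(B)
p_a_given_b = p_a_and_b / p_b
p_b_given_a = p_a_and_b / p_a

# Bayes' Theorem: P(A|B) = P(B|A) P(A) / P(B)
bayes = p_b_given_a * p_a / p_b
assert p_a_given_b == bayes  # the two routes give the same exact value
print(p_a_given_b)
```

Using `Fraction` rather than floats makes the equality check exact, so the assertion confirms the algebraic identity rather than an approximation.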

Application
Bayes' Theorem is prominent in scientific inference and machine learning. It allows conditional probabilities to incorporate new evidence: each observation updates the prior probability of a hypothesis into a posterior, narrowing the range of plausible hypotheses. Through the likelihood terms, the theorem also accounts for how strongly the evidence bears on events A and B respectively.
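The updating process described above can be sketched with a sequential example. Here the hypothesis, the coin biases, and the observed flip sequence are all illustrative assumptions: hypothesis A is "the coin is biased toward heads with P(heads) = 0.8", versus a fair coin with P(heads) = 0.5, and each flip's posterior becomes the prior for the next flip.

```python
# Sequential Bayesian updating (illustrative sketch, not from the text).
# A = "coin is biased, P(heads) = 0.8"; alternative: fair coin, P(heads) = 0.5.
prior = 0.5  # initial P(A): no reason to favor either hypothesis

def update(p_a, heads):
    """One Bayes step: fold a single coin-flip observation into P(A)."""
    like_biased = 0.8 if heads else 0.2  # P(observation | A)
    like_fair = 0.5                      # P(observation | not A)
    evidence = like_biased * p_a + like_fair * (1 - p_a)  # P(observation)
    return like_biased * p_a / evidence  # Bayes' Theorem

posterior = prior
for flip in [True, True, True, False, True]:  # assumed observed sequence
    posterior = update(posterior, flip)       # posterior becomes next prior
print(round(posterior, 3))
```

Because the observed sequence is heads-heavy, the posterior drifts above the 0.5 prior toward the biased-coin hypothesis, illustrating how each piece of evidence further restricts the prior.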