Bayes' formula, named after the 18th-century English statistician Thomas Bayes, is a powerful tool that lets us reverse conditional probabilities when direct computation is difficult or impossible. It plays a central role in statistical inference, allowing us to update our belief in a hypothesized cause after observing an effect.
The formula states that
$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)} = \frac{P(B \mid A)\,P(A)}{P(B \mid A)\,P(A) + P(B \mid A^c)\,P(A^c)},$$
where $A$ is a hypothesis and $B$ is observed evidence.
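The formula, expanded with the law of total probability in the denominator, is a one-line computation. A minimal sketch (the probabilities in the example call are made-up values, not from the text):

```python
def posterior(p_b_given_a: float, p_a: float, p_b_given_not_a: float) -> float:
    """Compute P(A|B) via Bayes' rule.

    The denominator P(B) is expanded by the law of total probability:
    P(B) = P(B|A)P(A) + P(B|A^c)P(A^c), with P(A^c) = 1 - P(A).
    """
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Hypothetical numbers: P(A) = 0.5, P(B|A) = 0.8, P(B|A^c) = 0.2
print(posterior(0.8, 0.5, 0.2))  # prints 0.8
```

With a uniform prior ($P(A) = 0.5$), the posterior equals $0.4 / 0.5 = 0.8$, i.e. the evidence shifts belief strongly toward $A$.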
For example, in email filtering, if \( A \) is "message is spam" and \( B \) is "message contains the word 'damn'," Bayes' rule helps estimate the probability that the message is spam given that the word 'damn' is in the message.
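Plugging illustrative numbers into the spam example makes the update concrete. All three rates below are hypothetical, chosen only to show the mechanics:

```python
p_spam = 0.4          # hypothetical base rate: P(spam)
p_word_spam = 0.05    # hypothetical P("damn" in message | spam)
p_word_ham = 0.005    # hypothetical P("damn" in message | not spam)

# Bayes' rule: P(spam | word) = P(word | spam) P(spam) / P(word)
p_word = p_word_spam * p_spam + p_word_ham * (1 - p_spam)
p_spam_given_word = p_word_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # prints 0.87
```

Even though the word appears in only 5% of spam, its relative rarity in legitimate mail pushes the posterior probability of spam from 40% up to roughly 87%.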
Another application is medical testing, where the base rate of a disease, together with the test's true positive and false positive rates, determines the actual chance of illness after a positive result. More broadly, Bayes' theorem bridges observed data and theoretical probabilities, making it foundational to machine learning, decision theory, and artificial intelligence.
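The medical-testing computation illustrates why a positive result from an accurate test can still mean the disease is unlikely. The rates below are hypothetical, picked to show the base-rate effect:

```python
prevalence = 0.01     # hypothetical base rate: P(disease)
sensitivity = 0.99    # hypothetical true positive rate: P(+ | disease)
false_pos = 0.05      # hypothetical false positive rate: P(+ | no disease)

# P(disease | +) = P(+ | disease) P(disease) / P(+)
p_positive = sensitivity * prevalence + false_pos * (1 - prevalence)
p_disease_given_pos = sensitivity * prevalence / p_positive
print(round(p_disease_given_pos, 3))  # prints 0.167
```

Despite 99% sensitivity, the posterior is only about 17%: with a 1% base rate, most positives come from the much larger healthy population triggering false alarms.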