Bayes' theorem

From Wikipedia, the free encyclopedia

A blue neon sign at the Autonomy Corporation, showing the simple statement of Bayes' theorem

In probability theory and statistics, Bayes' theorem (alternatively Bayes' law) is an important result for the mathematical manipulation of conditional probabilities. It derives from the more basic axioms of probability.

When applied, the probabilities involved in Bayes' theorem may have any of a number of probability interpretations. In one of these interpretations, the theorem is used directly as part of a particular approach to statistical inference. In particular, with the Bayesian interpretation of probability, the theorem expresses how a subjective degree of belief should rationally change to account for evidence: this is Bayesian inference, which is fundamental to Bayesian statistics. However, Bayes' theorem has applications in a wide range of calculations involving probabilities, not just in Bayesian inference.

Bayes' theorem is named after Thomas Bayes (/ˈbeɪz/; 1701–1761), who first suggested using the theorem to update beliefs. His work was significantly edited and updated by Richard Price before it was posthumously read at the Royal Society. The ideas gained limited exposure until they were independently rediscovered and further developed by Laplace, who first published the modern formulation in his 1812 Théorie analytique des probabilités.

Sir Harold Jeffreys wrote that Bayes' theorem “is to the theory of probability what Pythagoras's theorem is to geometry”.[1]

Introductory example

Suppose someone told you they had a nice conversation with someone on the train. Not knowing anything else about this conversation, the probability that they were speaking to a woman is 50%. Now suppose they also told you that this person had long hair. It is now more likely they were speaking to a woman, since women are more likely to have long hair than men. Bayes' theorem can be used to calculate the probability that the person is a woman.

To see how this is done, let W represent the event that the conversation was held with a woman, and L denote the event that the conversation was held with a long-haired person. It can be assumed that women constitute half the population for this example. So, not knowing anything else, the probability that W occurs is P(W) = 0.5.

Suppose it is also known that 75% of women have long hair, which we denote as P(L|W) = 0.75 (read: the probability of event L given event W is 0.75). Likewise, suppose it is known that 25% of men have long hair, or P(L|M) = 0.25, where M is the complementary event of W, i.e., the event that the conversation was held with a man (assuming that every human is either a man or a woman).

Our goal is to calculate the probability that the conversation was held with a woman, given the fact that the person had long hair, or, in our notation, P(W|L). Using the formula for Bayes' theorem, we have:

$$P(W|L) = \frac{P(L|W)\,P(W)}{P(L)} = \frac{P(L|W)\,P(W)}{P(L|W)\,P(W) + P(L|M)\,P(M)},$$

where we have used the law of total probability. The numeric answer can be obtained by substituting the above values into this formula. This yields

$$P(W|L) = \frac{0.75 \times 0.50}{0.75 \times 0.50 + 0.25 \times 0.50} = 0.75,$$

i.e., the probability that the conversation was held with a woman, given that the person had long hair, is 75%.

Another way to do this calculation is as follows. Initially, it is equally likely that the conversation is held with a woman as with a man, so the prior odds on a woman versus a man are 1:1. The respective chances that a woman and a man have long hair are 75% and 25%, so it is three times more likely that a woman has long hair than that a man does. We say that the likelihood ratio or Bayes factor is 3:1. Bayes' theorem in odds form, also known as Bayes' rule, tells us that the posterior odds that the person was a woman are also 3:1 (the prior odds, 1:1, times the likelihood ratio, 3:1). In a formula:

$$O(W{:}M \mid L) = O(W{:}M) \times \frac{P(L|W)}{P(L|M)} = 1 \times \frac{0.75}{0.25} = 3.$$
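To check the arithmetic, here is a minimal Python sketch of both calculations (the variable names are ours, not from the article):

```python
# Prior probabilities: women are assumed to be half the population.
p_w = 0.5           # P(W)
p_m = 1 - p_w       # P(M)

# Likelihoods: proportion of each group with long hair.
p_l_given_w = 0.75  # P(L|W)
p_l_given_m = 0.25  # P(L|M)

# Law of total probability gives P(L).
p_l = p_l_given_w * p_w + p_l_given_m * p_m

# Bayes' theorem: P(W|L) = P(L|W) P(W) / P(L).
print(p_l_given_w * p_w / p_l)  # 0.75

# Equivalent odds-form calculation: posterior odds = prior odds x Bayes factor.
posterior_odds = (p_w / p_m) * (p_l_given_w / p_l_given_m)
print(posterior_odds)                         # 3.0, i.e. odds of 3:1
print(posterior_odds / (1 + posterior_odds))  # 0.75 again
```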

Statement and interpretation

Mathematically, Bayes' theorem gives the relationship between the probabilities of A and B, P(A) and P(B), and the conditional probabilities of A given B and B given A, P(A|B) and P(B|A). In its most common form, it is:

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.$$

The meaning of this statement depends on the interpretation of probability ascribed to the terms:

Bayesian interpretation

In the Bayesian (or epistemological) interpretation, probability measures a degree of belief. Bayes' theorem then links the degree of belief in a proposition before and after accounting for evidence. For example, suppose somebody proposes that a biased coin is twice as likely to land heads than tails. Degree of belief in this might initially be 50%. The coin is then flipped a number of times to collect evidence. Belief may rise to 70% if the evidence supports the proposition.
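A worked sketch of such an update, under assumptions we add for illustration (the only alternative considered is a fair coin, and 7 heads are observed in 10 flips):

```python
from math import comb

# Proposition A: the coin is biased, landing heads with probability 2/3
# (twice as likely heads as tails). The alternative, our added assumption,
# is a fair coin with heads probability 1/2.
p_a = 0.5  # initial degree of belief in A

# Evidence B: 7 heads observed in 10 flips (binomial likelihoods).
heads, flips = 7, 10
p_b_given_a = comb(flips, heads) * (2 / 3) ** heads * (1 / 3) ** (flips - heads)
p_b_given_not_a = comb(flips, heads) * 0.5 ** flips

# Bayes' theorem: P(A|B) = P(B|A) P(A) / P(B).
p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
print(round(p_b_given_a * p_a / p_b, 2))  # 0.69 -- belief rises to about 70%
```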

For proposition A and evidence B,

  • P(A), the prior, is the initial degree of belief in A.
  • P(A|B), the posterior, is the degree of belief having accounted for B.
  • The quotient P(B|A)/P(B) represents the support B provides for A.

For more on the application of Bayes' theorem under the Bayesian interpretation of probability, see Bayesian inference.

Frequentist interpretation

Illustration of frequentist interpretation with tree diagrams. Bayes' theorem connects conditional probabilities to their inverses.

In the frequentist interpretation, probability measures a proportion of outcomes. For example, suppose an experiment is performed many times. P(A) is the proportion of outcomes with property A, and P(B) that with property B. P(B|A) is the proportion of outcomes with property B out of outcomes with property A, and P(A|B) the proportion of those with A out of those with B.

The role of Bayes' theorem is best visualized with tree diagrams, as shown to the right. The two diagrams partition the same outcomes by A and B in opposite orders, to obtain the inverse probabilities. Bayes' theorem serves as the link between these different partitionings.
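The link between the two partitionings can be checked numerically; the joint proportions below are illustrative values we chose, not from the article:

```python
# Illustrative joint proportions of outcomes over properties A and B.
p = {("A", "B"): 0.12, ("A", "not B"): 0.28,
     ("not A", "B"): 0.18, ("not A", "not B"): 0.42}

p_a = p[("A", "B")] + p[("A", "not B")]  # first tree partitions by A
p_b = p[("A", "B")] + p[("not A", "B")]  # second tree partitions by B

p_b_given_a = p[("A", "B")] / p_a  # branch P(B|A) of the first tree
p_a_given_b = p[("A", "B")] / p_b  # branch P(A|B) of the second tree

# Bayes' theorem links the two partitionings.
assert abs(p_a_given_b - p_b_given_a * p_a / p_b) < 1e-12
print(p_b_given_a, p_a_given_b)  # 0.3 0.4
```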

Forms

Events

Simple form

For events A and B, provided that P(B) ≠ 0,

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.$$

In a Bayesian inference step, the probability of evidence B is constant for all models A_n. The posterior may then be expressed as proportional to the numerator:

$$P(A_n|B) \propto P(A_n)\,P(B|A_n).$$

Extended form

Often, for some partition {A_j} of the event space, the event space is given or conceptualized in terms of P(A_j) and P(B|A_j). It is then useful to eliminate P(B) using the law of total probability:

$$P(B) = \sum_j P(B|A_j)\,P(A_j)$$

$$\Rightarrow\ P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum_j P(B|A_j)\,P(A_j)}.$$

In the special case of a binary partition,

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B|A)\,P(A) + P(B|\neg A)\,P(\neg A)}.$$
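The extended form translates directly into a small helper function; this is a minimal sketch, and the function and argument names are ours:

```python
def bayes_extended(priors, likelihoods, i):
    """Posterior P(A_i|B) from priors P(A_j) and likelihoods P(B|A_j)
    over a partition {A_j}; P(B) comes from the law of total probability."""
    p_b = sum(p * l for p, l in zip(priors, likelihoods))
    return likelihoods[i] * priors[i] / p_b

# Binary partition: recovers the introductory example, P(W|L) = 0.75.
print(bayes_extended([0.5, 0.5], [0.75, 0.25], 0))  # 0.75
```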

Random variables

Diagram illustrating the meaning of Bayes' theorem as applied to an event space generated by continuous random variables X and Y. Note that there exists an instance of Bayes' theorem for each point in the domain. In practice, these instances might be parametrized by writing the specified probability densities as a function of x and y.

Consider a sample space Ω generated by two random variables X and Y. In principle, Bayes' theorem applies to the events A = {X = x} and B = {Y = y}. However, where either variable has a finite probability density, the probability of any single point is zero, and the terms of the theorem vanish. To remain useful, Bayes' theorem may be formulated in terms of the relevant densities (see Derivation).

Simple form

If X is continuous and Y is discrete,

$$f_X(x|Y=y) = \frac{P(Y=y|X=x)\,f_X(x)}{P(Y=y)}.$$

If X is discrete and Y is continuous,

$$P(X=x|Y=y) = \frac{f_Y(y|X=x)\,P(X=x)}{f_Y(y)}.$$

If both X and Y are continuous,

$$f_X(x|Y=y) = \frac{f_Y(y|X=x)\,f_X(x)}{f_Y(y)}.$$
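For instance, the discrete-X, continuous-Y form can be evaluated numerically. This is a hypothetical sketch: the two-source setup and the Gaussian densities are our assumptions, not from the article:

```python
from math import exp, pi, sqrt

def normal_pdf(y, mean, sd):
    """Density of a normal distribution at y."""
    return exp(-((y - mean) / sd) ** 2 / 2) / (sd * sqrt(2 * pi))

# Discrete X: which of two hypothetical sources produced a reading.
priors = {"source 1": 0.5, "source 2": 0.5}  # P(X = x)
means = {"source 1": 0.0, "source 2": 1.0}   # each source has unit-sd noise

y = 0.8  # observed value of the continuous variable Y

# P(X=x|Y=y) = f_Y(y|X=x) P(X=x) / f_Y(y), where the denominator
# sums (rather than integrates) because X is discrete.
f_y = sum(normal_pdf(y, means[x], 1.0) * priors[x] for x in priors)
posterior = {x: normal_pdf(y, means[x], 1.0) * priors[x] / f_y for x in priors}
print(posterior)  # source 2 is now more probable (about 0.57)
```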

Extended form

Diagram illustrating how an event space generated by continuous random variables X and Y is often conceptualized.

A continuous event space is often conceptualized in terms of the numerator terms. It is then useful to eliminate the denominator using the law of total probability. For f_Y(y), this becomes an integral:

$$f_Y(y) = \int_{-\infty}^{\infty} f_Y(y|X=\xi)\,f_X(\xi)\,d\xi.$$

Bayes' rule

Bayes' rule is Bayes' theorem in odds form:

$$O(A_1{:}A_2 \mid B) = O(A_1{:}A_2) \cdot \Lambda(A_1{:}A_2 \mid B),$$

where

$$\Lambda(A_1{:}A_2 \mid B) = \frac{P(B|A_1)}{P(B|A_2)}$$

is called the Bayes factor or likelihood ratio, and the odds between two events are simply the ratio of the probabilities of the two events. Thus

$$O(A_1{:}A_2) = \frac{P(A_1)}{P(A_2)},$$

$$O(A_1{:}A_2 \mid B) = \frac{P(A_1|B)}{P(A_2|B)}.$$

So the rule says that the posterior odds are the prior odds times the Bayes factor, or in other words, posterior is proportional to prior times likelihood.
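In code, the odds-form update is a one-line rule; this minimal sketch (names ours) reproduces the introductory train example:

```python
def posterior_odds(prior_odds, p_b_given_a1, p_b_given_a2):
    """Bayes' rule: posterior odds = prior odds x Bayes factor."""
    return prior_odds * (p_b_given_a1 / p_b_given_a2)

# Train-conversation example: 1:1 prior odds, likelihoods 0.75 and 0.25.
print(posterior_odds(1.0, 0.75, 0.25))  # 3.0, i.e. posterior odds of 3:1
```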

Derivation

For events

Bayes' theorem may be derived from the definition of conditional probability:

$$P(A|B) = \frac{P(A \cap B)}{P(B)}, \quad \text{if } P(B) \neq 0,$$

$$P(B|A) = \frac{P(A \cap B)}{P(A)}, \quad \text{if } P(A) \neq 0.$$

Equating the two expressions for P(A ∩ B) gives

$$P(A|B)\,P(B) = P(B|A)\,P(A)$$

$$\Rightarrow\ P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}.$$

For random variables

For two continuous random variables X and Y, Bayes' theorem may be analogously derived from the definition of conditional density:

$$f_X(x|Y=y) = \frac{f_{X,Y}(x,y)}{f_Y(y)}, \qquad f_Y(y|X=x) = \frac{f_{X,Y}(x,y)}{f_X(x)}$$

$$\Rightarrow\ f_X(x|Y=y) = \frac{f_Y(y|X=x)\,f_X(x)}{f_Y(y)}.$$

Examples

Frequentist example

Tree diagram illustrating frequentist example. R, C, P and P bar are the events representing rare, common, pattern and no pattern. Percentages in parentheses are calculated. Note that three independent values are given, so it is possible to calculate the inverse tree (see figure above).

An entomologist spots what might be a rare subspecies of beetle, due to the pattern on its back. In the rare subspecies, 98% have the pattern. In the common subspecies, 5% have the pattern. The rare subspecies accounts for only 0.1% of the population. How likely is the beetle to be rare?

From the extended form of Bayes' theorem,

$$P(R|P) = \frac{P(P|R)\,P(R)}{P(P|R)\,P(R) + P(P|C)\,P(C)} = \frac{0.98 \times 0.001}{0.98 \times 0.001 + 0.05 \times 0.999} \approx 1.9\%.$$
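The same computation in Python, as a check on the figure above:

```python
# P(R) = 0.001 (rare), P(P|R) = 0.98, P(P|C) = 0.05, as given above.
p_rare, p_common = 0.001, 0.999
p_pattern_rare, p_pattern_common = 0.98, 0.05

p_pattern = p_pattern_rare * p_rare + p_pattern_common * p_common
print(round(p_pattern_rare * p_rare / p_pattern, 3))  # 0.019, about 1.9%
```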

Drug testing

Tree diagram illustrating drug testing example. U, U bar, "+" and "−" are the events representing user, non-user, positive result and negative result. Percentages in parentheses are calculated.

Suppose a drug test is 99% sensitive and 99% specific. That is, the test will produce 99% true positive results for drug users and 99% true negative results for non-drug users. Suppose that 0.5% of people are users of the drug. If a randomly selected individual tests positive, what is the probability he or she is a user? From the extended form of Bayes' theorem,

$$P(\text{User}|{+}) = \frac{P({+}|\text{User})\,P(\text{User})}{P({+}|\text{User})\,P(\text{User}) + P({+}|\text{Non-user})\,P(\text{Non-user})} = \frac{0.99 \times 0.005}{0.99 \times 0.005 + 0.01 \times 0.995} \approx 33.2\%.$$

Despite the apparent accuracy of the test, if an individual tests positive, it is more likely that they do not use the drug than that they do.

This surprising result arises because the number of non-users is very large compared to the number of users, such that the number of false positives (0.995%) outweighs the number of true positives (0.495%). To use concrete numbers, if 1000 individuals are tested, there are expected to be 995 non-users and 5 users. From the 995 non-users, 0.01 × 995 ≃ 10 false positives are expected. From the 5 users, 0.99 × 5 ≃ 5 true positives are expected. Out of 15 positive results, only 5, about 33%, are genuine.
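Both the direct formula and the concrete-numbers check can be reproduced in a short Python sketch:

```python
p_user = 0.005                         # prevalence of drug use
sensitivity, specificity = 0.99, 0.99  # test characteristics

# Extended form of Bayes' theorem.
p_pos = sensitivity * p_user + (1 - specificity) * (1 - p_user)
print(round(sensitivity * p_user / p_pos, 3))  # 0.332 -- only about 33%

# Concrete numbers for 1000 tested individuals.
true_pos = sensitivity * (1000 * p_user)               # ~5 true positives
false_pos = (1 - specificity) * (1000 * (1 - p_user))  # ~10 false positives
print(true_pos / (true_pos + false_pos))               # ~0.332 again
```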

History

Bayes' theorem was named after the Reverend Thomas Bayes (1701–61), who studied how to compute a distribution for the probability parameter of a binomial distribution (in modern terminology). His friend Richard Price edited and presented this work in 1763, after Bayes' death, as An Essay towards solving a Problem in the Doctrine of Chances.[2] The French mathematician Pierre-Simon Laplace reproduced and extended Bayes' results in 1774, apparently quite unaware of Bayes' work.[3] Stephen Stigler suggested in 1983 that Bayes' theorem was discovered by Nicholas Saunderson some time before Bayes.[4] However, this interpretation has been disputed.[5]

Notes

  1. ^ Jeffreys, Harold (1973), Scientific Inference (3rd ed.), Cambridge University Press, p. 31, ISBN 978-0-521-18078-8
  2. ^ Bayes, Thomas; Price, Richard (1763). "An Essay towards solving a Problem in the Doctrine of Chance. By the late Rev. Mr. Bayes, communicated by Mr. Price, in a letter to John Canton, M. A. and F. R. S." (PDF). Philosophical Transactions of the Royal Society of London 53: 370–418. doi:10.1098/rstl.1763.0053.
  3. ^ Daston, Lorraine (1988). Classical Probability in the Enlightenment. Princeton Univ Press. p. 268. ISBN 0-691-08497-1.
  4. ^ Stigler, Stephen M. (1983), "Who Discovered Bayes' Theorem?" The American Statistician 37(4):290–296.
  5. ^ Edwards, A. W. F. (1986), "Is the Reference in Hartley (1749) to Bayesian Inference?", The American Statistician 40(2):109–110. doi:10.1080/00031305.1986.10475370.

Further reading

  • Pierre-Simon Laplace (1774/1986), "Memoir on the Probability of the Causes of Events", Statistical Science 1(3):364–378.
  • Stephen M. Stigler (1986), "Laplace's 1774 Memoir on Inverse Probability", Statistical Science 1(3):359–363.