pingouin.bayesfactor_binom

pingouin.bayesfactor_binom(k, n, p=0.5)[source]

Bayes factor of a binomial test with \(k\) successes, \(n\) trials and base probability \(p\).

Parameters
k : int

Number of successes.

n : int

Number of trials.

p : float

Base probability of success (range from 0 to 1).

Returns
bf10 : float

The Bayes Factor quantifies the evidence in favour of the alternative hypothesis, where the null hypothesis is that the random variable is binomially distributed with base probability \(p\).

See also

bayesfactor_pearson

Bayes Factor of a correlation

bayesfactor_ttest

Bayes Factor of a T-test

Notes

Adapted from Matlab code available at https://github.com/anne-urai/Tools/blob/master/stats/BayesFactors/binombf.m

The Bayes Factor is given by the formula below:

\[BF_{10} = \frac{\int_0^1 \binom{n}{k}g^k(1-g)^{n-k} \,dg}{\binom{n}{k} p^k (1-p)^{n-k}}\]
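
For illustration, this formula can be checked by direct numerical integration: the numerator is the binomial likelihood averaged over a uniform prior on \(g\), and the denominator is the likelihood under the point null \(g = p\). The sketch below is not Pingouin's source code, and the helper name binom_bf10_sketch is made up for this example:

>>> from scipy.integrate import quad
>>> from scipy.stats import binom
>>> def binom_bf10_sketch(k, n, p=0.5):
...     # Numerator: marginal likelihood under a uniform (Beta(1, 1)) prior on g;
...     # this integral has the closed form 1 / (n + 1).
...     num, _ = quad(lambda g: binom.pmf(k, n, g), 0, 1)
...     # Denominator: likelihood of the data under the point null g = p
...     den = binom.pmf(k, n, p)
...     return num / den
>>> round(binom_bf10_sketch(115, 200, 0.5), 3)
0.835

This matches the BF-alt value reported in the first example below.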

References

[1] http://pcl.missouri.edu/bf-binomial

[2] https://en.wikipedia.org/wiki/Bayes_factor

Examples

We want to determine if a coin is fair. After tossing the coin 200 times in a row, we report 115 heads (hereafter referred to as “successes”) and 85 tails (“failures”). The Bayes Factor can be easily computed using Pingouin:

>>> import pingouin as pg
>>> bf = float(pg.bayesfactor_binom(k=115, n=200, p=0.5))
>>> # Note that Pingouin returns the BF-alt by default, formatted as a str.
>>> # BF-null is simply 1 / BF-alt
>>> print("BF-null: %.3f, BF-alt: %.3f" % (1 / bf, bf))
BF-null: 1.198, BF-alt: 0.835

Since the Bayes Factor of the null hypothesis (“the coin is fair”) is higher than the Bayes Factor of the alternative hypothesis (“the coin is not fair”), we can conclude that the data provide slightly more support for a fair coin. However, the strength of the evidence in favor of the null hypothesis (1.198) is “barely worth mentioning” according to Jeffreys’s rule of thumb.
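
For convenience, Jeffreys’s rule of thumb can be written as a small lookup. The jeffreys_label helper below is purely illustrative and not part of Pingouin; the cut-offs (3, 10, 30, 100) are the commonly cited Jeffreys thresholds:

>>> def jeffreys_label(bf):
...     # Express the evidence in favour of whichever hypothesis the BF supports
...     bf = bf if bf >= 1 else 1 / bf
...     for upper, label in [(3, "barely worth mentioning"), (10, "substantial"),
...                          (30, "strong"), (100, "very strong")]:
...         if bf < upper:
...             return label
...     return "decisive"
>>> jeffreys_label(1 / 0.835)
'barely worth mentioning'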

Interestingly, a frequentist version of this test would lead to a different conclusion. It can be performed using the scipy.stats.binom_test() function:

>>> from scipy.stats import binom_test
>>> pval = binom_test(115, 200, p=0.5)
>>> round(pval, 5)
0.04004

The binomial test rejects the null hypothesis that the coin is fair at the 5% significance level (p=0.04). Thus, whereas the frequentist test yields a significant result, the Bayes Factor does not find any evidence that the coin is unfair.
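
Note that in recent SciPy releases scipy.stats.binom_test() is deprecated in favour of scipy.stats.binomtest(), which returns a result object rather than a float. The equivalent call would be:

>>> from scipy.stats import binomtest
>>> res = binomtest(115, 200, p=0.5)
>>> round(res.pvalue, 5)
0.04004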

A final example, using a different base probability of success:

>>> bf = pg.bayesfactor_binom(k=100, n=1000, p=0.1)
>>> print("Bayes Factor: %s" % bf)
Bayes Factor: 0.024