pingouin.bonf

pingouin.bonf(pvals, alpha=0.05)

P-value correction with the Bonferroni method.

Parameters
pvals : array_like

Array of p-values of the individual tests.

alpha : float

Error rate (= alpha level).

Returns
reject : array, bool

True if the hypothesis is rejected, False otherwise.

pval_corrected : array

P-values adjusted for multiple hypothesis testing using the Bonferroni procedure (= multiplied by the number of tests).

See also

holm

Holm-Bonferroni correction

fdr

Benjamini/Hochberg and Benjamini/Yekutieli FDR correction

Notes

From Wikipedia:

Statistical hypothesis testing is based on rejecting the null hypothesis if the likelihood of the observed data under the null hypothesis is low. If multiple hypotheses are tested, the chance of observing a rare event increases, and therefore, the likelihood of incorrectly rejecting a null hypothesis (i.e., making a Type I error) increases. The Bonferroni correction compensates for that increase by testing each individual hypothesis at a significance level of \(\alpha / n\), where \(\alpha\) is the desired overall alpha level and \(n\) is the number of hypotheses. For example, if a trial is testing \(n=20\) hypotheses with a desired \(\alpha=0.05\), then the Bonferroni correction would test each individual hypothesis at \(\alpha=0.05/20=0.0025\).
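The per-test threshold in the example above can be checked directly. This is a standalone NumPy sketch of that arithmetic, not pingouin's implementation; the strict inequality is an assumption here:

```python
import numpy as np

alpha, n = 0.05, 20
threshold = alpha / n  # 0.05 / 20 = 0.0025, as in the example above

# Each individual p-value is compared against the reduced threshold.
pvals = np.array([0.001, 0.004, 0.0025])
reject = pvals < threshold  # strict inequality assumed for illustration
# reject -> [ True False False]
```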

The Bonferroni adjusted p-values are defined as:

\[\widetilde{p}_{(i)} = n \cdot p_{(i)}\]

The Bonferroni correction is conservative: it controls the family-wise error rate, but at the cost of statistical power, especially when the number of tests is large or the tests are correlated.
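The adjustment formula above can be sketched in a few lines of NumPy. Clipping the adjusted values at 1 is an assumption here, made so the results stay valid probabilities (it reproduces the values in the Examples section below):

```python
import numpy as np

pvals = np.array([0.50, 0.003, 0.32, 0.054, 0.0003])
n = pvals.size

# Adjusted p-values: multiply by the number of tests, clip at 1.
pvals_corr = np.clip(pvals * n, None, 1)
# pvals_corr -> [1.     0.015  1.     0.27   0.0015]

# Rejection decision at the overall alpha level.
reject = pvals_corr < 0.05
# reject -> [False  True False False  True]
```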

Note that NaN values are not taken into account in the p-values correction.
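One plausible reading of the NaN note above, sketched here as an assumption rather than pingouin's actual implementation: NaNs are excluded when counting the number of tests, and remain NaN in the corrected output.

```python
import numpy as np

pvals = np.array([0.01, np.nan, 0.02])

# Count only the non-NaN p-values as tests (assumption).
n = np.count_nonzero(~np.isnan(pvals))  # 2, not 3

# NaN entries propagate through the correction and stay NaN.
pvals_corr = np.clip(pvals * n, None, 1)
# pvals_corr -> [0.02   nan  0.04]
```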

Examples

>>> from pingouin import bonf
>>> pvals = [.50, .003, .32, .054, .0003]
>>> reject, pvals_corr = bonf(pvals, alpha=.05)
>>> print(reject, pvals_corr)
[False  True False False  True] [1.     0.015  1.     0.27   0.0015]