pairwise_ttests(data=None, dv=None, between=None, within=None, subject=None, parametric=True, marginal=True, alpha=0.05, tail='two-sided', padjust='none', effsize='hedges', correction='auto', nan_policy='listwise', return_desc=False, interaction=True, within_first=True)
- data : pandas.DataFrame
Note that this function can also directly be used as a Pandas method, in which case this argument is no longer needed.
- dv : string
Name of column containing the dependent variable.
- between : string or list with 2 elements
Name of column(s) containing the between-subject factor(s).
Note that Pingouin gives slightly different T and p-values compared to JASP posthoc tests for a two-way factorial design, because Pingouin does not pool the standard error for each factor, but rather calculates each pairwise T-test completely independently of the others.
- within : string or list with 2 elements
Name of column(s) containing the within-subject factor(s), i.e. the repeated measurements.
- subject : string
Name of column containing the subject identifier. This is mandatory when within is specified.
- marginal : boolean
If True, average over the repeated measures factor when working with a mixed or two-way repeated measures design. For instance, in a mixed design, the between-subject pairwise T-test(s) will be calculated after averaging across all levels of the within-subject repeated measures factor (the so-called “marginal means”).
Similarly, in a two-way repeated measures design, the pairwise T-test(s) will be calculated after averaging across all levels of the other repeated measures factor.
marginal=True is recommended when doing posthoc testing with multiple factors in order to avoid violating the assumption of independence and inflating the degrees of freedom by the number of repeated measurements. This is the default behavior of JASP.
The default behavior of Pingouin versions below 0.3.2 was marginal=False, which may have led to incorrect p-values for mixed or two-way repeated measures designs. Make sure to always use the latest version of Pingouin.
New in version 0.3.2.
- tail : string
Specify whether the alternative hypothesis is ‘two-sided’ or ‘one-sided’. Can also be ‘greater’ or ‘less’ to specify the direction of the test. ‘greater’ tests the alternative that x has a larger mean than y. If tail is ‘one-sided’, Pingouin will automatically infer the one-sided alternative hypothesis of the test based on the test statistic. (A combined usage sketch is shown right after this parameter list.)
- padjust : string
Method used for testing and adjustment of p-values.
'none': no correction
'bonf': one-step Bonferroni correction
'sidak': one-step Sidak correction
'holm': step-down method using Bonferroni adjustments
'fdr_bh': Benjamini/Hochberg FDR correction
'fdr_by': Benjamini/Yekutieli FDR correction
- effsize : string or None
Effect size type. Available methods are:
'none': no effect size
'cohen': Unbiased Cohen d
'hedges': Hedges g
'glass': Glass delta
'r': Pearson correlation coefficient
'odds-ratio': Odds ratio
'AUC': Area Under the Curve
'CLES': Common Language Effect Size
- correction : string or boolean
For unpaired two-sample T-tests, specify whether or not to correct for unequal variances using the Welch separate-variances T-test. If ‘auto’, it will automatically use the Welch T-test when the sample sizes are unequal, as recommended by Zimmerman (2004).
New in version 0.3.2.
- nan_policy : string
Can be ‘listwise’ for listwise deletion of missing values in repeated measures design (= complete-case analysis) or ‘pairwise’ for the more liberal pairwise deletion (= available-case analysis).
New in version 0.2.9.
- return_desc : boolean
If True, append group means and std to the output dataframe.
- interaction : boolean
If there are multiple factors and interaction is True (default), Pingouin will also calculate T-tests for the interaction term (see Notes).
New in version 0.2.9.
- within_first : boolean
Determines the order of the interaction in mixed design. Pingouin will return within * between when this parameter is set to True (default), and between * within otherwise.
New in version 0.3.6.
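As an illustration of how the testing options above combine, here is a minimal sketch (reusing the 'mixed_anova' dataset and column names from the Examples section below; the output is omitted) that requests one-sided tests, Welch's correction for unequal variances, Holm-adjusted p-values and pairwise deletion of missing values:

>>> import pingouin as pg
>>> df = pg.read_dataset('mixed_anova.csv')
>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                    between='Group', data=df, tail='one-sided',
...                    correction=True, padjust='holm',
...                    nan_policy='pairwise')

This particular combination of arguments is chosen purely for illustration; each option can also be used independently.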
The function returns a pandas DataFrame with the following columns:
'Contrast': Contrast (= independent variable or interaction)
'A': Name of first measurement
'B': Name of second measurement
'Paired': indicates whether the two measurements are paired or independent
'Parametric': indicates if (non)-parametric tests were used
'Tail': indicates whether the p-values are one-sided or two-sided
'T': T statistic (only if parametric=True)
'U-val': Mann-Whitney U stat (if parametric=False and unpaired data)
'W-val': Wilcoxon W stat (if parametric=False and paired data)
'dof': degrees of freedom (only if parametric=True)
'p-unc': Uncorrected p-values
'p-corr': Corrected p-values
'p-adjust': p-values correction method
'BF10': Bayes Factor
'hedges': effect size (or any effect size defined in effsize)
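As a small sketch of working with this output (assuming a correction method was requested so that the 'p-corr' column is present, and that the default effsize='hedges' was kept), the returned DataFrame can be filtered with standard pandas indexing:

>>> stats = pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                            between='Group', padjust='holm', data=df)
>>> significant = stats[stats['p-corr'] < 0.05]  # keep contrasts significant after correction
>>> significant[['Contrast', 'A', 'B', 'p-corr', 'hedges']]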
Data are expected to be in long-format. If your data is in wide-format, you can use the pandas.melt() function to convert from wide to long format.
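For instance, a minimal conversion sketch with made-up, purely hypothetical wide-format column names (one column per time point) could look like this:

>>> import pandas as pd
>>> # Hypothetical wide-format data: one row per subject, one column per time point.
>>> wide = pd.DataFrame({'Subject': [1, 2, 3, 4],
...                      'August': [5.5, 6.1, 5.9, 6.3],
...                      'January': [6.0, 6.4, 6.2, 6.6],
...                      'June': [6.3, 6.8, 6.5, 7.0]})
>>> long = pd.melt(wide, id_vars='Subject', var_name='Time', value_name='Scores')
>>> # `long` can now be passed as the `data` argument of pairwise_ttests().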
If between or within is a list (e.g. [‘col1’, ‘col2’]), the function returns 1) the pairwise T-tests between each value of the first column, 2) the pairwise T-tests between each value of the second column and 3) the interaction between col1 and col2. The interaction depends on the order of the list, so [‘col1’, ‘col2’] will not yield the same results as [‘col2’, ‘col1’], and it will only be calculated if interaction=True.
In other words, if between is a list with two elements, the output model is between1 + between2 + between1 * between2. Similarly, if within is a list with two elements, the output model is within1 + within2 + within1 * within2. If both between and within are specified, the output model is within + between + within * between (= mixed design), unless within_first=False, in which case the model becomes between + within + between * within.
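A short sketch of toggling this option on the mixed design from the Examples section (output omitted); the comments restate the output models described above:

>>> # Default (within_first=True): output model is within + between + within * between.
>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                    between='Group', data=df)
>>> # within_first=False: output model becomes between + within + between * within.
>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                    between='Group', data=df, within_first=False)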
Missing values in repeated measurements are automatically removed using a listwise (default) or pairwise deletion strategy. However, you should be very careful since this can result in the undesired removal of values (especially for the interaction effect). We strongly recommend that you preprocess your data and remove the missing values before using this function.
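For example, a minimal preprocessing sketch (reusing the column names from the Examples section) that keeps only subjects with complete data before running the tests:

>>> # Keep only subjects that have no missing score across the repeated measurements.
>>> complete = df.groupby('Subject').filter(lambda g: g['Scores'].notna().all())
>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject', data=complete)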
This function has been tested against the pairwise.t.test R function.
Versions of Pingouin below 0.3.2 gave incorrect results for mixed and two-way repeated measures designs (see the warning above for the marginal argument).
Pingouin gives slightly different results than JASP's posthoc module when working with multiple factors (e.g. mixed, factorial or two-way repeated measures designs). This is mostly because Pingouin does not pool the standard error for between-subject and interaction contrasts. You should always double-check your results with JASP or another statistical software.
For more examples, please refer to the Jupyter notebooks.
One between-subject factor
>>> import pandas as pd
>>> import pingouin as pg
>>> df = pg.read_dataset('mixed_anova.csv')
>>> pg.pairwise_ttests(dv='Scores', between='Group', data=df).round(3)
  Contrast        A           B  Paired  Parametric     T    dof       Tail  p-unc   BF10  hedges
0    Group  Control  Meditation   False        True -2.29  178.0  two-sided  0.023  1.813   -0.34
One within-subject factor
>>> post_hocs = pg.pairwise_ttests(dv='Scores', within='Time',
...                                subject='Subject', data=df)
>>> post_hocs.round(3)
  Contrast        A        B  Paired  Parametric      T   dof       Tail  p-unc   BF10  hedges
0     Time   August  January    True        True -1.740  59.0  two-sided  0.087  0.582  -0.328
1     Time   August     June    True        True -2.743  59.0  two-sided  0.008  4.232  -0.483
2     Time  January     June    True        True -1.024  59.0  two-sided  0.310  0.232  -0.170
Non-parametric pairwise paired test (Wilcoxon)
>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                    data=df, parametric=False).round(3)
  Contrast        A        B  Paired  Parametric  W-val       Tail  p-unc  hedges
0     Time   August  January    True       False  716.0  two-sided  0.144  -0.328
1     Time   August     June    True       False  564.0  two-sided  0.010  -0.483
2     Time  January     June    True       False  887.0  two-sided  0.840  -0.170
Mixed design (within and between) with Bonferroni-corrected p-values
>>> posthocs = pg.pairwise_ttests(dv='Scores', within='Time',
...                               subject='Subject', between='Group',
...                               padjust='bonf', data=df)
>>> posthocs.round(3)
       Contrast     Time        A           B  Paired  Parametric      T   dof       Tail  p-unc  p-corr p-adjust   BF10  hedges
0          Time        -   August     January    True        True -1.740  59.0  two-sided  0.087   0.261     bonf  0.582  -0.328
1          Time        -   August        June    True        True -2.743  59.0  two-sided  0.008   0.024     bonf  4.232  -0.483
2          Time        -  January        June    True        True -1.024  59.0  two-sided  0.310   0.931     bonf  0.232  -0.170
3         Group        -  Control  Meditation   False        True -2.248  58.0  two-sided  0.028     NaN      NaN  2.096  -0.573
4  Time * Group   August  Control  Meditation   False        True  0.316  58.0  two-sided  0.753   1.000     bonf  0.274   0.081
5  Time * Group  January  Control  Meditation   False        True -1.434  58.0  two-sided  0.157   0.471     bonf  0.619  -0.365
6  Time * Group     June  Control  Meditation   False        True -2.744  58.0  two-sided  0.008   0.024     bonf  5.593  -0.699
Two between-subject factors. The order of the list matters!
>>> pg.pairwise_ttests(dv='Scores', between=['Group', 'Time'],
...                    data=df).round(3)
       Contrast       Group        A           B  Paired  Parametric      T    dof       Tail  p-unc     BF10  hedges
0         Group           -  Control  Meditation   False        True -2.290  178.0  two-sided  0.023    1.813  -0.340
1          Time           -   August     January   False        True -1.806  118.0  two-sided  0.074    0.839  -0.328
2          Time           -   August        June   False        True -2.660  118.0  two-sided  0.009    4.499  -0.483
3          Time           -  January        June   False        True -0.934  118.0  two-sided  0.352    0.288  -0.170
4  Group * Time     Control   August     January   False        True -0.383   58.0  two-sided  0.703    0.279  -0.098
5  Group * Time     Control   August        June   False        True -0.292   58.0  two-sided  0.771    0.272  -0.074
6  Group * Time     Control  January        June   False        True  0.045   58.0  two-sided  0.964    0.263   0.011
7  Group * Time  Meditation   August     January   False        True -2.188   58.0  two-sided  0.033    1.884  -0.558
8  Group * Time  Meditation   August        June   False        True -4.040   58.0  two-sided  0.000  148.302  -1.030
9  Group * Time  Meditation  January        June   False        True -1.442   58.0  two-sided  0.155    0.625  -0.367
Same but without the interaction
>>> df.pairwise_ttests(dv='Scores', between=['Group', 'Time'],
...                    interaction=False).round(3)
  Contrast        A           B  Paired  Parametric      T    dof       Tail  p-unc   BF10  hedges
0    Group  Control  Meditation   False        True -2.290  178.0  two-sided  0.023  1.813  -0.340
1     Time   August     January   False        True -1.806  118.0  two-sided  0.074  0.839  -0.328
2     Time   August        June   False        True -2.660  118.0  two-sided  0.009  4.499  -0.483
3     Time  January        June   False        True -0.934  118.0  two-sided  0.352  0.288  -0.170
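Appending descriptive statistics to the output. This is only a sketch, so the exact output is not reproduced here; per the return_desc option above, the group means and standard deviations are appended as extra columns:

>>> pg.pairwise_ttests(dv='Scores', within='Time', subject='Subject',
...                    data=df, return_desc=True).round(3)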