I ran a number of Pearson's r correlations and then wanted to correct for multiple comparisons using statsmodels. Specifically, I used the function multipletests from statsmodels.stats.multitest. Given a list of p-values, it returns whether each hypothesis was rejected, based on the provided alpha value, as well as the adjusted p-values, computed according to the specified method. In my case, I used the method "fdr_bh", which is the Benjamini & Hochberg (1995) step-up procedure.
However, when inspecting the adjusted p-values, I noticed that there are "groups" of p-values that all have the same value, and I don't understand whether this is normal behaviour. Is this related to point 2 of the second answer to this question?
This is an example of my adjusted p-values:

This is the code I use, given the list of p-values:
from statsmodels.stats import multitest

fdr_p = multitest.multipletests(uncorrected_p, alpha=0.05, method='fdr_bh', is_sorted=False)
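For reproducibility, here is a self-contained version of the call. The p-values below are made up for illustration (my real uncorrected_p comes from the Pearson correlations):

```python
from statsmodels.stats import multitest

# Made-up uncorrected p-values, already in ascending order
uncorrected_p = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]

# multipletests returns a 4-tuple: reject flags, corrected p-values,
# and two corrected alpha values (Sidak / Bonferroni, unused for fdr_bh)
reject, fdr_p, _, _ = multitest.multipletests(
    uncorrected_p, alpha=0.05, method='fdr_bh', is_sorted=False
)

print(reject)
print(fdr_p)
```

Even with these eight distinct raw p-values, some of the adjusted values come out identical, which is exactly the behaviour I am asking about.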
Any help is appreciated.
Thanks
Is this related to point 2 of the second answer to this question?
Most likely yes.
All stepwise p-value correction methods impose a monotonicity restriction: the corrected p-values are computed so that they are weakly increasing in the original, uncorrected p-values. For fdr_bh this is enforced by taking a running minimum from the largest p-value downwards, so whenever a later value caps the ones before it, several distinct raw p-values end up with the same adjusted value.
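To see where the ties come from, here is a minimal numpy sketch of the Benjamini-Hochberg adjustment (the function name fdr_bh and the example p-values are mine, for illustration; statsmodels' implementation differs in detail but applies the same monotonicity step):

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values (same idea as method='fdr_bh')."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)
    # Raw adjustment for the i-th smallest p-value: p_(i) * m / i
    ranked = p[order] * m / np.arange(1, m + 1)
    # Monotonicity step: running minimum from the right, so adjusted
    # p-values are weakly increasing in the raw p-values
    ranked = np.minimum.accumulate(ranked[::-1])[::-1]
    adjusted = np.empty(m)
    adjusted[order] = np.clip(ranked, 0, 1)
    return adjusted

# Four distinct raw p-values -> four identical adjusted p-values (all 0.04):
# 0.01*4/1 = 0.02*4/2 = 0.03*4/3 = 0.04*4/4 = 0.04
print(fdr_bh([0.010, 0.020, 0.030, 0.040]))
```

So groups of identical adjusted p-values are expected output, not a bug: whenever p_(i) * m / i is no smaller than a later value in the sorted sequence, the running minimum flattens that stretch into a single tied value.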