Hi! Thanks for creating this tool.
I wanted to ask about the behaviour of the 'control' parameter of etest() and how it affects the subsequent multiple testing correction.

As far as I understand from the source code of that function, when you provide a list of perturbed genes as control groups to test against, it applies the multiple testing correction inside a loop, once per control group, and then aggregates the results, so the correction is never applied to the full set of comparisons.
Wouldn't that lead to under-corrected (anti-conservative) adjusted p-values, since each per-group correction only accounts for a fraction of the tests? Wouldn't it be better to do the adjustment over all aggregated comparisons at the end, as in the sketch below?
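To make the suggestion concrete, here is a minimal sketch of what I mean (my own code, not from the package; it assumes each per-control e-test result is a DataFrame with a raw 'pvalue' column, though the actual column name may differ):

```python
# Hypothetical sketch: pool the raw p-values from every
# control-vs-perturbation comparison, then apply a single
# Benjamini-Hochberg correction across all of them, instead of
# correcting within each control group separately.
import pandas as pd
from statsmodels.stats.multitest import multipletests

def pooled_fdr(per_control_results: list[pd.DataFrame],
               pval_col: str = "pvalue",
               alpha: float = 0.10) -> pd.DataFrame:
    """Concatenate per-control results and adjust once, globally."""
    pooled = pd.concat(per_control_results, ignore_index=True)
    reject, padj, _, _ = multipletests(pooled[pval_col],
                                       alpha=alpha, method="fdr_bh")
    pooled["pvalue_adj_global"] = padj
    pooled["significant_global"] = reject
    return pooled
```

Because the global BH correction sees the full number of tests, it is at least as stringent as any of the per-group corrections.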
I'm testing some ideas with the Replogle dataset, and after a lengthy etest() run with 10k permutations, the vast majority of my results passed my significance criterion (FDR < 10%). However, I noticed that the unadjusted p-values all look significant and remain so after correction, which is not the usual behaviour for some correction methods.
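Part of this may simply be the resolution limit of permutation p-values: with 10k permutations the smallest attainable p-value is bounded below (roughly 1/(n_perm + 1) under the usual pseudo-count convention), so thousands of comparisons can tie at that floor and BH leaves them all significant. A toy illustration (mine, not from etest's source):

```python
# With a finite number of permutations, p-values have a floor; if many
# tests tie at that floor, they all survive BH correction together.
import numpy as np
from statsmodels.stats.multitest import multipletests

n_perm = 10_000
p_floor = 1 / (n_perm + 1)           # smallest achievable permutation p-value
pvals = np.full(2_000, p_floor)      # 2,000 comparisons, all at the floor
reject, padj, _, _ = multipletests(pvals, alpha=0.10, method="fdr_bh")
print(p_floor, padj.max(), reject.all())  # ~1e-4, still << 0.10, True
```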
Again, thanks for the help, and sorry for the lengthy message.