I am trying to find out whether there is a significant difference in the severity distribution between individuals with and without the disease. This is what my table looks like.
I ran a Fisher's exact test and got a significant p-value, and then I ran a pairwise Fisher test and a fisher.multcomp test, but I am confused about interpreting the outcome.
I am confused about which comparisons the pairwise/multcomp tests are actually running. For instance, does the first row in the pairwise Fisher test mean that there is a significant difference in the numbers of mild and moderate cases between those who have the disease and those who do not?
While running a chi-square test, you may have come across situations where an expected frequency is less than 5. If I'm not wrong, Fisher's exact test can be used in such situations.
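For instance (your actual table1 is not shown here, so the counts below are made up purely for illustration), a quick sketch of how the chi-square warning arises and how fisher.test steps in:

# Hypothetical 3 x 2 severity-by-disease table, chosen so that one expected count falls below 5
tab <- matrix(c(10, 4,
                3, 6,
                1, 12),
              nrow = 3, byrow = TRUE,
              dimnames = list(Severity = c("Mild", "Moderate", "Severe"),
                              Disease  = c("Yes", "No")))
chisq.test(tab)$expected   # one expected cell count is below 5
chisq.test(tab)            # warns: "Chi-squared approximation may be incorrect"
fisher.test(tab)           # exact test, no large-sample approximation needed

Note that fisher.test handles tables larger than 2 x 2 as well; for an r x c table it reports only a p-value, without an odds ratio.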
The pairwise test shows what the significance would be when only two levels of the variable are considered at a time, ignoring the observations from the remaining level. It gives a p-value for every possible pair of levels.
Here in this example:
# Fisher's test ignoring the third row, Severe (including mild and moderate only)
> fisher.test(table1[-3,])
data: table1[-3, ]
p-value = 0.01356
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.08293417 0.82733771
sample estimates:
odds ratio
0.2709515
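# Fisher's test ignoring the second row, Moderate (including mild and severe only)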
> fisher.test(table1[-2,])
data: table1[-2, ]
p-value = 3.881e-06
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.0005229306 0.1980644402
sample estimates:
odds ratio
0.02454
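# Fisher's test ignoring the first row, Mild (including moderate and severe only)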
> fisher.test(table1[-1,])
data: table1[-1, ]
p-value = 0.008936
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
0.001895554 0.703424501
sample estimates:
odds ratio
0.08829437
You can observe that these p-values are the same as the ones you obtained.
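If you want to reproduce all of the pairwise comparisons in one go, a base-R sketch along these lines should work (assuming table1 is the 3 x 2 severity-by-disease table and carries the severity labels as row names); p.adjust can then be applied if you also want a multiple-comparison correction:

# Fisher's test on every pair of severity levels
pairs <- combn(nrow(table1), 2)   # all pairs of row indices: (1,2), (1,3), (2,3)
raw_p <- apply(pairs, 2, function(idx) fisher.test(table1[idx, ])$p.value)
names(raw_p) <- apply(pairs, 2, function(idx) paste(rownames(table1)[idx], collapse = " vs "))
raw_p                     # unadjusted pairwise p-values
p.adjust(raw_p, "fdr")    # optional correction for multiple comparisons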