The basic idea
Whenever a statistical test concludes that a relationship is significant when, in reality, there is no relationship, a false discovery has been made. When multiple tests are conducted, this leads to the multiple testing problem (also known as the multiple comparisons problem or the post hoc testing problem, and associated with data dredging and, sometimes, data mining): the more tests that are conducted, the more false discoveries are made. Multiple comparison corrections attempt to fix this problem; the basic way they work is by requiring results to have smaller p-values in order to be classified as significant. The simulation sketched below illustrates the problem.
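The following minimal sketch (in Python, using NumPy and SciPy; it is not Q or Displayr code, and the sample sizes and significance level are arbitrary choices) shows false discoveries accumulating. Every test compares two groups drawn from the same distribution, so any result flagged as significant is a false discovery:

```python
# Minimal sketch: false discoveries accumulate as the number of tests grows.
# Both groups in every test are drawn from the same distribution, so there
# is no real relationship and every "significant" result is false.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05  # conventional significance level (an arbitrary choice here)

for n_tests in (1, 20, 200):
    false_discoveries = 0
    for _ in range(n_tests):
        a = rng.normal(size=50)  # group 1: standard normal
        b = rng.normal(size=50)  # group 2: same distribution, no true effect
        _, p = stats.ttest_ind(a, b)
        if p < alpha:
            false_discoveries += 1
    print(f"{n_tests:>4} tests -> {false_discoveries} false discoveries")
```

With a 0.05 significance level, roughly 5% of such tests come out significant by chance alone, so the number of false discoveries grows in proportion to the number of tests conducted.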
Refer to the Multiple Comparisons (Post Hoc Testing) page for more information about the theory and practice of correcting for multiple comparisons.
How multiple comparison corrections are performed within Q and Displayr
Multiple comparison corrections are, by default, applied in the following situations:
- When determining which cells to show as significant on tables (see How to Do Planned Tests Of Statistical Significance in Q and Displayr).
- When showing statistical significance on charts (see How to Do Planned Tests Of Statistical Significance in Q and Displayr).
- When displaying Column Comparisons on tables.
- When determining which tables are significant in Smart Tables (available in Q).
By default, Q and Displayr use the False Discovery Rate correction; this and other corrections can be selected in Statistical Assumptions (see Specifying multiple comparison corrections below).
Q and Displayr do not correct for multiple comparisons when:
- Computing the statistical significance of parameters estimated using Latent Class Analysis (in Q and Displayr).
- Conducting Planned Tests Of Statistical Significance, except when conducting column comparisons as part of Planned ANOVA-Type Tests in Q and Displayr.
Multiple comparison corrections available within Q and Displayr
The False Discovery Rate correction is applied by first computing the p-values for all the cells in the table (these can all be viewed by selecting p from Statistics - Cells). All the cells that are NETs or copies of other cells are then discarded, the remaining p-values are sorted, and the cutoff is computed. This cutoff is then used to determine which cells are marked as significant and which are not; the cutoff is also applied to any NETs and copies of other cells. A corrected p-value, computed by multiplying the actual p-value by the correction factor, is available by selecting Corrected p from Statistics - Cells. A sketch of this calculation is shown below.
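As an illustration, the following sketch implements the textbook Benjamini-Hochberg version of this calculation in Python. It is an assumption that Q and Displayr follow exactly this form (for example, the handling of ties and the exclusion of NETs and copied cells may differ), and the p-values are invented for illustration:

```python
# Sketch of the Benjamini-Hochberg False Discovery Rate calculation:
# sort the p-values, find the cutoff, and compute corrected p-values.
import numpy as np

def fdr_cutoff_and_corrected_p(p_values, q=0.05):
    """Return the BH significance cutoff and the corrected p-values."""
    p = np.asarray(p_values, dtype=float)
    m = len(p)
    order = np.argsort(p)               # ascending order of p-values
    ranks = np.empty(m, dtype=int)
    ranks[order] = np.arange(1, m + 1)  # rank of each p-value (1 = smallest)
    # The cutoff is the largest p-value satisfying p <= (rank / m) * q.
    below = p <= ranks / m * q
    cutoff = p[below].max() if below.any() else 0.0
    # Corrected p = p * (m / rank), then made monotone so that a smaller
    # raw p-value never has a larger corrected p-value.
    corrected = p * m / ranks
    monotone = np.minimum.accumulate(corrected[order][::-1])[::-1]
    corrected[order] = np.minimum(monotone, 1.0)
    return cutoff, corrected

# Invented p-values for six hypothetical table cells.
p_vals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
cutoff, corrected = fdr_cutoff_and_corrected_p(p_vals, q=0.05)
print(cutoff)     # cells with p-values at or below this cutoff are significant
print(corrected)  # equivalently, compare each corrected p-value against q
```

Comparing each corrected p-value against q gives the same set of significant cells as comparing the raw p-values against the cutoff, which is why the Corrected p statistic can be evaluated directly against the Overall significance level.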
Multiple comparison corrections
| Multiple comparisons correction | Description |
| --- | --- |
| None | A significance test is conducted for each cell using the selected value of the Overall significance level, with no correction. |
| Fisher LSD | Uses the Multiple Comparisons t-Test (Fisher LSD), which makes no correction for multiple comparisons. Traditionally, an F-Test (ANOVA) is conducted first and the t-tests are only conducted if this test is significant (this is done by selecting ANOVA-Type Test in Statistical Assumptions). Note that this test has stringent requirements about the relationships between the columns; see How to Specify Comparisons for ANOVA-Based Tests. |
| Duncan | Duncan's New Multiple Range Test. The familywise error rate is determined using the Statistical Assumptions setting of Overall significance level. |
| Tukey HSD | Tukey's Honestly Significant Difference (HSD) test. The familywise error rate is determined using the Statistical Assumptions setting of Overall significance level. |
| Newman-Keuls (S-N-K) | The Newman-Keuls (Student-Newman-Keuls) test. The familywise error rate is determined using the Statistical Assumptions setting of Overall significance level. |
| False Discovery Rate (FDR) | Significance tests are conducted in accordance with the specifications for Statistical tests for categorical and numeric data in Statistical Assumptions, and with whether Within Row and Span is selected. The False Discovery Rate Correction is used to compute a Corrected p, which is evaluated against the specified Overall significance level, used as the false discovery rate (i.e., q). |
| False Discovery Rate (pooled t-test) | The Multiple Comparisons t-Test with False Discovery Rate Correction is used to compute a Corrected p, which is evaluated against the specified Overall significance level, used as the false discovery rate (i.e., q). |
| Bonferroni | Significance tests are conducted in accordance with the specifications for Statistical tests for categorical and numeric data in Statistical Assumptions. The Bonferroni Correction is used to compute a Corrected p, which is evaluated against the specified Overall significance level. |
| Bonferroni (pooled t-test) | The Multiple Comparisons t-Test with Bonferroni Correction is used to compute a Corrected p, which is evaluated against the specified Overall significance level. |
| Dunnett | Dunnett's Pairwise Multiple Comparison test. The familywise error rate is determined using the Statistical Assumptions setting of Overall significance level. |
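To reproduce corrected p-values like those described in the table outside of Q or Displayr, standard library implementations can be used. The sketch below uses statsmodels (not Q or Displayr's own code) to contrast the Bonferroni correction, which multiplies every p-value by the number of tests, with the milder rank-dependent False Discovery Rate (Benjamini-Hochberg) adjustment; the p-values are invented for illustration:

```python
# Contrast two of the corrections above using statsmodels' implementations.
from statsmodels.stats.multitest import multipletests

p_vals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]  # invented example p-values

for method, label in [("bonferroni", "Bonferroni"), ("fdr_bh", "FDR (BH)")]:
    reject, corrected, _, _ = multipletests(p_vals, alpha=0.05, method=method)
    print(label, [round(p, 3) for p in corrected], reject.tolist())
```

On the same data, Bonferroni typically rejects fewer hypotheses than the False Discovery Rate correction, which is one reason FDR-based corrections are often preferred when many cells are tested at once.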
Specifying multiple comparison corrections
Multiple comparison corrections are selected in Statistical Assumptions, separately for cell comparisons and column comparisons, as described below.
Cell comparisons
The correction used when testing the cells on tables is specified using Multiple comparison correction in Cell comparisons in Statistical Assumptions.
The available options are None and False Discovery Rate (FDR).
Column comparisons
The correction used with Column Comparisons is specified using Multiple comparison correction in Column comparisons in Statistical Assumptions.
ANOVA-Type Tests
The corrections used in ANOVA-Type Tests are determined by the Multiple comparison correction specified for Column comparisons in Statistical Assumptions.
Smart Tables
Smart Tables (available in Q) use the Multiple comparison correction specified for Cell comparisons in Statistical Assumptions.
See also
How to Show Statistical Significance in Q