The MULTTEST Procedure

p-Value Adjustments

Suppose you test $m$ null hypotheses, $H_{01}, \ldots, H_{0m}$, and obtain the p-values $p_1, \ldots, p_m$. Denote the ordered p-values as $p_{(1)} \le \cdots \le p_{(m)}$ and order the tests appropriately: $H_{0(1)}, \ldots, H_{0(m)}$. Suppose you know $m_0$ of the null hypotheses are true and $m_1 = m - m_0$ are false. Let $R$ indicate the number of null hypotheses rejected by the tests, where $V$ of these are incorrectly rejected (that is, $V$ tests are Type I errors) and $R - V$ are correctly rejected (so $m_1 - R + V$ tests are Type II errors). This information is summarized in the following table:

                    Null Is Rejected    Null Is Not Rejected    Total
  Null is true      V                   m_0 - V                 m_0
  Null is false     R - V               m_1 - R + V             m_1
  Total             R                   m - R                   m

The familywise error rate (FWE) is the overall Type I error rate for all the comparisons (possibly under some restrictions); that is, it is the maximum probability of incorrectly rejecting one or more null hypotheses:

\[
\mathrm{FWE} = \Pr(V > 0)
\]

The FWE is also known as the maximum experimentwise error rate (MEER), as discussed in the section Pairwise Comparisons in Chapter 53, The GLM Procedure.

The false discovery rate (FDR) is the expected proportion of incorrectly rejected hypotheses among all rejected hypotheses:

\[
\begin{aligned}
\mathrm{FDR} &= E\!\left(\frac{V}{R}\right), \quad \text{where } \frac{V}{R} = 0 \text{ when } V = R = 0 \\
             &= E\!\left(\frac{V}{R} \,\middle|\, R > 0\right) \Pr(R > 0)
\end{aligned}
\]

Under the overall null hypothesis (all the null hypotheses are true), FDR = FWE, since $V = R$ gives $E(V/R) = 1 \cdot \Pr(V/R = 1) = \Pr(V > 0)$. Otherwise, the FDR is smaller than the FWE, and an FWE-controlling adjustment also controls the FDR. Another definition used is the positive false discovery rate:

\[
\mathrm{pFDR} = E\!\left(\frac{V}{R} \,\middle|\, R > 0\right)
\]

The p-value adjustment methods discussed in the following sections attempt to correct the raw p-values while controlling either the FWE or the FDR. Note that the methods might impose some restrictions in order to achieve this; restrictions are discussed along with the methods in the following sections. Discussions and comparisons of some of these methods are given in Dmitrienko et al. (2005), Dudoit, Shaffer, and Boldrick (2003), Westfall et al. (1999), and Brown and Russell (1997).

Familywise Error Rate Controlling Adjustments

PROC MULTTEST provides several p-value adjustments to control the familywise error rate. Single-step adjustment methods are computed without reference to the other hypothesis tests under consideration. The available single-step methods are the Bonferroni and Šidák adjustments, which are simple functions of the raw p-values that try to distribute the significance level alpha across all the tests, and the bootstrap and permutation resampling adjustments, which require the raw data. The Bonferroni and Šidák methods are calculated from the permutation distributions when exact permutation tests are used with the CA or Peto test.

Stepwise tests, or sequentially rejective tests, order the hypotheses in step-up (least significant to most significant) or step-down fashion, then sequentially determine acceptance or rejection of the nulls. These tests are more powerful than the single-step tests, and they do not always require you to perform every test. However, PROC MULTTEST still adjusts every p-value. PROC MULTTEST provides the following stepwise p-value adjustments: step-down Bonferroni (Holm), step-down Šidák, step-down bootstrap and permutation resampling, Hochberg’s (1988) step-up, Hommel’s (1988), Fisher’s combination method, and the Stouffer-Liptak combination method. Adaptive versions of Holm’s step-down Bonferroni and Hochberg’s step-up Bonferroni methods, which require an estimate of the number of true null hypotheses, are also available.

Liu (1996) shows that all single-step and stepwise tests based on marginal p-values can be used to construct a closed test (Marcus, Peritz, and Gabriel 1976; Dmitrienko et al. 2005). Closed testing methods not only control the familywise error rate at size alpha, but are also more powerful than the tests on which they are based. Westfall and Wolfinger (2000) note that several of the methods available in PROC MULTTEST are closed—namely, the step-down methods, Hommel’s method, and Fisher’s combination; see that reference for conditions and exceptions.

All methods except the resampling methods are calculated by simple functions of the raw p-values or marginal permutation distributions; the permutation and bootstrap adjustments require the raw data. Because the resampling techniques incorporate distributional and correlational structures, they tend to be less conservative than the other methods.

When a resampling (bootstrap or permutation) method is used with only one test, the adjusted p-value is the bootstrap or permutation p-value for that test, with no adjustment for multiplicity, as described by Westfall and Soper (1994).

Bonferroni

The Bonferroni p-value for test $i$, $i = 1, \ldots, m$, is simply $\tilde{p}_i = m p_i$. If the adjusted p-value exceeds 1, it is set to 1. The Bonferroni test is conservative but always controls the familywise error rate.

If the unadjusted p-values are computed by using exact permutation distributions, then the Bonferroni adjustment for $p_i$ is $\tilde{p}_i = p_1^{*} + \cdots + p_m^{*}$, where $p_j^{*}$ is the largest p-value from the permutation distribution of test $j$ satisfying $p_j^{*} \le p_i$, or 0 if all permutational p-values of test $j$ are greater than $p_i$. These adjustments are much less conservative than the ordinary Bonferroni adjustments because they incorporate the discrete distributional characteristics. However, they remain conservative in that they do not incorporate correlation structures between multiple contrasts and multiple variables (Westfall and Wolfinger 1997).
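
For illustration only (this is not PROC MULTTEST syntax), the following Python sketch computes both of these Bonferroni adjustments directly from the formulas above. The input names raw_p and perm_dists are hypothetical, with perm_dists[j] holding the attainable p-values of the exact permutation distribution of test j.

# Illustrative sketch of the Bonferroni adjustments described above (not SAS code).
def bonferroni(raw_p):
    """Ordinary Bonferroni: p~_i = min(1, m * p_i)."""
    m = len(raw_p)
    return [min(1.0, m * p) for p in raw_p]

def discrete_bonferroni(raw_p, perm_dists):
    """Exact-permutation Bonferroni: p~_i = sum_j p*_j, where p*_j is the largest
    attainable p-value of test j that is <= p_i (or 0 if there is none)."""
    adjusted = []
    for p_i in raw_p:
        total = 0.0
        for dist in perm_dists:
            attainable = [q for q in dist if q <= p_i]
            total += max(attainable) if attainable else 0.0
        adjusted.append(min(1.0, total))
    return adjusted

# Example with hypothetical p-values:
# bonferroni([0.01, 0.04, 0.30])  ->  [0.03, 0.12, 0.90]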

Šidák

A technique slightly less conservative than Bonferroni is the Šidák p-value (Šidák 1967), which is $\tilde{p}_i = 1 - (1 - p_i)^{m}$. It is exact when all of the p-values are uniformly distributed and independent, and it is conservative when the test statistics satisfy the positive orthant dependence condition (Holland and Copenhaver 1987).

If the unadjusted p-values are computed by using exact permutation distributions, then the Šidák adjustment for $p_i$ is $\tilde{p}_i = 1 - (1 - p_1^{*}) \cdots (1 - p_m^{*})$, where the $p_j^{*}$ are as described previously. These adjustments are less conservative than the corresponding Bonferroni adjustments, but they do not incorporate correlation structures between multiple contrasts and multiple variables (Westfall and Wolfinger 1997).
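
A corresponding illustrative Python sketch for the two Šidák adjustments, under the same hypothetical inputs as in the Bonferroni sketch above:

def sidak(raw_p):
    """Single-step Sidak: p~_i = 1 - (1 - p_i)^m."""
    m = len(raw_p)
    return [1.0 - (1.0 - p) ** m for p in raw_p]

def discrete_sidak(raw_p, perm_dists):
    """Exact-permutation Sidak: p~_i = 1 - prod_j (1 - p*_j), with p*_j as above."""
    adjusted = []
    for p_i in raw_p:
        prod = 1.0
        for dist in perm_dists:
            attainable = [q for q in dist if q <= p_i]
            p_star = max(attainable) if attainable else 0.0
            prod *= (1.0 - p_star)
        adjusted.append(1.0 - prod)
    return adjusted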

Bootstrap

The bootstrap method creates pseudo-data sets by sampling observations with replacement from each within-stratum pool of observations. An entire data set is thus created, and p-values for all tests are computed on this pseudo-data set. A counter records whether the minimum p-value from the pseudo-data set is less than or equal to the actual p-value for each base test. (If there are m tests, then there are m such counters.) This process is repeated a large number of times, and the proportion of resampled data sets where the minimum pseudo-p-value is less than or equal to an actual p-value is the adjusted p-value reported by PROC MULTTEST. The algorithms are described in Westfall and Young (1993).
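
The following Python sketch illustrates this single-step min-p computation under simplifying assumptions. The callables resample (which returns one pseudo-data set) and pvalues_from (which computes the m p-values for a data set) are hypothetical placeholders; the actual PROC MULTTEST algorithm (Westfall and Young 1993) handles stratification, centering, and the supported tests internally. Sampling without replacement in resample instead of with replacement gives the permutation version described later.

def single_step_minp(data, raw_p, pvalues_from, resample, n_resamples=1000):
    """Single-step min-p resampling adjustment (illustrative sketch only).
    pvalues_from(dataset) -> list of m p-values; resample(data) -> one pseudo-data set.
    Both are hypothetical user-supplied callables, not part of PROC MULTTEST."""
    m = len(raw_p)
    counters = [0] * m                    # one counter per base test
    for _ in range(n_resamples):
        pseudo_p = pvalues_from(resample(data))
        min_pseudo = min(pseudo_p)        # smallest p-value in this pseudo-data set
        for i in range(m):
            if min_pseudo <= raw_p[i]:
                counters[i] += 1
    # adjusted p-value = proportion of resamples whose minimum pseudo-p-value
    # is <= the observed p-value for each test
    return [c / n_resamples for c in counters]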

In the case of continuous data, the pooling of the groups is not likely to re-create the shape of the null hypothesis distribution, since the pooled data are likely to be multimodal. For this reason, PROC MULTTEST automatically mean-centers all continuous variables prior to resampling. Such mean-centering is akin to resampling residuals in a regression analysis, as discussed by Freedman (1981). You can specify the NOCENTER option if you do not want to center the data.

The bootstrap method implicitly incorporates all sources of correlation, from both the multiple contrasts and the multivariate structure. The adjusted p-values incorporate all correlations and distributional characteristics. This method always provides weak control of the familywise error rate, and it provides strong control when the subset pivotality condition holds; that is, for any subset of the null hypotheses, the joint distribution of the p-values for the subset is identical to that under the complete null (Westfall and Young 1993).

Permutation

The permutation-style adjusted p-values are computed in the same fashion as the bootstrap-adjusted p-values, except that the within-stratum resampling is performed without replacement instead of with replacement. This produces a rerandomization analysis such as in Brown and Fears (1981) and Heyse and Rom (1988). In the spirit of rerandomization analyses, the continuous variables are not centered prior to resampling. This default can be overridden by using the CENTER option.

The permutation method implicitly incorporates all sources of correlation, from both the multiple contrasts and the multivariate structure. The adjusted p-values incorporate all correlations and distributional characteristics. This method always provides weak control of the familywise error rate, and it provides strong control of the familywise error rate under the subset pivotality condition, as described in the preceding section.

Step-Down Methods

Step-down testing is available for the Bonferroni, Šidák, bootstrap, and permutation methods. The benefit of using step-down methods is that the tests are made more powerful (smaller adjusted p-values) while, in most cases, maintaining strong control of the familywise error rate. The step-down method was pioneered by Holm (1979) and further developed by Shaffer (1986), Holland and Copenhaver (1987), and Hochberg and Tamhane (1987).

The Bonferroni step-down (Holm) p-values $\tilde{p}_{(1)}, \ldots, \tilde{p}_{(m)}$ are obtained from

\[
\tilde{p}_{(i)} =
\begin{cases}
m\, p_{(1)} & \text{for } i = 1 \\
\max\!\left(\tilde{p}_{(i-1)},\, (m - i + 1)\, p_{(i)}\right) & \text{for } i = 2, \ldots, m
\end{cases}
\]

As always, if any adjusted p-value exceeds 1, it is set to 1.

The Šidák step-down p-values are determined similarly:

\[
\tilde{p}_{(i)} =
\begin{cases}
1 - (1 - p_{(1)})^{m} & \text{for } i = 1 \\
\max\!\left(\tilde{p}_{(i-1)},\, 1 - (1 - p_{(i)})^{m - i + 1}\right) & \text{for } i = 2, \ldots, m
\end{cases}
\]
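
An illustrative Python sketch of these two step-down adjustments (hypothetical input raw_p), returning the adjusted values in the original test order:

def step_down(raw_p, method="bonferroni"):
    """Step-down Bonferroni (Holm) or step-down Sidak adjustments (illustrative sketch)."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])   # indices of the ordered p-values
    adjusted = [0.0] * m
    previous = 0.0
    for rank, idx in enumerate(order):                 # rank = i - 1 in the formulas
        k = m - rank                                   # m - i + 1
        p = raw_p[idx]
        if method == "bonferroni":
            candidate = k * p
        else:                                          # "sidak"
            candidate = 1.0 - (1.0 - p) ** k
        previous = max(previous, min(candidate, 1.0))  # enforce monotonicity, cap at 1
        adjusted[idx] = previous
    return adjusted

# Example with hypothetical p-values:
# step_down([0.01, 0.04, 0.03], "bonferroni")  ->  [0.03, 0.06, 0.06]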

Step-down Bonferroni adjustments that use exact tests are defined as

\[
\tilde{p}_{(i)} =
\begin{cases}
p_{(1)}^{*} + \cdots + p_{(m)}^{*} & \text{for } i = 1 \\
\max\!\left(\tilde{p}_{(i-1)},\, p_{(i)}^{*} + \cdots + p_{(m)}^{*}\right) & \text{for } i = 2, \ldots, m
\end{cases}
\]

where the $p_j^{*}$ are defined as before. Note that $p_{(j)}^{*}$ is taken from the permutation distribution corresponding to the $j$th-smallest unadjusted p-value. Also, any $\tilde{p}_{(j)}$ greater than 1.0 is reduced to 1.0.

Step-down Šidák adjustments for exact tests are defined analogously by substituting $1 - (1 - p_{(j)}^{*}) \cdots (1 - p_{(m)}^{*})$ for $p_{(j)}^{*} + \cdots + p_{(m)}^{*}$.

The resampling-style step-down methods are analogous to the preceding step-down methods; the most extreme p-value is adjusted according to all $m$ tests, the second-most extreme p-value is adjusted according to $m - 1$ tests, and so on. The difference is that all correlational and distributional characteristics are incorporated when you use resampling methods. More specifically, assuming the same ordering of p-values as discussed previously, the resampling-style step-down-adjusted p-value for test $i$ is the probability that the minimum pseudo-p-value of tests $i, \ldots, m$ is less than or equal to $p_i$.

This probability is evaluated by using Monte Carlo simulation, as are the previously described resampling-style-adjusted p-values. In fact, the computations for step-down-adjusted p-values are essentially no more time-consuming than the computations for the non-step-down-adjusted p-values. After the Monte Carlo simulation, the step-down-adjusted p-values are corrected to ensure monotonicity; this correction leaves the first adjusted p-value alone, then corrects the remaining ones as needed. The step-down method approximately controls the familywise error rate, and it is described in more detail by Westfall and Young (1993), Westfall et al. (1999), and Westfall and Wolfinger (2000).
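
As a rough illustration, the following Python sketch carries out the resampling step-down computation, including the monotonicity correction, from a precomputed B-by-m matrix of pseudo-p-values. The matrix is a hypothetical input here; PROC MULTTEST generates the resamples itself.

def step_down_minp(raw_p, pseudo_p):
    """Resampling step-down min-p adjustment (illustrative sketch).
    raw_p    : observed p-values
    pseudo_p : list of B resample results, each a list of m pseudo-p-values
               (hypothetical precomputed input)."""
    m = len(raw_p)
    B = len(pseudo_p)
    order = sorted(range(m), key=lambda i: raw_p[i])        # most to least significant
    adjusted = [0.0] * m
    previous = 0.0
    for rank, idx in enumerate(order):
        tests = order[rank:]                                # tests i, ..., m in the ordering
        count = sum(
            1 for row in pseudo_p
            if min(row[j] for j in tests) <= raw_p[idx]
        )
        previous = max(previous, count / B)                 # monotonicity correction
        adjusted[idx] = previous
    return adjusted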

Hommel

Hommel’s (1988) method is a closed testing procedure based on Simes’ test (Simes 1986). The Simes p-value for a joint test of any set of $S$ hypotheses with p-values $p_{(1)} \le p_{(2)} \le \cdots \le p_{(S)}$ is $\min\!\left((S/1)\, p_{(1)},\, (S/2)\, p_{(2)},\, \ldots,\, (S/S)\, p_{(S)}\right)$. The Hommel-adjusted p-value for test $j$ is the maximum of all such Simes p-values, taken over all joint tests that include $j$ as one of their components.

Hochberg-adjusted p-values are always as large or larger than Hommel-adjusted p-values. Sarkar and Chang (1997) show that Simes’ method is valid under independent or positively dependent p-values, so Hommel’s and Hochberg’s methods are also valid in such cases by the closure principle.
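
An illustrative brute-force Python sketch of this closed testing computation follows; it enumerates every subset containing each test, so it is practical only for small m, whereas Hommel's actual algorithm is far more efficient. The input name raw_p is hypothetical.

from itertools import combinations

def simes_p(pvals):
    """Simes p-value for a joint test of the hypotheses with the given p-values."""
    s = sorted(pvals)
    n = len(s)
    return min((n / (k + 1)) * p for k, p in enumerate(s))

def hommel(raw_p):
    """Hommel adjustment by brute-force closed testing: for each test j, the maximum
    Simes p-value over all subsets that contain j (illustrative sketch; exponential in m)."""
    m = len(raw_p)
    adjusted = [0.0] * m
    for j in range(m):
        others = [i for i in range(m) if i != j]
        worst = 0.0
        for size in range(m):                      # choose 0..m-1 of the other tests
            for combo in combinations(others, size):
                subset = [raw_p[j]] + [raw_p[i] for i in combo]
                worst = max(worst, simes_p(subset))
        adjusted[j] = min(1.0, worst)
    return adjusted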

Hochberg

Assuming p-values are independent and uniformly distributed under their respective null hypotheses, Hochberg (1988) demonstrates that Holm’s step-down adjustments control the familywise error rate even when calculated in step-up fashion. Since the adjusted p-values are uniformly smaller for Hochberg’s method than for Holm’s method, the Hochberg method is more powerful. However, this improved power comes at the cost of having to make the assumption of independence. Hochberg’s method can be derived from Hommel’s (Liu 1996), and is thus also derived from Simes’ test (Simes 1986).

The Hochberg-adjusted p-values are defined in reverse order of the step-down Bonferroni:

\[
\tilde{p}_{(i)} =
\begin{cases}
p_{(m)} & \text{for } i = m \\
\min\!\left(\tilde{p}_{(i+1)},\, (m - i + 1)\, p_{(i)}\right) & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]
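
An illustrative Python sketch of the Hochberg step-up computation (hypothetical input raw_p):

def hochberg(raw_p):
    """Hochberg step-up adjustment (illustrative sketch)."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])      # ascending p-values
    adjusted = [0.0] * m
    previous = 1.0
    for rank in range(m - 1, -1, -1):                     # i = m, m-1, ..., 1
        idx = order[rank]
        candidate = min(1.0, (m - rank) * raw_p[idx])     # (m - i + 1) * p_(i)
        previous = min(previous, candidate)
        adjusted[idx] = previous
    return adjusted
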
Fisher Combination

The FISHER_C option requests adjusted p-values by using closed tests, based on the idea of Fisher’s combination test. The Fisher combination test for a joint test of any set of $S$ hypotheses with p-values $p_1, \ldots, p_S$ uses the chi-square statistic $\chi^2 = -2 \sum_i \log(p_i)$, with $2S$ degrees of freedom. The FISHER_C adjusted p-value for test $j$ is the maximum of all p-values for the combination tests, taken over all joint tests that include $j$ as one of their components. Independence of p-values is required for the validity of this method.

Stouffer-Liptak Combination

The STOUFFER option requests adjusted p-values by using closed tests, based on the Stouffer-Liptak combination test. The Stouffer combination joint test of any set of $S$ one-sided hypotheses with p-values $p_1, \ldots, p_S$ yields the p-value $1 - \Phi\!\left(\frac{1}{\sqrt{S}} \sum_i \Phi^{-1}(1 - p_i)\right)$. The STOUFFER adjusted p-value for test $j$ is the maximum of all p-values for the combination tests, taken over all joint tests that include $j$ as one of their components.

Independence of the one-sided p-values is required for the validity of this method. Westfall (2005) shows that the Stouffer-Liptak adjustment might have more power than the Fisher combination and Simes’ adjustments when the test results reinforce each other.
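
The following Python sketch illustrates both closed combination adjustments under the independence assumptions stated above. It uses SciPy for the chi-square and normal distributions, enumerates subsets by brute force (so it suits only small m), and all input values are hypothetical.

from itertools import combinations
from math import log, sqrt
from scipy.stats import chi2, norm

def fisher_combination_p(pvals):
    """Fisher combination p-value: chi-square statistic -2*sum(log p) on 2S degrees of freedom."""
    stat = -2.0 * sum(log(p) for p in pvals)
    return chi2.sf(stat, df=2 * len(pvals))

def stouffer_combination_p(pvals):
    """Stouffer-Liptak combination p-value for one-sided p-values."""
    z = sum(norm.ppf(1.0 - p) for p in pvals) / sqrt(len(pvals))
    return 1.0 - norm.cdf(z)

def closed_combination(raw_p, combine):
    """Closed-test adjusted p-value for each test: the maximum combination p-value over
    all subsets that contain that test (illustrative sketch; exponential in m)."""
    m = len(raw_p)
    adjusted = [0.0] * m
    for j in range(m):
        others = [i for i in range(m) if i != j]
        worst = 0.0
        for size in range(m):
            for combo in combinations(others, size):
                subset = [raw_p[j]] + [raw_p[i] for i in combo]
                worst = max(worst, combine(subset))
        adjusted[j] = worst
    return adjusted

# Usage with hypothetical p-values:
# closed_combination([0.01, 0.20, 0.03], fisher_combination_p)
# closed_combination([0.01, 0.20, 0.03], stouffer_combination_p)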

Adaptive Adjustments

Adaptive adjustments modify the FWE- and FDR-controlling procedures by taking an estimate of the number $m_0$ or proportion $\pi_0$ of true null hypotheses into account. The adjusted p-values for Holm’s and Hochberg’s methods involve $m - i + 1$, the number of unadjusted p-values at least as large as $p_{(i)}$. So the minimal significance level at which the $i$th ordered p-value is rejected implies that the number of true null hypotheses is $m - i + 1$. However, if you know $m_0$, then you can replace $m - i + 1$ with $\min(m_0, m - i + 1)$, thereby obtaining more power while maintaining the original $\alpha$-level significance.

Since $m_0$ is unknown, there are several methods used to estimate the value; see the NTRUENULL= option for more information. The estimation method described by Hochberg and Benjamini (1990) considers the graph of $1 - p_{(i)}$ versus $i$, where the $p_{(i)}$ are the ordered p-values of your tests. See Output 86.6.4 for an example. If all null hypotheses are actually true ($m_0 = m$), then the p-values behave like a sample from a uniform distribution and this graph should be a straight line through the origin. However, if points in the upper-right corner of this plot do not follow the initial trend, then some of these null hypotheses are probably false and $0 < m_0 < m$.

The ADAPTIVEHOLM option uses this estimate of $m_0$ to adjust the step-down Bonferroni method, while the ADAPTIVEHOCHBERG option adjusts the step-up Bonferroni method. Both of these methods are due to Hochberg and Benjamini (1990). When $m_0$ is known, these procedures control the familywise error rate in the same manner as their nonadaptive versions but with more power; however, since $m_0$ must be estimated, the FWE control is only approximate. The ADAPTIVEFDR and PFDR options also use $\hat{m}_0$, and are described in the following section.

The adjusted p-values for the ADAPTIVEHOLM method are computed by

\[
\tilde{p}_{(i)} =
\begin{cases}
\min(m, \hat{m}_0)\, p_{(1)} & \text{for } i = 1 \\
\max\!\left[\tilde{p}_{(i-1)},\, \min(m - i + 1, \hat{m}_0)\, p_{(i)}\right] & \text{for } i = 2, \ldots, m
\end{cases}
\]

The adjusted p-values for the ADAPTIVEHOCHBERG method are computed by

\[
\tilde{p}_{(i)} =
\begin{cases}
\min(1, \hat{m}_0)\, p_{(m)} & \text{for } i = m \\
\min\!\left[\tilde{p}_{(i+1)},\, \min(m - i + 1, \hat{m}_0)\, p_{(i)}\right] & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]
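
An illustrative Python sketch of both adaptive adjustments, given an externally supplied estimate m0_hat of the number of true null hypotheses (for example, a value you might otherwise pass through the NTRUENULL= option); the input names are hypothetical.

def adaptive_holm(raw_p, m0_hat):
    """ADAPTIVEHOLM-style step-down adjustment with an estimate m0_hat (illustrative sketch)."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])
    adjusted = [0.0] * m
    previous = 0.0
    for rank, idx in enumerate(order):                        # i = rank + 1
        mult = min(m - rank, m0_hat)                          # min(m - i + 1, m0_hat)
        previous = max(previous, min(1.0, mult * raw_p[idx]))
        adjusted[idx] = previous
    return adjusted

def adaptive_hochberg(raw_p, m0_hat):
    """ADAPTIVEHOCHBERG-style step-up adjustment with an estimate m0_hat (illustrative sketch)."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])
    adjusted = [0.0] * m
    previous = 1.0
    for rank in range(m - 1, -1, -1):                         # i = m, ..., 1
        idx = order[rank]
        mult = min(m - rank, m0_hat)
        previous = min(previous, min(1.0, mult * raw_p[idx]))
        adjusted[idx] = previous
    return adjusted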

False Discovery Rate Controlling Adjustments

Methods that control the false discovery rate (FDR) were described by Benjamini and Hochberg (1995). These adjustments do not necessarily control the familywise error rate (FWE). However, FDR-controlling methods are more powerful and more liberal, and hence reject more null hypotheses, than adjustments protecting the FWE. FDR-controlling methods are often used when you have a large number of null hypotheses. To control the FDR, Benjamini and Hochberg’s (1995) linear step-up method is provided, as well as an adaptive version, a dependence version, and bootstrap and permutation resampling versions. Storey’s (2002) pFDR methods are also provided.

The FDR option requests p-values that control the "false discovery rate" described by Benjamini and Hochberg (1995). These linear step-up adjustments are potentially much less conservative than the Hochberg adjustments.

The FDR-adjusted p-values are defined in step-up fashion, like the Hochberg adjustments, but with less conservative multipliers:

\[
\tilde{p}_{(i)} =
\begin{cases}
p_{(m)} & \text{for } i = m \\
\min\!\left(\tilde{p}_{(i+1)},\, \frac{m}{i}\, p_{(i)}\right) & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]
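
An illustrative Python sketch of this linear step-up adjustment (hypothetical input raw_p):

def fdr_linear_step_up(raw_p):
    """Benjamini-Hochberg linear step-up FDR adjustment (illustrative sketch)."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])
    adjusted = [0.0] * m
    previous = 1.0
    for rank in range(m - 1, -1, -1):                  # i = m, m-1, ..., 1
        idx = order[rank]
        i = rank + 1
        candidate = min(1.0, (m / i) * raw_p[idx])     # (m / i) * p_(i)
        previous = min(previous, candidate)
        adjusted[idx] = previous
    return adjusted

# Example with hypothetical p-values:
# fdr_linear_step_up([0.001, 0.010, 0.040, 0.200])  ->  [0.004, 0.02, 0.0533..., 0.2]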

The FDR method is guaranteed to control the false discovery rate at level $\le \frac{m_0}{m}\alpha \le \alpha$ when you have independent p-values that are uniformly distributed under their respective null hypotheses. Benjamini and Yekutieli (2001) show that the false discovery rate is also controlled at level $\le \frac{m_0}{m}\alpha$ when the positive regression dependent condition holds on the set of the true null hypotheses, and they provide several examples where this condition is true.

Note: The positive regression dependent condition on the set of the true null hypotheses holds if the joint distribution of the test statistics $\mathbf{X} = (X_1, \ldots, X_m)$ for the null hypotheses $H_{01}, \ldots, H_{0m}$ satisfies the following: $\Pr(\mathbf{X} \in A \mid X_i = x)$ is nondecreasing in $x$ for each $X_i$ for which $H_{0i}$ is true, for any increasing set $A$. The set $A$ is increasing if $\mathbf{x} \in A$ and $\mathbf{y} \ge \mathbf{x}$ imply $\mathbf{y} \in A$.

Dependent False Discovery Rate

The DEPENDENTFDR option requests a false discovery rate controlling method that is always valid for p-values under any kind of dependency (Benjamini and Yekutieli 2001), but is thus quite conservative. Let $\gamma = \sum_{i=1}^{m} \frac{1}{i}$. The DEPENDENTFDR procedure always controls the false discovery rate at level $\le \frac{m_0}{m} \alpha \gamma$. The adjusted p-values are computed as

\[
\tilde{p}_{(i)} =
\begin{cases}
\gamma\, p_{(m)} & \text{for } i = m \\
\min\!\left(\tilde{p}_{(i+1)},\, \gamma\, \frac{m}{i}\, p_{(i)}\right) & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]
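
An illustrative Python sketch, which is the linear step-up adjustment of the preceding section with the extra factor $\gamma$:

def dependent_fdr(raw_p):
    """Benjamini-Yekutieli dependent FDR adjustment (illustrative sketch):
    the linear step-up adjustment multiplied by gamma = sum_{i=1}^m 1/i."""
    m = len(raw_p)
    gamma = sum(1.0 / i for i in range(1, m + 1))
    order = sorted(range(m), key=lambda i: raw_p[i])
    adjusted = [0.0] * m
    previous = 1.0
    for rank in range(m - 1, -1, -1):
        idx = order[rank]
        i = rank + 1
        candidate = min(1.0, gamma * (m / i) * raw_p[idx])
        previous = min(previous, candidate)
        adjusted[idx] = previous
    return adjusted
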
False Discovery Rate Resampling Adjustments

Bootstrap and permutation resampling methods to control the false discovery rate are available with the FDRBOOT and FDRPERM options (Yekutieli and Benjamini 1999). These methods approximately control the false discovery rate when the subset pivotality condition holds, as discussed in the section Bootstrap, and when the p-values corresponding to the true null hypotheses are independent of those for the false null hypotheses.

The resampling methodology for the BOOTSTRAP and PERMUTATION methods is used to create $B$ resamples. For the $b$th resample, let $R^b(p)$ denote the number of resampled p-values that are less than or equal to the observed p-value $p$. Let $r_\beta(p)$ be the $100(1 - \beta)$ quantile of $\{R^1(p), \ldots, R^b(p), \ldots, R^B(p)\}$, and let $r(p)$ be the number of observed p-values less than or equal to $p$. Compute one of the following estimators:

local estimator:
\[
Q_1(p) =
\begin{cases}
\dfrac{1}{B} \displaystyle\sum_{b=1}^{B} \dfrac{R^b(p)}{R^b(p) + r(p) - pm} & \text{if } r(p) - r_\beta(p) \ge pm \\[1.5ex]
\#\{R^b(p) \ge 1\} / B & \text{otherwise}
\end{cases}
\]

upper limit estimator:
\[
Q_\beta(p) =
\begin{cases}
\displaystyle\sup_{x \in [0, p]} \left( \dfrac{1}{B} \displaystyle\sum_{b=1}^{B} \dfrac{R^b(x)}{R^b(x) + r(x) - r_\beta(x)} \right) & \text{if } r(x) - r_\beta(x) \ge 0 \\[1.5ex]
\#\{R^b(p) \ge 1\} / B & \text{otherwise}
\end{cases}
\]

where $m$ is the number of tests and $B$ is the number of resamples. Then for $Q = Q_1$ or $Q = Q_\beta$, the adjusted p-values are computed as

\[
\tilde{p}_{(i)} =
\begin{cases}
Q(p_{(m)}) & \text{for } i = m \\
\min\!\left(\tilde{p}_{(i+1)},\, Q(p_{(i)})\right) & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]

Adaptive False Discovery Rate

Since the FDR method controls the false discovery rate at $\le \frac{m_0}{m}\alpha \le \alpha$, knowledge of $m_0$ allows improvement of the power of the adjustment while still maintaining control of the false discovery rate. The ADAPTIVEFDR option requests adaptive adjusted p-values for approximate control of the false discovery rate, as discussed in Benjamini and Hochberg (2000). See the section Adaptive Adjustments for more details. These adaptive adjustments are also defined in step-up fashion but use an estimate $\hat{m}_0$ of the number of true null hypotheses:

\[
\tilde{p}_{(i)} =
\begin{cases}
\frac{\hat{m}_0}{m}\, p_{(m)} & \text{for } i = m \\
\min\!\left(\tilde{p}_{(i+1)},\, \frac{\hat{m}_0}{i}\, p_{(i)}\right) & \text{for } i = m - 1, \ldots, 1
\end{cases}
\]

Since $\hat{m}_0 \le m$, the larger p-values are adjusted down. This means that, as defined, controlling the false discovery rate enables you to reject these tests at a level less than the observed p-value. However, by default, this reduction is prevented with an additional restriction: $\tilde{p}_{(i)} = \max\{\tilde{p}_{(i)}, p_{(i)}\}$.

To use this adjustment, Benjamini and Hochberg (2000) suggest first specifying the FDR option; if at least one test is rejected at your level, then apply the ADAPTIVEFDR adjustment. Alternatively, Benjamini, Krieger, and Yekutieli (2006) apply the FDR adjustment at level $\frac{\alpha}{\alpha + 1}$, then specify the resulting number of true hypotheses with the NTRUENULL= option and apply the ADAPTIVEFDR adjustment; they show that this two-stage linear step-up procedure controls the false discovery rate at level $\alpha$ for independent test statistics.
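
The following Python sketch outlines this two-stage recipe as described above, reusing the fdr_linear_step_up sketch from the FDR section; the adaptive_fdr helper implements the ADAPTIVEFDR-style formula with the default restriction, and all input names are hypothetical.

def adaptive_fdr(raw_p, m0_hat):
    """Adaptive FDR adjustment with an estimate m0_hat (illustrative sketch).
    The max() keeps each adjusted p-value >= its raw p-value, as noted above."""
    m = len(raw_p)
    order = sorted(range(m), key=lambda i: raw_p[i])
    adjusted = [0.0] * m
    previous = float("inf")
    for rank in range(m - 1, -1, -1):
        idx = order[rank]
        i = rank + 1
        candidate = min(1.0, (m0_hat / i) * raw_p[idx])   # equals (m0_hat / m) * p_(m) when i = m
        previous = min(previous, candidate)
        adjusted[idx] = max(previous, raw_p[idx])         # default restriction
    return adjusted

def two_stage_fdr(raw_p, alpha):
    """Two-stage linear step-up sketch (Benjamini, Krieger, and Yekutieli 2006), as described above."""
    stage1 = fdr_linear_step_up(raw_p)                    # from the earlier FDR sketch
    rejected = sum(1 for p in stage1 if p <= alpha / (1.0 + alpha))
    m0_hat = len(raw_p) - rejected                        # estimated number of true nulls
    return adaptive_fdr(raw_p, m0_hat)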

Positive False Discovery Rate

The PFDR option computes the "q-values" $\hat{q}_\lambda(p_i)$ (Storey 2002; Storey, Taylor, and Siegmund 2004), which are adaptive adjusted p-values for strong control of the false discovery rate when the p-values corresponding to the true null hypotheses are independent and uniformly distributed. There are four versions of the PFDR available. Let $N(\lambda)$ be the number of observed p-values that are less than or equal to $\lambda$; let $m$ be the number of tests; let $f = 1$ if the FINITE option is specified, and otherwise set $f = 0$; and denote the estimated proportion of true null hypotheses by

\[
\hat{\pi}_0(\lambda) = \frac{m - N(\lambda) + f}{(1 - \lambda)\, m}
\]

The default estimate of FDR is

\[
\widehat{\mathrm{FDR}}_\lambda(p) = \frac{\hat{\pi}_0(\lambda)\, p}{\max(N(p), 1) / m}
\]

If you set $\lambda = 0$, then this is identical to the FDR adjustment.

The positive FDR is estimated by

\[
\widehat{\mathrm{pFDR}}_\lambda(p) = \frac{\widehat{\mathrm{FDR}}_\lambda(p)}{1 - (1 - p)^m}
\]

The finite-sample versions of these two estimators for independent null p-values are given by

\[
\begin{aligned}
\widehat{\mathrm{FDR}}^{*}_{\lambda}(p) &=
\begin{cases}
\dfrac{\hat{\pi}_0^{*}(\lambda)\, p}{\max(N(p), 1) / m} & \text{if } p \le \lambda \\[1.5ex]
1 & \text{if } p > \lambda
\end{cases} \\[1.5ex]
\widehat{\mathrm{pFDR}}^{*}_{\lambda}(p) &= \dfrac{\widehat{\mathrm{FDR}}^{*}_{\lambda}(p)}{1 - (1 - p)^m}
\end{aligned}
\]

Finally, the adjusted p-values are computed as

\[
\tilde{p}_i = \hat{q}_\lambda(p_i) = \inf_{p \ge p_i} \widehat{\mathrm{FDR}}_\lambda(p), \qquad i = 1, \ldots, m
\]

This method can produce adjusted p-values that are smaller than the raw p-values. This means that, as defined, controlling the false discovery rate enables you to reject these tests at a level less than the observed p-value. However, by default, this reduction is prevented with an additional restriction: $\tilde{p}_i = \max\{\tilde{p}_i, p_i\}$.
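
An illustrative Python sketch of the q-value computation from these formulas; the lam, finite, and positive arguments are hypothetical stand-ins for the choice of $\lambda$, the FINITE option, and the pFDR estimate, and the default restriction plus a cap at 1 are applied at the end.

def storey_qvalues(raw_p, lam=0.5, finite=False, positive=False):
    """Storey q-value computation following the formulas above (illustrative sketch).
    lam      : the tuning parameter lambda
    finite   : if True, use the finite-sample estimator (f = 1; estimate set to 1 for p > lambda)
    positive : if True, divide by 1 - (1 - p)^m to estimate the positive FDR."""
    m = len(raw_p)
    f = 1 if finite else 0
    n_lam = sum(1 for p in raw_p if p <= lam)             # N(lambda)
    pi0 = (m - n_lam + f) / ((1.0 - lam) * m)             # estimated proportion of true nulls

    def fdr_hat(p):
        n_p = sum(1 for q in raw_p if q <= p)             # N(p)
        est = pi0 * p / (max(n_p, 1) / m)
        if finite and p > lam:
            est = 1.0
        if positive:
            est = est / (1.0 - (1.0 - p) ** m)
        return est

    # q-value: infimum of the FDR estimate over p >= p_i, evaluated at the observed p-values
    order = sorted(range(m), key=lambda i: raw_p[i])
    qvals = [0.0] * m
    previous = float("inf")
    for rank in range(m - 1, -1, -1):
        idx = order[rank]
        previous = min(previous, fdr_hat(raw_p[idx]))
        qvals[idx] = min(1.0, max(previous, raw_p[idx]))  # default restriction and cap at 1
    return qvals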

Last updated: December 09, 2022