The HPPLS Procedure

Reduced Rank Regression

As discussed in the preceding sections, partial least squares depends on selecting factors $\mathbf{t} = \mathbf{X}\mathbf{w}$ of the predictors and $\mathbf{u} = \mathbf{Y}\mathbf{q}$ of the responses that have maximum covariance, whereas principal components regression effectively ignores $\mathbf{u}$ and selects $\mathbf{t}$ to have maximum variance, subject to orthogonality constraints. In contrast, reduced rank regression selects $\mathbf{u}$ to account for as much variation in the predicted responses as possible, effectively ignoring the predictors for the purposes of factor extraction. In reduced rank regression, the Y-weights $\mathbf{q}_i$ are the eigenvectors of the covariance matrix $\hat{\mathbf{Y}}_{\mathrm{LS}}'\hat{\mathbf{Y}}_{\mathrm{LS}}$ of the responses that are predicted by ordinary least squares regression, and the X-scores are the projections of the Y-scores $\mathbf{Y}\mathbf{q}_i$ onto the X space.
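The factor extraction described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the HPPLS implementation; the toy data and variable names are assumptions made for the example.

```python
import numpy as np

# Illustrative toy data (assumed for this sketch): n observations,
# p predictors, m responses.
rng = np.random.default_rng(0)
n, p, m = 50, 4, 3
X = rng.standard_normal((n, p))
Y = X @ rng.standard_normal((p, m)) + 0.1 * rng.standard_normal((n, m))

# Ordinary least squares prediction of Y from X: Y_hat = X (X'X)^{-1} X' Y
B, *_ = np.linalg.lstsq(X, Y, rcond=None)
Y_hat = X @ B

# Y-weights q_i: eigenvectors of Y_hat' Y_hat, ordered by decreasing eigenvalue
evals, evecs = np.linalg.eigh(Y_hat.T @ Y_hat)
order = np.argsort(evals)[::-1]
Q = evecs[:, order]          # columns are the Y-weights q_i

# Y-scores u_i = Y q_i, and X-scores as the orthogonal projection
# of the Y-scores onto the column space of X
U = Y @ Q
C, *_ = np.linalg.lstsq(X, U, rcond=None)
T = X @ C                    # X-scores: projection of U onto span(X)
```

Because the eigenvectors returned for a symmetric matrix are orthonormal, the columns of `Q` form an orthonormal set, and the projection residual `U - T` is orthogonal to every column of `X`.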

Last updated: December 09, 2022