Some Modifications to Calculate Regression Coefficients in Multiple Linear Regression

Document Type: Research Paper

Authors

1 Department of Industrial Engineering, University of Science and Culture, P.O. Box 13145-871, Tehran, Iran

2 Department of Mathematics, Azad University, P.O. Box 81595-158, Khorasgan, Isfahan, Iran

Abstract

In a multiple linear regression model, there are instances where the regression parameters must be updated: as new data become available, a row is added to the design matrix, and the least-squares estimates of the parameters must be recomputed to reflect the impact of the new observation. We modify two existing methods for calculating regression coefficients in multiple linear regression to make these computations more efficient. Starting from an initial solution, we first employ the Sherman-Morrison formula to update the inverse of the product of the transpose of the design matrix and the design matrix. We then modify the Cholesky decomposition of that same product to solve the updated system. Finally, we compare the two modifications on several appropriate examples.
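The row-addition update described in the abstract can be sketched in a few lines of NumPy/SciPy. The code below is a minimal illustration under our own assumptions, not the paper's implementation: the random test data and the names `x_new` and `y_new` are ours, and the Cholesky step simply refactorizes the updated normal-equations matrix rather than reproducing the authors' modified decomposition. It applies the Sherman-Morrison formula to (X^T X)^{-1} after one row is appended to the design matrix, and checks both routes against a direct least-squares fit.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Synthetic regression data (illustration only).
rng = np.random.default_rng(0)
n, p = 50, 4
X = rng.standard_normal((n, p))
y = rng.standard_normal(n)

# Quantities cached from the original fit.
A_inv = np.linalg.inv(X.T @ X)   # (X^T X)^{-1}
b = X.T @ y                      # X^T y

# One new observation (x_new, y_new) arrives, appending a row to X.
x_new = rng.standard_normal(p)
y_new = rng.standard_normal()

# Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u).
# Here u = v = x_new, since X^T X gains the rank-one term x_new x_new^T.
Au = A_inv @ x_new
A_inv_updated = A_inv - np.outer(Au, Au) / (1.0 + x_new @ Au)
beta_sm = A_inv_updated @ (b + y_new * x_new)

# Cholesky route: X^T X + x_new x_new^T is symmetric positive definite,
# so the updated normal equations can be solved via its Cholesky factor.
A_new = X.T @ X + np.outer(x_new, x_new)
c, low = cho_factor(A_new)
beta_chol = cho_solve((c, low), b + y_new * x_new)

# Both should match a direct least-squares fit on the augmented data.
X_aug = np.vstack([X, x_new])
y_aug = np.append(y, y_new)
beta_direct, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
print(np.allclose(beta_sm, beta_direct), np.allclose(beta_chol, beta_direct))
```

With (X^T X)^{-1} cached, the Sherman-Morrison update costs on the order of p^2 operations per new row, versus the order-p^3 cost of refactorizing or re-inverting from scratch; this gap is the efficiency the two modifications target.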
