In statistics, the Durbin–Watson statistic is a test statistic used to detect the presence of autocorrelation at lag 1 in the residuals (prediction errors) from a regression analysis. It is named after James Durbin and Geoffrey Watson. The small sample distribution of this ratio was derived by John von Neumann (von Neumann, 1941). Durbin and Watson (1950, 1951) applied this statistic to the residuals from least squares regressions and developed bounds tests for the null hypothesis that the errors are serially uncorrelated against the alternative that they follow a first order autoregressive process. Later, John Denis Sargan and Alok Bhargava developed several von Neumann–Durbin–Watson type test statistics for the null hypothesis that the errors on a regression model follow a process with a unit root against the alternative hypothesis that the errors follow a stationary first order autoregression (Sargan and Bhargava, 1983). Note that the distribution of this test statistic does not depend on the estimated regression coefficients and the variance of the errors.

Although serial correlation does not affect the consistency of the estimated regression coefficients, it does affect our ability to conduct valid statistical tests. First, the F-statistic to test for overall significance of the regression may be inflated under positive serial correlation because the mean squared error (MSE) will tend to underestimate the population error variance. Second, positive serial correlation typically causes the ordinary least squares (OLS) standard errors for the regression coefficients to underestimate the true standard errors. As a consequence, if positive serial correlation is present in the regression, standard linear regression analysis will typically lead us to compute artificially small standard errors for the regression coefficients. These small standard errors will cause the estimated t-statistic to be inflated, suggesting significance where perhaps there is none. The inflated t-statistic may, in turn, lead us to incorrectly reject null hypotheses about population values of the parameters of the regression model more often than we would if the standard errors were correctly estimated.

This statistic can be compared with tabulated rejection values. These values are calculated dependent on T (the length of the balanced panel, i.e. the number of time periods over which the individuals were surveyed), K (the number of regressors), and N (the number of individuals in the panel).
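As a minimal sketch of how the statistic is computed from regression residuals, the code below implements the standard definition d = Σ_{t=2}^{T}(e_t − e_{t−1})² / Σ_{t=1}^{T} e_t², assuming NumPy is available; the function name `durbin_watson` and the example residuals are illustrative, not part of the original text. Values of d near 2 are consistent with no lag-1 autocorrelation, values well below 2 suggest positive serial correlation, and values near 4 suggest negative serial correlation.

```python
import numpy as np

def durbin_watson(residuals):
    """Durbin-Watson statistic for a sequence of regression residuals.

    d = sum((e_t - e_{t-1})^2 for t=2..T) / sum(e_t^2 for t=1..T)
    """
    e = np.asarray(residuals, dtype=float)
    # np.diff gives the successive differences e_t - e_{t-1}.
    return np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

# Perfectly positively correlated residuals (constant sign, no change): d = 0.
print(durbin_watson([1.0, 1.0, 1.0, 1.0]))   # -> 0.0
# Alternating-sign residuals (strong negative autocorrelation): d near 4.
print(durbin_watson([1.0, -1.0, 1.0, -1.0]))  # -> 3.0
```

In practice one would compute this on the residuals of a fitted OLS model and compare d against the tabulated lower and upper bounds for the given sample size and number of regressors, as described above.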