Tuesday, May 28, 2019
Regression Results :: Research Analysis
3.3.4. Results

For the purpose of finding a suitable function for benefits transfer, several meta-regression models were estimated: (i) different functional forms (e.g., a simple linear form versus a semi-log form); (ii) a fully specified model including all independent variables and a restricted model trimmed on grounds of statistical significance or econometric problems (e.g., multicollinearity); and (iii) robust (heteroskedasticity-consistent) standard errors to correct for heteroskedasticity. As shown by the test for heteroskedasticity (see Table 3.7), the simple linear form exhibits heteroskedasticity. There are several ways to correct for heteroskedasticity (e.g., GLS, WLS, robust standard errors, and data transformation). For this study, robust standard errors and data transformation (e.g., the log transformation of the dependent variable) are used. All independent variables are initially included, even if later dropped on grounds of statistical significance or econometric problems (e.g., multicollinearity). Some variables (e.g., MSW and ACTIV) are dropped because they exhibit multicollinearity and/or are statistically insignificant at the 20% level, following the approach to optimizing meta-regression transfer models suggested by Rosenberger and Loomis (2001, 2003).

A wide range of diagnostic tests was conducted on each regression for benefits transfer (as suggested by Walton et al. 2006). The R2 for the overall fit of the regression, hypothesis tests (F tests and t tests), and diagnostics (e.g., the skewness-kurtosis normality test, Ramsey's RESET test for specification error, a heteroskedasticity test, and a multicollinearity assessment) are reported. The F test assesses the null hypothesis that all or some coefficients (β) on the model's explanatory variables equal zero, i.e., H0: β1 = β2 = … = βk = 0 for all or some coefficients (Wooldridge 2003).
A linear restriction test on a subset of coefficients is useful before dropping variables whose estimates are unreliable due to multicollinearity (Hamilton 2004). An important issue when handling small samples is the potential for multicollinearity, i.e., a high degree of linear relationship between explanatory variables (Walton et al. 2006). High correlation between estimated coefficients on explanatory variables in small samples can produce several problems: (i) substantially higher standard errors with lower t statistics (a greater chance of falsely accepting the null hypothesis in standard significance tests); (ii) unexpected changes in coefficient magnitudes or signs; and (iii) statistically insignificant coefficients despite a high R2 (Hamilton 2004). A number of tests exist to indicate the presence and severity of multicollinearity (e.g., Durbin-Watson tests, VIF, tolerance, and a correlation matrix between estimated coefficients). One such test is the variance inflation factor (VIF), which measures the degree to which the variance and standard error of an estimated coefficient increase because of the inclusion of that explanatory variable.