- Non-nested model comparison is relatively underexplored in SEM.
- Non-nested tests are specifically designed to compare competing models that involve different variables.

**ECVI**

- The ECVI (expected cross-validation index), in its usual variant, is equivalent to the BCC and is useful for comparing non-nested models; a lower ECVI indicates better fit. The ECVI estimates which model will cross-validate best in another sample of the same size, similarly selected. Choose the model with the lowest ECVI.
- Measures like the AIC and ECVI are good for comparing non-nested models.
- If you are interested in comparing non-nested models (where one model cannot be derived from the other through suitable parameter restrictions), you can employ non-nested tests. In essence, criteria such as the AIC and ECVI provide a ranking only; they cannot tell you whether one model outperforms another in a statistical sense. The single-sample cross-validation indices, such as Akaike's Information Criterion (AIC) and the Expected Cross-Validation Index (ECVI), rank a set of competing models, which may or may not be nested, by their cross-validation capacity: the model chosen is the one expected to cross-validate best in another sample of the same size and similarly selected. These measures do not make sense in isolation, only comparatively. You would choose the model with the smallest value of either the AIC or the ECVI. I'm naming these two, but there are others; perhaps the ECVI is the simplest.
- Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research 24(4), 445-455.
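As a sketch of the bookkeeping involved, the following uses one common variant of the index, ECVI = (χ² + 2q)/(N − 1) (cf. Browne & Cudeck, 1989); the model names and fit values are hypothetical, not output from any real analysis.

```python
# Hedged sketch of an ECVI comparison between two non-nested models.
# Variant used here: ECVI = (chi2 + 2q) / (N - 1), where chi2 is the model
# chi-square, q the number of free parameters, and N the sample size.

def ecvi(chi2, q, n):
    """Expected cross-validation index; lower = better expected cross-validation."""
    return (chi2 + 2 * q) / (n - 1)

# Two hypothetical competing (non-nested) models fit to the same data, N = 301:
fits = {"model_A": (85.2, 24), "model_B": (97.8, 19)}  # (chi2, q)
scores = {name: ecvi(chi2, q, 301) for name, (chi2, q) in fits.items()}
best = min(scores, key=scores.get)  # choose the model with the smallest ECVI
```

Note that the scores only mean something relative to each other: each is an estimate of expected discrepancy in a fresh sample, so only the ordering matters.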

**AIC--Akaike's information criterion**

- one can compare non-nested competing models. Then one could choose from among them the one with the smallest AIC.
- One value of the AIC is that it allows researchers to choose among competing non-nested models on predictive-validity criteria. However, we rarely find outright comparisons of competing models. Typically we examine our own models and modify them according to statistical and substantive criteria. In this case, one can observe changes in the AIC as modifications proceed: as long as the AIC is going down, the added parameter will "cross-validate" well in a future sample of the same size.
- As modifications proceed, the AICs will drop dramatically.
- AIC is fine to use when models are non-nested, but only when the same variables are used in the competing models. If competing models had the *same number* of manifest variables (but not necessarily the *same* manifest variables) and the same sample size, I could see a case for arguing that AIC-like criteria could be used to determine "which model best fits its own data". The problem is that the point estimates of AIC-like criteria are partially dependent upon the number of manifest variables. If different models have different numbers of manifest variables, then it would be difficult to meaningfully compare values across models.
- AIC is not a test to test whether model A is a statistically significant improvement over model B. AIC is a decision criterion, not a test.
- "Model AIC" is the value associated withthe model you are estimating. If thecompeting models involve the same observedvariables, then "Independence model AIC"and "Saturated model AIC" will be the samefor all the competing models.
- Akaike, H. (1987). Factor analysis and AIC. Psychometrika 52(3), 317-332.
- Haughton et al. (1997). Information and other criteria in structural equation model selection. Communications in Statistics: Simulation and Computation 26, 1477-1516.
- Kieseppä, I. A. (2003). AIC and large samples. Philosophy of Science 70(5), 1265-1276.
- Kumar, A., & Sharma, S. (1999). A metric measure for direct comparison of competing models in covariance structure analysis. Structural Equation Modeling, 6, 169-197.
- Rigdon, E. E. (1999). Using the Friedman method of ranks for model comparison in structural equation modeling. Structural Equation Modeling, 6, 219-232.
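To make the "ranking, not testing" point above concrete, here is a minimal sketch using the SEM convention AIC = χ² + 2q (cf. Akaike, 1987); the model names and fit statistics are invented for illustration only.

```python
# Hedged sketch: AIC as a decision criterion (not a significance test) for
# competing models fit to the SAME observed variables and the same sample.
# Convention used here: AIC = chi2 + 2q, with q estimated parameters.

def aic(chi2, q):
    return chi2 + 2 * q

# Hypothetical competing models: (chi2, q)
candidates = {"theory_1": (120.5, 30), "theory_2": (110.2, 36)}
ranked = sorted(candidates, key=lambda m: aic(*candidates[m]))
# ranked[0] has the smallest AIC and is expected to cross-validate best in
# another sample of the same size; the AIC gap is only an ordering, not a
# p-value for one model outperforming the other.
```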

**BIC--Bayesian information criterion**

- Bayesian Information Criterion values can be compared between the two models by taking the difference of the models' BICs. If the difference is 5 points or more, the models are likely to be different; 10 points or more gives near certainty. If the models are different, favor the model with the lowest BIC; otherwise, favor the more parsimonious model.
- The BIC can be used for comparing non-nested models. Raftery (1995) presents guidelines for interpreting differences between BIC values. The smaller the BIC (as a negative number), the better the fit. A difference of at least 6 is taken as strong evidence that the model with the smaller BIC value fits better than the other model. Use of BIC for hypothesis testing or selecting models may be controversial, because its theoretical rationale is in Bayesian statistics, not in classical significance testing. In addition, it must be stressed that BIC gives only an approximation; computing the actual Bayes factor in an SEM context may be too complicated. From a practical point of view, for small samples BIC may select models the researcher will consider too simple from a theoretical point of view. Raftery, A. E. (1995). Bayesian model selection in social research. In P. V. Marsden (Ed.), Sociological Methodology 1995. Oxford: Blackwell.
- For BIC, Raftery (1993) has suggested a difference of 5 points or more is indicative of a difference whereas 10 or more is near certainty of a difference (favoring the lower BIC).
- In Nagin, D. S. (1999). Analyzing developmental trajectories: A semiparametric, group-based approach. Psychological Methods 4(2), 139-157, the author uses the Bayesian information criterion (BIC) "as a basis for selecting the optimal model. Kass and Raftery (1995) and Raftery (1995) have argued that BIC can be used for comparison of both nested and unnested models under fairly general circumstances" (p. 147).
- Haughton et al. (1997). Information and other criteria in structural equation model selection. Communications in Statistics: Simulation and Computation 26, 1477-1516. The best performer in this simulation study was BIC*.
- Raftery, A. E. (1995). Bayesian model selection in social research. In P. V. Marsden (Ed.), Sociological methodology (pp. 111-163). Cambridge: Basil Blackwell.
- Suppose M1 and M2 are two non-nested competing models. The difference BIC12 (between the BICs of M1 and M2) is commonly applied to model comparisons: BIC12 > 2 and BIC12 > 6 indicate, respectively, positive and strong support for M2. When M1 is nested within M2, no definite conclusion can be drawn if BIC12 lies between 0 and 2. For example, suppose M1 is an unstructured covariance matrix (a saturated model) and M2 is a correlated 4-factor model: if BIC12 = 687, we select M2. If M3 is an uncorrelated 4-factor model and BIC32 = 378, we again select M2; thus the correlated 4-factor CFA model fits the data much better. Likewise, if BIC54 = 492 we select M4, and if BIC76 = 401 we select M6. ---Lee, S. Y., Song, X. Y., Skevington, S., & Hao, Y. T. (2005). Application of structural equation models to quality of life. Structural Equation Modeling 12(3), 435-453.
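The Raftery-style guidelines quoted in this section can be sketched as follows. The BIC form χ² − df·ln(N) and the rough cut-points follow the sources cited above, while the model fit numbers fed in are hypothetical.

```python
import math

# Hedged sketch of a BIC difference with a rough Raftery-style interpretation.
# BIC form used here: BIC = chi2 - df * ln(N); smaller (more negative) = better.

def bic(chi2, df, n):
    return chi2 - df * math.log(n)

def interpret_diff(bic_1, bic_2):
    """Rough strength of evidence for the model with the SMALLER BIC."""
    d = abs(bic_1 - bic_2)
    if d < 2:
        return "weak / indeterminate"
    elif d < 6:
        return "positive"
    elif d < 10:
        return "strong"
    return "very strong"

# Hypothetical competing models fit to the same data (N = 300):
b1 = bic(chi2=130.0, df=40, n=300)
b2 = bic(chi2=95.0, df=34, n=300)
verdict = interpret_diff(b1, b2)  # here the gap is under 2: no clear winner
```

With these invented numbers the two BICs land within 2 points of each other, illustrating the "no definite conclusion" zone; a gap of 6+ would instead favor the lower-BIC model.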

**Econometrics---encompassing principle**

- Econometricians have done much more than SEM users have in this area. One approach is called the "encompassing principle" (cites below). Here the models to be compared, say A and B, need not be nested, but there must be some model C such that both A and B are nested within C. (C may be equal to A or to B.)
- Mizon, Grayham E. (1984). The encompassing approach in econometrics. In David F. Hendry & Kenneth F. Wallis (Eds.), Econometrics and Quantitative Economics (pp. 135-172). Oxford, UK: Basil Blackwell.
- Richard, Jean-Francois (1986). The encompassing principle and its application to testing non-nested hypotheses. Econometrica 54 (May), 657-678.
- Friedman, Milton (1937). The use of ranks to avoid the assumption of normality implicit in the analysis of variance. Journal of the American Statistical Association 32, 675-701.
- Bollen, K. A., & Ting, K. (1993). Confirmatory tetrad analysis. Sociological Methodology 23, 147-175.

**2SLS (two-stage least squares) Estimator for SEM (Structural Equation Models)**

- Non-nested tests can be adopted for SEM by using Ken Bollen's 2SLS estimator, which works on single equations: http://csusap.csu.edu.au/~eoczkows/home.htm#2sls
- Farrell, M., & Oczkowski, E. (2002). Are market orientation and learning orientation necessary for superior organizational performance? Journal of Market-Focused Management 5(3), 197-217.
- Oczkowski, E. (2002). Discriminating between measurement scales using non-nested tests and 2SLS: Monte Carlo evidence. Structural Equation Modeling 9, 103-125.
- Oczkowski, E., & Farrell, M. (1998). Discriminating between measurement scales using non-nested tests and two-stage estimators: The case of market orientation. International Journal of Research in Marketing 15(4), 349-366.

**Reading list**

- Kim, I. J., Zane, N. W. S., & Hong, S. (2002). Protective factors against substance use among Asian American youth: A test of the peer cluster theory. Journal of Community Psychology 30, 565-584. ---example paper
- Rust, R. T., Lee, C., & Valente, E., Jr. (1995). Comparing covariance structure models: A general methodology. International Journal of Research in Marketing 12, 279-291. --The Rust paper uses the BCVL (Bayesian cross-validated likelihood) method. This method employs the traditional MLE procedure for estimation but then employs a Bayesian philosophy for model comparison and requires suitable samples for cross-validation.
- Browne, M. W., & Cudeck, R. (1989). Single sample cross-validation indices for covariance structures. Multivariate Behavioral Research 24(4), 445-455.
- Marcoulides, G. A., & Hershberger, S. L. (1997). Multivariate statistical methods: A first course. Mahwah, NJ: Lawrence Erlbaum Associates.
- Maruyama, G. M. (1998). Basics of structural equation modeling. Thousand Oaks, CA: Sage Publications.
- Kline, R. B. (1998). Principles and practice of structural equation modeling. New York: The Guilford Press.
- Kaplan, D. (2000). Structural equation modeling: Foundations and extensions. Thousand Oaks, CA: Sage.
- Levy, R., & Hancock, G. R. (2004). A framework of statistical tests for comparing covariance structure models. Paper presented at the Annual Meeting of the American Educational Research Association, San Diego, CA, April 2004.
