## Tuesday, May 15, 2007

### standards in CFA

• You want to see standardized loadings with absolute values in the range of 0.7 to 1.0, which imply squared multiple correlations between 0.5 and 1.0 (for measures that load on only one factor).

### single indicator for a construct

• To take the measurement error in each scale into account, alpha can be used to set the value of the error variance: (1 - alpha) * var(measured variable)
• Using alpha to set values for (measurement) error terms is not really a great way to proceed, since alpha might not reflect the true proportion of valid variance in the items (i.e., the variance attributable to your latent construct): alpha is essentially a function of the average correlation among the items, so it could be influenced by correlated errors among certain items (i.e., correlations not due to your latent construct).
• If we do not want to estimate the reliability of the item, we can fix its factor loading to 1 and its residual variance to zero. --- Mallard, A. G. C., & Lance, C. E. (1998). Development and evaluation of a parent-employee interrole conflict scale. Social Indicators Research, 45, 343-370
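As a quick sketch, the error-variance rule above is simple arithmetic; the alpha and variance values here are made-up numbers for illustration.

```python
# Fixing the error variance of a single-indicator latent variable using the
# scale's Cronbach's alpha: error variance = (1 - alpha) * var(x).
# A sketch; the values below are invented.

def single_indicator_error_variance(alpha, observed_variance):
    """Error variance to fix for a single indicator: (1 - alpha) * var(x)."""
    return (1.0 - alpha) * observed_variance

# Example: alpha = .85, observed variance = 2.0
theta = single_indicator_error_variance(0.85, 2.0)
print(round(theta, 4))  # 0.3 (15% of the observed variance treated as error)
```

The fixed loading would then carry the remaining (valid) variance; as the bullet above warns, the result is only as trustworthy as alpha itself.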

## Saturday, May 12, 2007

### Heywood cases (negative error variances or squared multiple correlations greater than 1)

• Negative or near zero variance estimates can be due to: 1) Identification problems, 2) Outliers or influential cases, 3) Sampling fluctuations, or 4) model misspecifications.
• If R2 is over 1, this usually means that error variance is negative, and a negative variance will usually produce a "not positive definite" matrix. It could be that you are
overcorrecting for unreliability, leaving too little variance to be explained in the construct, and thus forcing structural error variance to be negative.
• Sampling fluctuations: if your Heywood case is not statistically significantly different from zero, you don't really have to worry. Check the standard error (SE) of the variance term (the variance estimate has a standard error too) and build a 95% confidence interval around it. If the interval contains zero or some positive value, the negative estimate is as likely to be due to sampling error as to a deficiency in the model, and you're probably OK. If the offending estimate really is due to random sampling error (a small but positive population value becomes a negative sample estimate because of an unusual random sample), you can fix the error variance of the indicator to an acceptable value, such as zero or a small positive value, e.g., 0.02 or 0.0001.
• Finally, if you rule every other reason out, then your model must be misspecified. In a CFA framework, it probably means you have some unforeseen measurement error correlations or missing links in the model.
• If you find a statistically significant negative error variance, this is evidence of misspecification
• It might be good to do a sensitivity analysis: try assigning a range of values to the measurement error and observe the impact on your results. If the specification of the error variance is crucial to the outcome, that's worth noting. In the sensitivity analysis, try some different reliabilities, like 100%, 80%, 60%, and observe the impact on your SEM results. If one of those values causes things like negative error variances, rule it out. But you may be left with multiple solutions.
• Dillon, W. R., Kumar, A., & Mulani, N. (1987). Offending estimates in covariance structure analysis: Comments on the causes of and solutions to Heywood cases. Psychological Bulletin, 101(1), 126-135.
• Gerbing, D. W., & Anderson, J. C. (1987). Improper solutions in the analysis of covariance structures: Their interpretability and a comparison of alternate respecifications. Psychometrika, 52, 99-111.
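The confidence-interval check described above is simple to script; a minimal sketch (the estimate and standard error are made-up values):

```python
# Checking whether a negative error-variance estimate (a Heywood case) is
# plausibly just sampling error: build a 95% CI (estimate +/- 1.96 * SE)
# and see whether it covers zero or positive values.

def heywood_ci_check(estimate, se, z=1.96):
    """Return (lower, upper, covers_nonnegative) for a variance estimate."""
    lower, upper = estimate - z * se, estimate + z * se
    return lower, upper, upper >= 0.0

# A small negative variance with a large SE: likely sampling fluctuation.
lo, hi, ok = heywood_ci_check(-0.05, 0.04)
print(round(lo, 3), round(hi, 3), ok)  # -0.128 0.028 True

# The same estimate with a tiny SE: significantly negative, suggesting
# misspecification rather than sampling error.
print(heywood_ci_check(-0.05, 0.01)[2])  # False
```

If the interval covers zero, fixing the error variance to zero or a small positive value (as suggested above) is defensible.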

### Content validity (face validity)

• Content validity, also called face validity, refers to items seeming to measure what they claim to (studies can be internally valid and statistically valid, yet use measures lacking face validity).
• In content validity one is also concerned with whether the items measure the full domain implied by their label. Though derogated by some psychometricians as too subjective, failure of the researcher to establish credible content validity may easily lead to rejection of his or her findings.
• Use of surveys of panels of content experts or focus groups of representative subjects are ways in which content validity may be established, albeit using subjective judgments.
• Are the measures which operationalize concepts ones which seem by common sense to have to do with the concept? Or could there be a naming fallacy? Indicators may display construct validity, yet the label attached to the concept may be inappropriate.

### Convergent validity

• Convergent validity also requires that SMCs be equal to or greater than .5, along with pattern coefficients equal to or greater than .7. Other useful assessments are composite reliability and average variance extracted (AVE). Composite reliability should be equal to or greater than .7, and AVE should be greater than .5.
• Convergent validity is assessed by the correlation among items which make up the scale or instrument measuring a construct (internal consistency validity)
• Internal consistency is a type of convergent validity which seeks to assure there is at least moderate correlation among the indicators for a construct
• Cronbach's alpha is commonly used to establish internal consistency construct validity, with .60 considered acceptable for exploratory purposes, .70 considered adequate for confirmatory purposes, and .80 considered good for confirmatory purposes.
• Average variance extracted (AVE) values above 0.5 are treated as indications of convergent validity.
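The AVE and composite reliability cutoffs above can be computed directly from standardized loadings; a sketch with invented loadings, assuming each indicator's error variance is 1 minus its squared standardized loading:

```python
# AVE and composite reliability (CR) from standardized factor loadings.
# The loadings below are invented for illustration.

def ave(loadings):
    """Average variance extracted: mean of the squared standardized loadings."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances)."""
    s = sum(loadings) ** 2
    theta = sum(1 - l ** 2 for l in loadings)  # error variances, assuming
    return s / (s + theta)                     # standardized indicators

loadings = [0.7, 0.8, 0.9]
print(round(ave(loadings), 3))                    # 0.647 -> above the .5 cutoff
print(round(composite_reliability(loadings), 3))  # 0.845 -> above the .7 cutoff
```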

### Discriminant validity

• Discriminant validity analysis refers to testing statistically whether two constructs differ; convergent validity is tested by measuring the internal consistency within one construct, as Cronbach's alpha does
• Indicators for different constructs should not be so highly correlated as to lead one to conclude that they measure the same thing; this would happen if there is definitional overlap between the constructs.
Methods to test discriminant validity
Correlation method: checking the correlation between constructs directly
• In a less stringent test of discriminant validity, researchers often reject an indicator if it correlates more highly with a construct different from the one it was intended to measure. Some researchers use r = .85 as a rule-of-thumb cutoff for this assessment, fearing that correlations above this level signal definitional overlap of concepts. --- correlation among indicators of different constructs
Correlation method: testing the correlation between two constructs individually
• Estimate the true correlation between the constructs (purged of errors of measurement) and obtain a confidence interval for it, either by bootstrapping or from the standard error of the estimate (the parameter estimate plus/minus twice the standard error gives an approximate 95% confidence interval). If the interval does not include 1.0, the correlation between the two constructs is significantly (p < .05) less than 1.00; thus, the two constructs are distinct, not identical. -- testing whether a factor correlation is less than 1.0
• First, we have paired correlations among the latent variables. Second, we can compute the confidence interval of each paired correlation. If the value of 1 is not included within the computed confidence interval, we can say discriminant validity is supported -- from Torkzadeh, Koufteros, & Pflughoeft (2003)
• When you have a hypothesis that a parameter is equal to a constant, e.g., that the factor correlation is equal to unity (1), and a Type I error rate, i.e., alpha = .05, you reject the null hypothesis if the constant falls outside the confidence interval and fail to reject it if it falls inside.
• Set up a confidence interval around the factor correlation. If it includes unity (1), that runs against your model that these are two different factors. If the interval excludes unity, we cannot claim that they are the same factor.
• The degree of correlation between two constructs has nothing whatsoever to do with the validity of either construct. They could be very highly correlated, say at the .85 level, and still be absolutely distinct, with no overlap in meaning or content.
• The correlation coefficient between the two constructs was also significantly less than 1 (i.e., the confidence interval, plus or minus two standard errors, did not contain the value of 1; confidence intervals can also be obtained through the bootstrap method), which also offers support for the discriminant validity of the two constructs (Anderson & Gerbing, 1988; Bagozzi, 1980; Netemeyer, Johnston, & Burton, 1990). Examining the confidence intervals of the paired correlations among the latent variables: if the confidence interval of a paired correlation doesn't include the value of 1, that is evidence of discriminant validity (Torkzadeh, Koufteros, & Pflughoeft, 2003)
• We compute the 95% confidence interval for the correlation between two latent variables; whether the interval encompasses 1 determines whether discriminant validity is demonstrated. The smaller the value of the correlation, the greater the degree of discriminant validity.
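A minimal sketch of the interval check described in these bullets (the correlation and its standard error are invented numbers; in practice they come from the CFA output or a bootstrap):

```python
# Discriminant validity via the confidence interval of a factor correlation:
# if the 95% CI (estimate +/- ~2 * SE) excludes 1.0, the two constructs are
# distinguishable.

def correlation_ci_excludes_one(r, se, z=1.96):
    """True when the upper CI bound for the factor correlation is below 1."""
    upper = r + z * se
    return upper < 1.0  # True -> evidence of discriminant validity

print(correlation_ci_excludes_one(0.85, 0.05))  # True: CI is about (0.75, 0.95)
print(correlation_ci_excludes_one(0.95, 0.04))  # False: CI reaches past 1.0
```

Note the asymmetry flagged above: a high correlation (e.g., .85) can still be significantly below 1 if its standard error is small.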
Factor Method
• Some researchers conclude that constructs are different if their respective indicators load most heavily on different factors in principal components factor analysis (see Straub, 1989). In one version of this approach, all items for all constructs are factored. In a more stringent version, indicator items for each pair of constructs are factored separately.
AVE (average variance extracted) method
• Obtain the AVE value (see composite reliability for the formula to calculate AVE) and compare it to the squared correlation between the two constructs of interest. The average variance extracted should be greater than the squared correlation in order to demonstrate satisfactory discriminant validity.
• If the average variances extracted by the correlated latent variables are greater than the square of the correlation between the latent variables, then discriminant validity obtains (Fornell & Larcker, 1981).
• The squared correlation between a construct and any other construct in your model should be smaller than the AVE for both of the constructs.
• AVE (average variance extracted) for the constructs should be greater than their squared correlation (shared variance).
• Fornell and Larcker's (1981) criterion: evidence of discriminant validity is shown if the average variance extracted (AVE) is greater than the square of the construct's correlations with the other factors.
• Compare the variance extracted estimates of the constructs with the square of the parameter estimate (correlation) between the constructs. If the variance extracted estimates of construct A and construct B are greater than the square of the correlation between the two constructs, evidence of discriminant validity exists (Fornell & Larcker, 1981). Netemeyer, R. G., Johnston, M. W., & Burton, S. (1990). Analysis of role conflict and role ambiguity in a structural equations framework. Journal of Applied Psychology, 75(2), 148-157.
• For example, with 3 constructs, using confirmatory factor analysis I obtained the correlations of the paired constructs: 0.271, 0.435, 0.431. The variance extracted estimates were 0.719, 0.534, and 0.28. All of the average variance extracted estimates exceed the squares of the correlations between the constructs: (0.271)^2 = 0.073, (0.435)^2 = 0.189, (0.431)^2 = 0.186. Thus, discriminant validity exists.
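The worked example above can be verified mechanically. This sketch reuses the same numbers; the assignment of each correlation to a particular construct pair is assumed for illustration (the note doesn't state the pairing, and with these values any pairing passes):

```python
# Fornell-Larcker check for the three-construct example above:
# each construct's AVE should exceed the squared correlation of every
# pair that construct belongs to. Pairing of r values is assumed.

aves = {1: 0.719, 2: 0.534, 3: 0.28}
correlations = {(1, 2): 0.271, (1, 3): 0.435, (2, 3): 0.431}

supported = all(
    aves[a] > r ** 2 and aves[b] > r ** 2
    for (a, b), r in correlations.items()
)
print(supported)  # True: discriminant validity holds for all pairs
```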
Fit index method
• I have a factor model comprising three correlated factors, each measured with 4 items. Is there a way to test the discriminant validity of the three factors?
• One way to test whether the factors discriminate from each other is to determine whether the manifest indicators are best represented by three, two, or one latent factor(s). You can then compare the ECVI and AIC indexes (because the models are non-nested). The lower the values of ECVI and AIC, the better the model (taking chi-square and other fit indexes into account). If you get a better fit with three factors than with two or one, you have some evidence of discriminant validity.
• You can use the AIC or ECVI to determine which model fits the data better (lower values indicate better fit).
SEM method
• Do NOT restrict the covariance between two constructs to be 1; restrict the correlation between the two constructs to be 1.
• In Amos, object properties can be used to constrain covariances of exogenous variables to a particular value, but there is no direct way to do this for correlations; Amos only allows you to fix the COVARIANCE, not the CORRELATION. However, if you scale your latent variables by fixing the variance of each exogenous latent variable to 1, then you are in effect dealing with correlations among them. Scale each latent variable by fixing its variance to 1, and do not fix the loadings of any of the indicators to 1. Because the variance of the latent variables is fixed to 1, the covariance is essentially a correlation.
• In Amos, you need to standardize the variance of the latents by setting their variance to 1.0 (instead of fixing one of the factor loadings). Thereafter, the constraint you impose on the correlation between the latents will be reflected in the estimation. This assumes that you don't have multiple groups, in which case you should estimate the model for each group separately.
• If you use Amos or Mplus, you can only fix the covariance. To put the software in a "correlation mode" you must do this:
1. Fix the variance of each latent to 1.
2. Free the "1" constraint on the loading of the scaling indicator (the observed variable used to scale the latent).
3. Fix the covariance (now a correlation, because of steps 1 and 2) between the two latents to 1.
• You can do a chi-square difference test between the constrained model (i.e., where the correlation is fixed to 1) and the unconstrained model (i.e., where the correlation is freed). If the difference is significant, the correlation is different from 1 and you reject the constrained model. If the difference is not significant, the correlation is not different from 1 and you accept the constrained model.
• By comparing the fit statistics between this model (with the correlation fixed to 1) and the model without this constraint, you can test whether the correlation is significantly different from one.
• Nested models. A more rigorous (and more widely accepted) SEM-based alternative approach to discriminant validity is to run the model unconstrained (the correlation between two constructs is free) and also constraining the correlation between constructs to 1.0. If the two models do not differ significantly on a chi-square difference test, the researcher fails to conclude that the constructs differ (see Bagozzi et al., 1991). In this procedure, if there are more than two constructs, one must employ a similar analysis on each pair of constructs, constraining the constructs to be perfectly correlated and then freeing the constraints. This method is considered more rigorous than either the SEM measurement model approach or the AVE method.
• You can test the discriminant validity of the factors by constraining the correlation to unity (1, the restricted model) and then testing whether this model fits significantly worse than the one where the correlation is freely estimated (the unrestricted model). If the constrained model fits significantly worse than the unconstrained model (using the chi-square difference test), you can reject the hypothesis that the correlation is 1.0, thus providing some evidence for discriminant validity.
• Example. In a study of industrial relations, Deery, Erwin, & Iverson (1999) wrote, "The discriminant validity was tested by calculating the difference between one model, which allowed the correlation between the constructs (with multiple indicators) to be constrained to unity (i.e., perfectly correlated), and another model, which allowed the correlations to be free. This was carried out for one pair of constructs at a time. For example, in testing organizational commitment and union loyalty, the chi-square difference test between the two models (p<.001) affirmed the discriminant validity of the constructs. " See Deery, S., Erwin, P., & Iverson, R. (1999). Industrial relations climate, attendance behaviour, and the role of trade unions. British Journal of Industrial Relations 37(4): 533-558.
• Fixing the correlation between two factors at 1.0 (the more constrained model) sets up a null hypothesis of non-discriminance between the two factors. If the more constrained model has a significantly poorer fit, the null hypothesis is rejected (evidence of two separate factors).
• Nested models: one model with the correlation between the two latent constructs fixed to one, one model with the correlation allowed to vary freely. Then look at the result of the hierarchical chi-square test.
• The one-factor model is nested within the two-factor model (i.e., the former effectively constrains the correlation between the factors to unity), and therefore the two models can be compared using a chi-square difference test.
• Anderson and Gerbing (1988) state (p. 416): "Discriminant validity can be assessed for two estimated constructs by constraining the estimated correlation parameter [...] between them to 1.0 and then performing a chi-square difference test on the values obtained for the constrained and unconstrained models (Joreskog, 1971)." That is, they suggest that discriminant validity can be assessed by constraining the correlation between each pair of factors to unity. If the chi-square difference between the constrained and unconstrained models is statistically significant, it is likely that the correlation for the given pair of factors is indeed not one.
• A significantly lower chi-square value for the model in which the trait correlations are not constrained to unity would indicate that the traits are not perfectly correlated and that discriminant validity is achieved (Bagozzi & Phillips, 1982, p. 476).
• Comparison between the unrestricted model (i.e., in which the correlations among all constructs were freely estimated) and each of the models in which the correlation between one pair of constructs was restricted to equal 1 constituted a test of the discriminant validity of the two constructs. -- Mallard, A. G. C., & Lance, C. E. (1998). Development and evaluation of a parent-employee interrole conflict scale. Social Indicators Research, 45, 343-370
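To make the constrained-versus-unconstrained setup concrete, here is a sketch of the two model specifications in lavaan-style syntax (as used, e.g., by the Python package semopy; the factor and indicator names are invented). The latent variances are fixed to 1 so that the covariance between the factors is a correlation, exactly as the steps above describe:

```python
# Two lavaan-style model descriptions for a discriminant-validity test.
# Both scale the latents by fixing their variances to 1 (so F1 ~~ F2 is a
# correlation); the constrained model additionally fixes that correlation
# to 1. Factor/indicator names (F1, F2, x1..x6) are hypothetical.
# Note: most packages fix the first loading to 1 by default; per step 2
# above, that constraint must also be released (how varies by package).

unconstrained = """
F1 =~ x1 + x2 + x3
F2 =~ x4 + x5 + x6
F1 ~~ 1*F1
F2 ~~ 1*F2
"""

constrained = unconstrained + "F1 ~~ 1*F2\n"  # fix the correlation to unity

# Fit both (e.g., with semopy.Model), then compare chi-squares: a significant
# chi-square difference (df = 1) rejects the hypothesis that r = 1.
print("F1 ~~ 1*F2" in constrained)  # True
```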
Other important information:
• Logically, if two factors correlate perfectly, their correlations with other factors should be equal. However, constraining a factor correlation to unity does not generally produce estimates of equal magnitude for the correlations of the two factors with other factors in the model (this can be demonstrated empirically)
• Anderson, J. C., & Gerbing, D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103(3), 411-423.
• Kohli, A. K., Jaworski, B. J., & Kumar, A. (1993). MARKOR: A measure of market orientation. Journal of Marketing Research, 30(4), 467-477.
• Bagozzi, R. P., Yi, Y., & Phillips, L. W. (1991). Assessing construct validity in organizational research. Administrative Science Quarterly, 36(3), 421-458.
• Borsboom, D., Mellenbergh, G. J., & van Heerden, J. (2004). The concept of validity. Psychological Review, 111(4), 1061-1071.
• Fornell, C., & Larcker, D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18(1), 39-50.
• Torkzadeh, G., Koufteros, X., & Pflughoeft, K. (2003). Confirmatory analysis of computer self-efficacy. Structural Equation Modeling, 10(2), 263-275.
• Ware, Galassi, & Dew (1990). The Test Anxiety Inventory: A confirmatory factor analysis. Anxiety Research, 3, 205-212.

### Halo effect

• The halo effect: respondents rate people (or other things) whom they like or respect high on all scales, regardless of the person's actual performance.

### power analysis

• MacCallum, R. C., Browne, M. W., & Sugawara, H. M. (1996). Power analysis and determination of sample size for covariance modeling. Psychological Methods, 1, 130-149.
• Muthén, L. K., & Muthén, B. O. (2002). How to use a Monte Carlo study to decide on sample size and determine power. Structural Equation Modeling, 9(4), 599-620.

Power for multilevel SEM

### second order factor

• Second-order factors should be identified by fixing the variance to 1 -- but this is not correct for group-based analyses
• It would be best to scale the second-order latent by fixing one of the loadings to 1 instead of constraining the factor variance when estimating multigroup models (fixing the variances would lead to erroneous results).

### multitrait-multimethod

• There are simpler types of models for MTMM data, including ones with multiple but uncorrelated method factors, a single method factor specified to affect all the indicators, and a model that is sometimes called the correlated uniquenesses (CU) model, which uses measurement error covariances among indicators based on the same method to estimate common method effects. These kinds of models are not only more parsimonious than CTCM models, they may also be less subject to problems of identification.
• CTCU: the correlated-trait-correlated-uniqueness model (Marsh & Bailey, 1991; Marsh et al., 1992). The standard CTCU model explicitly incorporates the traits, but not the method factors. However, the EFFECTS of the method factors are included in the model through the correlated uniquenesses of the relevant indicators.
• The expression "correlated uniqueness" gives the impression that the method effects are not defined as common factors in the CTCU. In fact, they are defined as common factors.
• The structure of the method factors is not incorporated in the CTCU model. There is NO assumption that the method factor is the same for all measures employing the SAME method. Rather, the CTCU model assumes that each observed indicator has a unique method effect, and that the degree of covariation between measures using the same method suggests the extent to which a common method factor is plausible.
• While you might say that the expression "correlated uniqueness" gives the impression that the method effect is not defined as common factors in the CTCU, it is probably more accurate to note that the CTCU model doesn't rule out that possibility.
• The formal modeling of the latent structure of method factors in an ordinary MTMM analysis may be too restrictive in some instances, and result in Heywood cases, poor fit, or non-convergence. I believe that this is why Marsh developed the CTCU model in the first place.
• If you believe there are three method factors, and that these method factors are correlated, the CTCU model doesn't help you test that proposition. The model won't enable you to assess whether one factor underpins a given set of method items or whether there are x method factors for that set of items. It also won't enable you to assess whether the 3 method factors are correlated (although the modification indices may suggest this).
• Which MTMM model is appropriate in which domain? The CTCU model may well be useful when researchers (for some reason) are happy to accept the limitations it places on the interpretation of the structure of either the traits or the method factors. So, by moving toward CU-type models we are probably giving something up, since less is being posited. As I said, the biggest issue I see arising in model selection for the analysis of MTMM matrices (beyond identification) is consistency with theory within the domain of application.

• Kenny, D. A., & Kashy, D. A. (1992). Analysis of the multitrait-multimethod matrix by confirmatory factor analysis. Psychological Bulletin, 112, 165-172.
• Marsh, H. W. (1989). Confirmatory factor analysis of multitrait-multimethod data: Many problems and a few solutions. Applied Psychological Measurement, 13, 335-361.
• Marsh, H. W., & Bailey, M. (1991). Confirmatory factor analysis of multitrait-multimethod data: A comparison of alternative models. Applied Psychological Measurement, 15, 47-70.
• Marsh, H. W., Byrne, B. M., & Craven, R. (1992). Overcoming problems in confirmatory factor analyses of MTMM data: The correlated uniqueness model and factorial invariance. Multivariate Behavioral Research, 27, 489-507.
• Marsh, H. W., & Grayson, D. (1995). Latent variable models of multitrait-multimethod data. In R. H. Hoyle (Ed.), Structural equation modeling: Issues and applications (pp. 177-198). Thousand Oaks, CA: Sage.
• Saris, W. E., & Aalberts, C. (2003). Different explanations for correlated disturbance terms in MTMM studies. Structural Equation Modeling, 10(2), 193-213.
• Saris, W. E., & Andrews, F. M. (1991). Evaluation of measurement instruments using a structural modeling approach. In P. P. Biemer, R. M. Groves, L. E. Lyberg, & N. A. Mathiowetz (Eds.), Measurement errors in surveys. New York: Wiley.

## Saturday, May 05, 2007

### uniqueness or error variance, (un) reliability

• Error variance of the measured variable reflects score unreliability
• random error variance is defined as one minus the reliability of the measure used, for example, if reliability of the measure is 0.7, then random error variance for the measure is 1-0.7=0.3
• The R2 associated with an indicator can be interpreted as a reliability coefficient
• If one variable has a pattern/structure coefficient of 0.7, this variable has 49% of its variance in common with the factor
• If the variance of the measured variable is 1.847 and the estimated error variance of this variable is 0.178, the unreliability of this measured variable is 0.178/1.847 = 0.096 (9.6%). The corresponding reliability of this measured variable is (1.847 - 0.178)/1.847 = 1.669/1.847 = 0.904 (90.4%).
• measurement error variances are usually taken as being random error
• Sometimes error variances may be correlated (correlated errors), e.g., due to common method; one variable measured in 1998, the other measured in 1999; negatively worded items
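The arithmetic in these bullets is easy to script; a sketch using the numbers above:

```python
# Indicator reliability from the observed variance and the estimated error
# variance: reliability = (var - error_var) / var; unreliability = error / var.

def indicator_reliability(observed_var, error_var):
    """Proportion of an indicator's variance that is not error."""
    return (observed_var - error_var) / observed_var

# The example from the notes: var = 1.847, error variance = 0.178.
rel = indicator_reliability(1.847, 0.178)
print(round(rel, 3))      # 0.904 -> 90.4% reliable
print(round(1 - rel, 3))  # 0.096 -> 9.6% unreliability
```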

### EFA

• In EFA, orthogonal solutions almost always provide simple structure
• Pearson product-moment bivariate correlation matrix is the matrix of associations most commonly used in EFA
• CFA uses the covariance matrix; EFA uses the correlation matrix. The reason for this standardization is that the scales of tests used in educational, sociological, and psychological research are usually arbitrary, and thus not commensurable; standardizing puts them on a common metric. The components obtained from a correlation matrix and those obtained from a covariance matrix are not the same.
• In EFA, we almost always use only standardized factor pattern coefficients.
• In CFA, both unstandardized and standardized pattern coefficients are computed.
• When factors are uncorrelated, the CFA standardized factor pattern coefficients and the structure coefficients for given variables on given factors exactly equal each other
• If using oblique (correlated) rotation, pattern coefficients and structure coefficients are different. Both of them should be reported.
• Factor scores (latent variable scores) are composite scores for each person on each factor; use the "factor score matrix output" option in SPSS. Factor scores can be used in subsequent analyses instead of the measured variable scores. There are different factor score methods (regression, Bartlett, Anderson-Rubin). When principal component analysis is the extraction method, the regression, Bartlett, and Anderson-Rubin factor scores for a given person on a given factor will all be the same; with other extraction methods, they will differ.

Factor extraction method: principal components analysis vs. factor analysis

• Principal components analysis is the default factor extraction method. Generally, we prefer principal components analysis because it is 1) a psychometrically sound procedure, 2) mathematically simpler than factor analysis, and 3) common factor analysis can suffer from the factor indeterminacy problem, which is troublesome. Principal components analysis partitions the total variance in the original set of variables by finding components that account for the largest amount of variance. "Most of the variance" means about 75% or more, and often this can be accomplished with five or fewer components.
• principal axes factor analysis

Factor rotation: in some cases, simple structure can't be obtained using orthogonal rotation. The researcher then must turn to oblique rotation to pursue simple structure.

Orthogonal rotation (uncorrelated factors): when factors are rotated orthogonally, the pattern coefficient and structure coefficient matrices contain exactly the same numbers. In an orthogonal rotation, if one variable has a pattern/structure coefficient of 0.7, it has 49% of its variance in common with the factor; if another variable has a pattern/structure coefficient of 0.5, it has only 25% common variance with the factor, so the first variable has roughly double the influence on the factor compared with the second.

• Varimax (most commonly used): cleans up the factors, i.e., each factor tends to load high on a smaller number of variables and low or very low on the other variables, which makes interpretation of the resulting factors easier. When we use varimax rotation, the first rotated factor will no longer necessarily account for the largest amount of variance.
• Quartimax: cleans up the variables, so that each variable loads mainly on one factor.

Oblique rotation (correlated factors): if simple structure can't be obtained with orthogonal rotation, reflected in many variables having pattern/structure coefficients that are large in magnitude on two or more factors, oblique rotation may be needed to derive interpretable factors. However, more parameters must be estimated in oblique rotation; thus, oblique results not only tend to fit sample data better but also tend to yield factors that replicate less well. Unless oblique results are substantially better than orthogonal results, orthogonal results may be preferred. Whenever oblique rotation is performed, higher-order factors are implied (i.e., the higher-order factor explains the correlations among the first-order factors) and should be derived and interpreted. When factors are rotated obliquely, the pattern coefficients and structure coefficients are different. Many researchers suggest that correlated factors are much more reasonable to assume in most cases. There is no such thing as a "best" oblique rotation.

• Promax(most commonly used)
• direct oblimin, oblimax, quartimin, maxplane, orthoblique

For oblique rotation, we get two resulting matrices, and the two matrices are different. Some researchers suggest that we need to report both. (For orthogonal rotation, the two matrices are the same.)

• factor pattern matrix---the elements are analogous to standardized regression coefficients from a multiple regression analysis, indicating the importance of that variable to the factor with the influence of the other variables partialled out
• factor structure matrix--- the elements are the simple correlations of the variables with the factors, i.e., the factor loadings. Note: a loading is simply the Pearson correlation between a variable and a factor (a linear combination of variables).
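The pattern/structure distinction can be shown in a few lines of numpy. For obliquely rotated factors, structure = pattern × factor correlation matrix, so the two matrices coincide only when the factors are uncorrelated. The loadings and factor correlation below are made-up illustrative numbers, not output from any real analysis:

```python
import numpy as np

# Hypothetical pattern matrix P (6 variables x 2 factors) and
# factor correlation matrix Phi -- illustrative values only.
P = np.array([
    [0.80, 0.05],
    [0.75, 0.10],
    [0.70, 0.00],
    [0.05, 0.85],
    [0.10, 0.80],
    [0.00, 0.75],
])
Phi = np.array([[1.0, 0.4],
                [0.4, 1.0]])

# Oblique rotation: structure matrix = pattern matrix @ Phi.
S = P @ Phi

# Orthogonal rotation: Phi is the identity, so structure == pattern.
S_orth = P @ np.eye(2)
print(np.allclose(S_orth, P))  # True
```

Note how variable 1's structure coefficient on factor 2 (0.80 × 0.4 + 0.05 = 0.37) is much larger than its pattern coefficient (0.05): the correlation between the factors inflates the simple variable-factor correlations, which is why both matrices are worth reporting.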

How many factors to retain?

• retain components whose eigenvalues are greater than 1 (Kaiser criterion)
• scree test
• retain factors that would account for at least 70% of the total variance in the original variables
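The first and third criteria are easy to compute directly from the eigenvalues of the correlation matrix. A quick numerical sketch, using a made-up 4-variable correlation matrix (two pairs of correlated variables):

```python
import numpy as np

# Illustrative correlation matrix for 4 variables (made-up values).
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])

# Eigenvalues in descending order (R is symmetric).
eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: keep components with eigenvalue > 1.
n_kaiser = int(np.sum(eigvals > 1))

# Variance criterion: keep enough components to explain >= 70% of
# total variance (total variance = number of variables, since each
# standardized variable contributes variance 1).
cum_var = np.cumsum(eigvals) / eigvals.sum()
n_var = int(np.argmax(cum_var >= 0.70) + 1)

print(eigvals, n_kaiser, n_var)
```

For this matrix the eigenvalues are 1.8, 1.4, 0.4, 0.4, so both criteria agree on retaining two components; on real data the criteria often disagree, which is why the scree test is used as a third check.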

Subject & variable ratio

• five subjects per variable as the minimum

Bartlett's sphericity test

• this tests the null hypothesis that the variables in the population correlation matrix are uncorrelated. If one fails to reject the null hypothesis with this test, there is no reason to do a components analysis, since the variables are already uncorrelated.
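Bartlett's statistic can be computed directly from the sample correlation matrix. A minimal sketch using the standard formula χ2 = -(n - 1 - (2p + 5)/6) ln|R| with df = p(p - 1)/2; the correlation matrix and sample size below are made up for illustration:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(R, n):
    """Bartlett's test of sphericity.

    H0: the population correlation matrix is the identity
    (the variables are uncorrelated).
    R: sample correlation matrix (p x p); n: sample size.
    """
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    p_value = chi2.sf(stat, df)
    return stat, df, p_value

# Illustrative 4-variable correlation matrix, n = 100 (made-up).
R = np.array([
    [1.0, 0.6, 0.1, 0.1],
    [0.6, 1.0, 0.1, 0.1],
    [0.1, 0.1, 1.0, 0.6],
    [0.1, 0.1, 0.6, 1.0],
])
stat, df, p_value = bartlett_sphericity(R, n=100)
print(stat, df, p_value)  # p is tiny, so reject H0 and proceed
```

With these correlations the test easily rejects the null hypothesis, so a components analysis would be justified; if R were near the identity, |R| would be near 1, the statistic near 0, and the analysis pointless.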

• James Stevens (2002), EFA and CFA, chapter 11, in Applied Multivariate Statistics for the Social Sciences, 4th ed., Lawrence Erlbaum. Provides LISREL & EQS examples for running CFA.

## Thursday, May 03, 2007

### chi square difference test for nested SEM models

• Smaller χ2 values indicate a better-fitting model; a nonsignificant χ2 (p > .05 or p > .01) is desirable.
• If the difference between two nested SEM models is significant, this implies that the model with more paths explains the data better. If there is no significant difference between two nested models, this implies that the more parsimonious model explains the data equally well compared to the fuller model and is preferred.
• If the fit of the more restricted model (the model with fewer free parameters) is about as good as that of the more general model (the model with more free parameters), the restrictions can probably be accepted, i.e., the simpler model is chosen and the more complex model is rejected.
• Conduct a chi-square difference test, where χ2(1) - χ2(2) is the chi-square difference, examined with df = df1 - df2. If the difference is significant, favor the model with the smaller chi-square. If not, favor the more parsimonious model.
• (1) Subtract the chi-square for the less restrictive model from the chi-square for the more restrictive model. (2) Subtract the degrees of freedom for the less restrictive model from the degrees of freedom for the more restrictive model. (3) Refer the chi-square difference obtained in (1) to the chi-square distribution using the df obtained in (2).
• Yuan, K.-H., & Bentler, P.M. (2004). On chi-square difference and z-tests in mean and covariance structure analysis when the base model is misspecified. Educational and Psychological Measurement, 64, 737-757.
• Model trimming

Model 1 has more free parameters, better fit, a lower chi-square value, and lower degrees of freedom: χ2 = 2.28, df = 1.

Model 2 has additional restrictions and fewer free parameters (model 2 is the more restricted, more parsimonious model), worse fit, a higher chi-square value, and higher degrees of freedom: χ2 = 15.96, df = 3.

Model 2 is nested within model 1 (model 2's free parameters are a subset of model 1's free parameters).

The difference between the two χ2 values is also distributed as a χ2, with degrees of freedom equal to the difference between the degrees of freedom for the two models: χ2 = 15.96 - 2.28 = 13.68, df = 3 - 1 = 2.
Have the additional constraints in model 2 significantly reduced the model's ability to fit the data (i.e., is the increase in χ2, reducing model fit, significant)?
Checking the χ2 table, we find p < .01 (significant): the fit of the model has been significantly hindered by introducing the additional constraints (the restrictions imposed in the more constrained model result in a significant decrement in model fit), so we choose model 1, even though model 2 is more parsimonious. If instead the χ2 table gave p > .01 (nonsignificant), the fit would not have been significantly hindered by the additional constraints (the chi-square difference is not significant, so one can retain the hypothesis that the imposed constraints are valid), and we would choose model 2: its added constraints would not significantly reduce model fit, and model 2 is more parsimonious than model 1.
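Instead of looking up a χ2 table, the worked example can be checked with a few lines of scipy (the χ2 and df values come straight from the example above):

```python
from scipy.stats import chi2

# Model 1 (less restricted): chi2 = 2.28, df = 1.
# Model 2 (more restricted): chi2 = 15.96, df = 3.
chi2_diff = 15.96 - 2.28   # 13.68
df_diff = 3 - 1            # 2

# p-value of the difference, and the critical value at alpha = .01.
p_value = chi2.sf(chi2_diff, df_diff)
crit = chi2.ppf(0.99, df_diff)  # about 9.21 for df = 2

if chi2_diff > crit:
    # The restrictions significantly worsen fit: keep the fuller model.
    choice = "model 1"
else:
    # No significant decrement: prefer the more parsimonious model.
    choice = "model 2"
print(choice, p_value)
```

Here p ≈ .001 < .01, so the code picks model 1, agreeing with the table lookup in the text.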

Model building

1. If one has an initially poor model with a high χ2, the next step is to free one or more parameters (adding paths, reducing the number of model constraints) to improve the model's fit. If the reduction in χ2 (the χ2 difference) is large relative to the difference in d.f. between the two models, we have achieved a significant improvement in fit.
2. Parameters can be freed sequentially, i.e., extra paths added, and the resulting successive models compared to assess whether freeing a particular parameter led to a significant improvement in model fit (more parameters--better fit--lower chi-square value).

• Kumar, A., & Sharma, S. (1999). A metric measure for direct comparison of competing models in covariance structure analysis. Structural Equation Modeling, 6, 169-197.
• Rigdon, E. E. (1999). Using the Friedman method of ranks for model comparison in structural equation modeling. Structural Equation Modeling, 6, 219-232.
• Steiger, J. H., Shapiro, A., & Browne, M. W. (1985). On the multivariate asymptotic distribution of sequential chi-square statistics. Psychometrika, 50, 253-264.
• Cheung & Rensvold (2002). Evaluating goodness-of-fit indexes for testing measurement invariance. Structural Equation Modeling, 9(2), 233-255.
• Kelloway (1995). Structural equation modeling in perspective. Journal of Organizational Behavior, 16, 215-224.
• Brannick (1995). Critical comments on applying covariance structure modeling. Journal of Organizational Behavior, 16, 201-213.