- Maximum likelihood (ML) is one of the most frequently used methods for model estimation and evaluation. The ML method is based on the assumption that the data are continuous and multivariate normally distributed, an assumption that is frequently violated, especially when categorical data are analyzed. Furthermore, ML cannot provide reliable inference when the number of variables in an analysis becomes large.
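The normal-theory ML discrepancy function that underlies this approach can be sketched as follows. This is a minimal numpy illustration, not tied to any particular SEM package; the sample covariance `S` and model-implied covariance `Sigma` are assumed to be supplied by the user:

```python
import numpy as np

def f_ml(S, Sigma):
    """Normal-theory ML discrepancy: log|Sigma| + tr(S Sigma^-1) - log|S| - p."""
    p = S.shape[0]
    _, logdet_sigma = np.linalg.slogdet(Sigma)
    _, logdet_s = np.linalg.slogdet(S)
    return logdet_sigma + np.trace(S @ np.linalg.inv(Sigma)) - logdet_s - p

# When the model reproduces the sample covariance exactly, the discrepancy is 0
S = np.array([[1.0, 0.5],
              [0.5, 1.0]])
print(f_ml(S, S))  # → 0.0 (up to floating-point error)
```

Multiplying the minimized discrepancy by N − 1 yields the model chi-square statistic, which is why violations of the normality assumption propagate directly into the test statistic.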
- Normal-theory estimators show limited robustness to violations of normality: in the presence of nonnormality, parameter estimates are typically unbiased, but values of the chi-square test statistic and other fit indexes are adversely affected, and standard errors are attenuated. Under coarse categorization, chi-square values are typically inflated when only two response categories are used, and this bias decreases as the number of categories increases. The bias is exacerbated when the distributions of the categorized variables are nonnormal, with variables of opposite skew producing the worst results.
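The distortion produced by coarse categorization can be illustrated with a small simulation. The setup below is illustrative and assumed (bivariate normal data with a known correlation, dichotomized by a median split into two response categories):

```python
import numpy as np

rng = np.random.default_rng(0)
rho = 0.6
n = 200_000

# Bivariate normal data with true correlation rho
cov = [[1.0, rho], [rho, 1.0]]
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Correlation computed on the continuous variables recovers rho
r_cont = np.corrcoef(x, y)[0, 1]

# Median-split dichotomization: only two response categories remain
r_dich = np.corrcoef(x > 0, y > 0)[0, 1]

print(round(r_cont, 2))  # close to the true 0.60
print(round(r_dich, 2))  # attenuated, near (2/pi)*arcsin(rho) ≈ 0.41
```

The dichotomized correlation systematically understates the true association, which is one source of the misfit that coarse categorization introduces into normal-theory analyses.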
- Although the measurement parameter, structural disturbance, and coefficient estimates produced by the maximum likelihood (ML) estimation approach are robust to departures from normality (Bollen, 1989; Chou, Bentler, & Satorra, 1991), the chi-square statistic and the standard errors used in ML significance tests may not be robust to departures from normality (Bollen, 1989). These estimates include the λs (factor loadings: λx for indicators of exogenous variables, λy for indicators of endogenous variables), θs (measurement-error variances: θδ for exogenous indicators, θε for endogenous indicators), φs (covariances among the latent exogenous variables ξ), ψs (covariances among the structural disturbances ζ), γs (causal paths, structural parameters relating a latent exogenous variable to a latent endogenous variable), and βs (causal paths, structural parameters relating one latent endogenous variable to another).
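The parameters listed above belong to the standard LISREL formulation (following Bollen, 1989), whose measurement and structural equations can be written as:

```latex
% Measurement model for exogenous (x) and endogenous (y) indicators
x = \Lambda_x \xi + \delta, \qquad y = \Lambda_y \eta + \varepsilon

% Structural model relating the latent variables
\eta = B\eta + \Gamma\xi + \zeta

% Covariance parameter matrices
\Theta_\delta = \operatorname{Cov}(\delta), \quad
\Theta_\varepsilon = \operatorname{Cov}(\varepsilon), \quad
\Phi = \operatorname{Cov}(\xi), \quad
\Psi = \operatorname{Cov}(\zeta)
```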