Personality is often conceptualized in dimensional terms. Researchers have previously attempted to measure response styles, that is, the systematic preference for or avoidance of specific response categories when rating attitudinal and personality items (Paulhus, 1991), including acquiescence (the tendency for participants to agree with both regular and reversed items on a measure; Plieninger & Heck, 2018) and evaluation (participants' gauging of items as good or bad; Bäckström & Björklund, 2014; Saucier, 1994). Recent literature (e.g., Biderman et al., 2018, 2019) has incorporated acquiescence and evaluation into traditional factor analysis, creating bifactor personality models, in which a single general factor reflects common variance among all scale items and group factors reflect additional common variance among clusters of items with similar content (Reise, 2012). Because these studies demonstrated how evaluation and acquiescence factors can be implemented in bifactor models of personality, and because a newer class of network analysis allows latent factors to be included within a network, this project estimated six models of the Five Factor Model: two latent network models (LNMs), two residual network models (RNMs), and two factor models (i.e., confirmatory factor analyses), with one model in each pair accounting for response styles (i.e., acquiescence and evaluation) and the other not. Fit statistics were compared across all models to determine which provided the most accurate structure of personality measurement. Models were estimated with previously collected online International Personality Item Pool (IPIP) data. I hypothesized that the network and bifactor models accounting for response styles would show better fit than the models that did not, suggesting that models accounting for these factors provide more accurate interpretations of personality structure. Results showed that the LNMs and factor models accounting for response styles fit marginally better than their counterparts that did not. The RNMs were the best-fitting models overall. These findings suggest that accounting for response styles marginally improves model fit and that RNMs provide a more accurate measurement of personality structure in cross-sectional IPIP personality data. Implications are discussed.
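As a brief sketch of how response styles can enter such a measurement model (one common specification in this literature, not necessarily the exact parameterization estimated in this project): for a response $y_i$ to item $i$ belonging to trait factor $F_{t(i)}$, a bifactor-style model with evaluation and acquiescence factors can be written as

$$ y_i = \tau_i + \lambda_i F_{t(i)} + e_i E + a A + \varepsilon_i, $$

where $\lambda_i$ is the item's loading on its substantive trait factor, $E$ is a general evaluation factor with freely estimated loadings $e_i$, and $A$ is an acquiescence factor whose loadings are typically fixed to a common value (e.g., $a = 1$) for both regular and reversed items so that it captures agreement independent of item keying. The models that do not account for response styles simply omit the $E$ and $A$ terms.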