ESRA 2019: CfA "Contemporary issues in the assessment of measurement invariance" (Seddig, Daniel)
Dear Colleagues,
We would like to invite you to submit an abstract for the session
"Contemporary issues in the assessment of measurement invariance"
at the 8th Conference of the European Survey Research Association, 15th-19th July 2019 in Zagreb, Croatia.
To submit an abstract, you must log in to your ESRA account (or create a new account if you do not already have one) and then follow the instructions provided. Please note that it is only possible to submit two abstracts as first author/presenter. The deadline for submission is 18th November 2018.
https://www.europeansurveyresearch.org/conferences/call_for_abstracts
https://www.europeansurveyresearch.org/conferences/register
Best regards,
Daniel Seddig (University of Cologne & University of Zurich)
Eldad Davidov (University of Cologne & University of Zurich)
Peter Schmidt (University of Giessen)
Session abstract:
The assessment of the comparability of cross-national and longitudinal survey data is a prerequisite for meaningful and valid comparisons of substantive constructs across contexts and time. A powerful tool to test the equivalence of measurements is multiple-group confirmatory factor analysis (MGCFA). Although measurement invariance (MI) testing procedures are increasingly used by applied researchers, several issues remain under discussion and are not yet resolved. For example:
(1) Can we trust models with small deviations (approximate MI)? Is partial MI sufficient? How should one deal with the lack of scalar MI, as is the case in many large-scale cross-national surveys?
(2) How should one decide whether a model with a high level of MI is to be preferred over a model with a lower level of MI? Which fit indices should be used?
(3) Is MI needed at all, or would it be better to start with a robustness analysis?
Recent approaches have tackled the issues subsumed under (1) and aimed at relaxing certain requirements when testing for measurement invariance (Bayesian approximate MI, Muthén and Asparouhov 2012; van de Schoot et al. 2013) or using the alignment method (Asparouhov and Muthén 2014). Furthermore, researchers have addressed the issues subsumed under (2) and recommended the use of particular fit statistics (e.g., CFI, RMSEA, SRMR) to decide among competing models (Chen 2007). The question raised under (3) is a more general one and concerns the contemporary use of the concept of MI. Researchers (Welzel and Inglehart 2016) have argued that variations in measurements across contexts can be ignored, for example in the presence of theoretically reasonable associations of a construct with external criteria.
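As an illustration (not part of the call itself), the kind of fit-index comparison discussed under (2) can be sketched in Python. The cutoff values below reflect the commonly cited Chen (2007) rules of thumb for comparing a constrained (e.g., metric) model against a less constrained (e.g., configural) baseline; the function name, data structure, and example fit values are our own illustrative choices, and the cutoffs should be treated as heuristics rather than hard thresholds.

```python
from dataclasses import dataclass


@dataclass
class Fit:
    """Global fit indices of one MGCFA model (illustrative container)."""
    cfi: float
    rmsea: float
    srmr: float


def supports_invariance(baseline: Fit, constrained: Fit,
                        max_cfi_drop: float = 0.010,
                        max_rmsea_rise: float = 0.015,
                        max_srmr_rise: float = 0.030) -> bool:
    """Heuristic decision rule in the spirit of Chen (2007):
    accept the more constrained model if its fit does not deteriorate
    beyond the given cutoffs relative to the baseline model."""
    return (baseline.cfi - constrained.cfi <= max_cfi_drop
            and constrained.rmsea - baseline.rmsea <= max_rmsea_rise
            and constrained.srmr - baseline.srmr <= max_srmr_rise)


# Hypothetical example: configural vs. metric (loadings constrained) model
configural = Fit(cfi=0.962, rmsea=0.048, srmr=0.041)
metric = Fit(cfi=0.957, rmsea=0.052, srmr=0.055)
print(supports_invariance(configural, metric))  # True under these cutoffs
```

In practice, the indices would come from SEM software (e.g., lavaan or Mplus), and such cutoffs are typically used alongside, not instead of, substantive judgment and likelihood-ratio tests.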
This session aims to present studies that assess measurement invariance and/or address one of the issues listed above or related ones. We welcome (1) applied presentations that make use of empirical survey data, and/or (2) presentations that take a methodological approach to measurement invariance testing, for example using Monte Carlo simulations to study the above-mentioned issues.
--
Dr. Daniel Seddig
University of Cologne | Institute of Sociology and Social Psychology &
University of Zurich | Department of Psychology (Statistical Consulting)
dseddig@uni-koeln.de
http://www.iss-wiso.uni-koeln.de/en/institute/staff/s/dr-daniel-seddig/