***Apologies for cross-posting
Dear colleagues,
We are excited to invite submissions for our upcoming session, "Exploring Agent-based Interviewing in Web Surveys" (https://www.europeansurveyresearch.org/conf2025/sessions.php?sess=26), at the 11th conference of the European Survey Research Association (ESRA), which takes place at Utrecht University, The Netherlands, from July 14 to July 18, 2025.
The session is organized by Jan Karem Höhne (DZHW, Leibniz University Hannover), Marco Angrisani (University of Southern California), Frederick Conrad (University of Michigan), Arie Kapteyn (University of Southern California), and Florian Keusch (University of Mannheim).
Please submit your abstract (max. 300 words) via the ESRA conference management system (https://www.europeansurveyresearch.org/conf2025/) by December 20, 2024.
Session Description: Web surveys continue to replace other survey modes, especially in-person interviews. Even large-scale surveys, such as the European Social Survey and the National Longitudinal Study of Adolescent to Adult Health, now routinely collect data via web surveys. However, the absence of interviewers complicates the provision of assistance to respondents and the creation of trust and motivation, raising concerns about answer quality. The advent of Generative Artificial Intelligence makes it possible to build interviewing agents that are visually realistic and conversationally responsive, deriving the latter ability from Large Language Models. Embedding such agents in web surveys promises to restore some of the quality-enhancing contributions of human interviewers. Intelligent agents can clarify questions and provide feedback beyond what is typical in text-based web surveys, and their mere presence can reduce speeding and non-differentiation, although it may also introduce social desirability bias. Because respondents can choose an agent, this may foster rapport and help to overcome social desirability bias. These innovations not only give web surveys a human touch but also make them more inclusive. Individuals with low literacy and education, or who are not skilled speakers and/or readers of the survey language (e.g., immigrants and refugees), may be more likely to participate if they see (or hear) an agent that looks (or sounds) like them. Similarly, those with sensory challenges, especially the elderly, may favor verbal communication with a realistic-looking, conversational agent over text-based communication. In this session, we invite studies on all kinds of interviewing agents, not just those described here. Studies may be conducted in various settings (lab or field) and with different designs (cross-sectional or longitudinal). Contributions on legal and ethical considerations when using agent-based interviewing are also welcome, as are studies that are still work in progress.
If you have any questions about the session, please do not hesitate to reach out.
We are looking forward to your submissions.
Best regards,
Jan Karem Höhne
German Centre for Higher Education Research and Science Studies (DZHW) | Lange Laube 12 | 30159 Hannover | Germany | www.dzhw.eu
Prof. Dr. Jan Karem Höhne
Head of CS3 Lab for Computational Survey and Social Science
Leibniz University Hannover
German Centre for Higher Education Research and Science Studies (DZHW)
Research Area 4: Research Infrastructure and Methods
Tel. +49 511 450670-458 | Fax +49 511 450670-960
www.jkhoehne.eu
***Apologies for cross-posting
Dear colleagues,
We are excited to invite submissions for our upcoming session, "Number of hours usually worked? Methodological challenges in accurately measuring working time" (https://www.europeansurveyresearch.org/conf2025/sessions.php?sess=96), at the 11th conference of the European Survey Research Association (ESRA) (https://www.europeansurveyresearch.org/conference/utrecht-2025/call-for-abstracts/), which takes place at Utrecht University, The Netherlands, from July 14 to July 18, 2025.
The session is organized by Carolin Deuflhard (Humboldt-Universität zu Berlin) and Lena Hipp (University of Potsdam/WZB Berlin Social Science Center).
Please submit your abstract (max. 300 words) via the ESRA conference management system (https://www.europeansurveyresearch.org/conf2025/) by December 20, 2024.
Session Details: In recent years, working time has become increasingly polarized in terms of who works how much, when, and where. This shift is driven by structural, institutional, and demographic changes, as well as exogenous shocks, most recently the COVID-19 pandemic. Against this backdrop, the session aims to stimulate a discussion on the methodological challenges and promises of old and new measurements of working hours.
How accurate are standardized survey questions on "hours usually worked" when employees work remotely or on flexible schedules, hold zero-hour or multiple contracts, or are paid based on output rather than hours? For which groups of workers do standard survey questions produce more reliable results, and for which groups less reliable ones? How can these challenges be overcome? Can digital trace data and alternative survey questions help to accurately measure the time people spend on paid (and unpaid) work? What potential do survey experiments have for informing measurement strategies?
Session presentations can cover a broad range of issues in the field of measuring working time. Priority will be given to contributions that a) compare the advantages and shortcomings of different measurement strategies, b) focus on innovative approaches for measuring working time, c) address the peculiarities and challenges of measuring the working time of (specific groups of) nonstandard employees, and d) discuss the potential and problems of different measurement strategies for uncovering inequalities in working time based on gender, class, and race.
If you have any questions about the session, please do not hesitate to reach out.
We are looking forward to your submissions.
Best regards,
Lena Hipp and Carolin Deuflhard