Title: Modeling Process Data with Explanatory Item Response Models to Understand Faking in Questionnaires

Authors: Susanne Frick, Miriam Fuechtenhans, Anna Brown

Abstract: Process data are collected along with the responses of primary interest and provide information about the response process. Modelling approaches for process data are often tailored to the specific type of data collection. In this study, we model process data simply with explanatory item response models (or, equivalently, cross-classified multilevel models) with appropriate distributions for the responses. We illustrate how these models can be used to better inform test construction by applying the approach to the case of impression management (also known as faking) in high-stakes situations. Previous research has shown that faking resistance depends on item desirability and on desirability matching. Process data, such as response times or changes made to the initial response, can help to understand the process of faking. The empirical objective of this study is to investigate how item and person characteristics related to faking manifest in response editing in questionnaires. We re-analysed six datasets, all of which contain responses to rating-scale and forced-choice questionnaires under a manipulation of stakes, together with process data such as response latency and number of clicks. Item- and block-level predictors were derived from desirability ratings obtained from separate samples. To each outcome variable, we fitted models that account for variance due to both items (or blocks) and persons, with log-normal distributions for response latencies and Conway-Maxwell-Poisson distributions for numbers of clicks. All models were of the Rasch type, that is, all items were assumed to be equally discriminating. We found shorter response times for forced-choice blocks that were less well matched in desirability, although this effect was significant in only one study. The effect of ambiguous items differed between the rating-scale and forced-choice formats. However, most interactions of the item covariates with stakes were not significant. From a psychometric perspective, this study can inform further developments in the analysis of process data. From a practical perspective, the results of this research can inform the development of fake-resistant assessments and facilitate the evaluation of the impact of faking on current assessments.
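
As a minimal sketch of the modelling approach described in the abstract: the cross-classified structure for response latencies could be written as below, where the notation (person effect $\theta_p$, item or block effect $\beta_i$, desirability-based covariate $x_i$, stakes indicator $s_p$) is illustrative and assumed for this sketch rather than taken from the study itself.

% Illustrative log-normal explanatory item response (cross-classified) model for response latencies;
% symbols are assumptions for illustration, not the authors' notation.
\begin{align}
\log T_{pi} &\sim \mathcal{N}\!\left(\mu_{pi}, \sigma^2\right) \\
\mu_{pi} &= \mu + \theta_p + \beta_i + \gamma_1 x_i + \gamma_2 s_p + \gamma_3 \, x_i s_p \\
\theta_p &\sim \mathcal{N}(0, \sigma_\theta^2), \qquad \beta_i \sim \mathcal{N}(0, \sigma_\beta^2)
\end{align}

Here $T_{pi}$ would be the latency of person $p$ on item or block $i$, $x_i$ an item- or block-level covariate derived from the desirability ratings (for example, desirability matching of a forced-choice block), and $s_p$ an indicator of high versus low stakes; the crossed random effects $\theta_p$ and $\beta_i$ account for person and item or block variance, consistent with a Rasch-type (equal-discrimination) specification. For the click counts, the same linear predictor would enter a Conway-Maxwell-Poisson model through a log link in place of the log-normal response distribution.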