Title: Quantifying Effects of Rating-Scale Response Bias in Correlational Analyses

Authors: Max Welz, Aurore Archimbaud, Andreas Alfons

Abstract: Responses to rating-scale items are often plagued by biases stemming from content-responsive faking (such as malingering or socially desirable responding) or content nonresponsivity (particularly careless responding). While there is consensus that response biases can jeopardize the validity of survey measures through a variety of psychometric issues, their exact effects have yet to be statistically quantified. Leveraging robustness theory, we study the statistical properties of response biases in survey data. In particular, we derive bias curves and breakdown values of survey measures, with a focus on correlational measures due to their key role in factor analyses and structural equation models. Furthermore, we study how the adverse effects of response biases can be mitigated by survey design, for instance through the number of answer categories, the number of items in a measure, and construct reliability. We find that even a low prevalence of response biases can render survey measures fundamentally invalid. In addition, we show how comparatively short survey measures with a balanced number of negatively worded items can enhance robustness against response biases. Finally, we provide freely available software in R for the computation and visualization of bias curves in survey measures.