
Seminars

Studying Response Sets and DIF With Direct Questions and Randomized Response Methods

Abstract

How can we collect and analyze data on topics that most people regard as private? Most studies assume that respondents provide honest information. This assumption becomes questionable, however, when researchers ask questions that most people would be reluctant to answer publicly, for example, about illegal or stigmatized activities. In this talk, I will discuss IRT models for the analysis of sensitive personal data that take into account response sets in the form of cheating and positive self-presentation. Two studies are presented: one in the context of investigations of compliance with social-benefit regulations, and a second on illegal downloading and file-sharing behavior. The social-benefit study is based on a 2 x 2 factorial design varying the mode of administration (computer-assisted or face-to-face interview) and the questioning technique (randomized response or direct question). In the second study, on downloading and file-sharing behavior, we use the same questioning techniques but vary the mode of administration, comparing online and onsite responses with respect to response times, DIF, and response biases. Both studies show that computer-assisted randomized-response methods yield the highest reported incidence rates and the lowest levels of response bias.
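
To illustrate the basic logic of a randomized-response design: under the classic Warner (1965) procedure, each respondent answers the sensitive question with a known probability p and its negation otherwise, so the observed proportion of "yes" answers can be inverted to recover the population prevalence. The sketch below assumes this simple design only as an illustration; the specific designs and IRT extensions used in the two studies may differ.

    # Minimal sketch of the Warner (1965) randomized-response estimator.
    # Illustration only; the designs and IRT models discussed in the talk may differ.
    def warner_prevalence(yes_count, n, p):
        """Estimate the prevalence pi of a sensitive attribute.

        With probability p a respondent answers the sensitive question,
        with probability 1 - p its negation, so
            P(yes) = p * pi + (1 - p) * (1 - pi).
        """
        if p == 0.5:
            raise ValueError("p = 0.5 leaves pi unidentifiable")
        lam = yes_count / n                      # observed proportion of 'yes' answers
        pi_hat = (lam - (1 - p)) / (2 * p - 1)   # invert the mixing equation
        return min(max(pi_hat, 0.0), 1.0)        # truncate to [0, 1]

    # Example: 240 'yes' answers from 600 respondents, randomization probability 0.7
    print(warner_prevalence(240, 600, 0.7))      # about 0.25

Because respondents know that any individual "yes" may stem from the randomization device rather than from their true status, the design offers a privacy guarantee that is intended to reduce cheating and self-presentation biases relative to direct questioning.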
