Survey researchers inevitably face the problem of item non-response; that is, some respondents provide valid answers to some questions but invalid answers to others, such as "don't know," "no opinion," or "hard to say." Including only those respondents who provide valid answers to each question may bias the estimates, because non-response does not occur randomly. This study applies a selection bias model to the problem of item non-response and addresses two questions: under what conditions does item non-response cause estimation bias, and how can we correct for it?
By employing survey data collected by Taiwan's Elections and Democratization Studies Project on the 2004 Presidential Election (TEDS 2004P), this research applies selection bias models to three topics of interest in the survey: vote choice, political efficacy, and position on the independence-unification issue. The research findings show that the vote choice model estimates may suffer from serious selection bias because non-respondents tended to be pan-blue voters and to vote for the pan-blue candidate. In the political efficacy model, non-respondents tended to be pan-green voters and also to have higher levels of political efficacy. Estimates for that model may therefore also suffer from selection bias. In the independence-unification issue model, by contrast, the factors affecting rates of item non-response were not systematically related to the respondents' positions on the issue. Therefore, the model estimates do not suffer from selection bias. The research findings demonstrate that the seriousness of selection bias is not determined by the non-response rate per se, but rather by the degree to which response rates are systematically related to particular answers to a given survey question. The selection bias model can provide a satisfactory way to correct for estimation bias introduced by item non-response.
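The mechanism described above, where bias depends not on the non-response rate itself but on whether non-response is systematically related to the answers, can be illustrated with a minimal simulation sketch of a Heckman-style two-step selection correction. This is a hypothetical illustration with simulated data, not the TEDS 2004P data or the authors' exact specification; all variable names (`x`, `z`, `respond`, etc.) are assumptions made for the example. Here non-respondents are simulated to differ systematically on the outcome, so the naive estimate over respondents only is biased, and adding the inverse Mills ratio from a first-stage probit of response corrects it.

```python
# Hypothetical sketch of a Heckman two-step correction for item non-response.
# Simulated data only; not the paper's dataset or exact model specification.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000

x = rng.normal(size=n)  # covariate affecting both the answer and responding
z = rng.normal(size=n)  # exclusion restriction: affects responding only
# Correlated errors (rho = 0.6) make non-response informative about the answer.
u, v = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=n).T

y_star = 1.0 + 0.5 * x + u                 # latent answer (true slope 0.5)
respond = (0.5 - 1.0 * x + 1.0 * z + v) > 0  # item-response indicator

# Naive OLS on respondents only: biased, since E[u | respond, x] varies with x.
Xr = np.column_stack([np.ones(respond.sum()), x[respond]])
naive = np.linalg.lstsq(Xr, y_star[respond], rcond=None)[0]

# Step 1: probit of response on the selection covariates, fit by MLE.
Zs = np.column_stack([np.ones(n), x, z])
def negll(b):
    p = np.clip(norm.cdf(Zs @ b), 1e-10, 1 - 1e-10)
    return -np.where(respond, np.log(p), np.log(1 - p)).sum()
b_hat = minimize(negll, np.zeros(3)).x

# Step 2: include the inverse Mills ratio as an extra regressor.
xb = Zs @ b_hat
mills = norm.pdf(xb) / norm.cdf(xb)
Xh = np.column_stack([Xr, mills[respond]])
heckman = np.linalg.lstsq(Xh, y_star[respond], rcond=None)[0]

print("naive slope:", naive[1])
print("corrected slope:", heckman[1])
```

In this setup the naive slope drifts away from the true value of 0.5 because respondents are a selected sample, while the corrected slope recovers it; if the selection equation were unrelated to the outcome errors (as in the independence-unification model above), the Mills-ratio term would be irrelevant and both estimates would agree.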