Nonresponse bias is the type of survey error that stems from selected respondents not taking the survey. Factors such as lifestyle, personal values, and demographics can differ noticeably between the people who respond to a survey and the potential respondents who chose not to take it, and these differences bias the survey data.
This type of bias can occur in any survey, so ensuring a pleasant respondent experience and using good field methodology are essential in mitigating the effects of this bias.
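The mechanics behind this can be illustrated with a small simulation (a hypothetical scenario constructed for illustration, not data from a real survey): two population groups hold different opinions, but one group responds at a much lower rate, so the observed average drifts away from the true population average.

```python
import random

random.seed(42)

# Hypothetical population: half are "busy" people who rate a brand around
# 8/10 on average; the other half rate it around 4/10 on average.
population = [("busy", random.gauss(8, 1)) for _ in range(5000)] + \
             [("other", random.gauss(4, 1)) for _ in range(5000)]

true_mean = sum(score for _, score in population) / len(population)  # ~6.0

# Nonresponse: busy people complete the (long) survey only 10% of the
# time, while others complete it 80% of the time.
response_rate = {"busy": 0.10, "other": 0.80}
respondents = [score for group, score in population
               if random.random() < response_rate[group]]

observed_mean = sum(respondents) / len(respondents)

# The observed mean understates the true mean, because the group with
# favorable opinions is underrepresented among the responses.
print(f"true mean:     {true_mean:.2f}")
print(f"observed mean: {observed_mean:.2f}")
```

Because the high-scoring group drops out more often, the survey estimate is pulled well below the true population average even though every individual response is recorded accurately.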
A challenging survey experience can exacerbate nonresponse bias. For example, suppose a research manager at a baby food company wants to learn how young mothers shop for baby food and how they perceive the different baby food brands. Under pressure to get as much data as possible out of the survey, the team designs a questionnaire that is over 60 questions long and takes more than 30 minutes to complete. While the survey is in field, they see high drop-out rates, and fieldwork takes nearly a month to finish. The returned data shows that respondents have a poor perception of their brand, so the team overhauls its brand imagery and messaging. Once the changes are rolled out publicly, however, they receive negative feedback and discover that mothers actually had positive associations with the brand before the update. When the team goes back to the data and examines the drop-outs, they find that mothers matching their target consumer profile made up a large portion of them: they were too busy to complete such a long survey.
The 1936 U.S. presidential election is a well-known example of how field methodology can contribute to this bias. A popular publication of the time, The Literary Digest, ran a poll that had correctly predicted the outcomes of previous presidential elections. In the 1936 race between Alf Landon and Franklin D. Roosevelt, The Literary Digest predicted that Landon would win by a landslide. The opposite happened: Roosevelt won by a landslide, with Landon losing both the electoral vote and the popular vote. A review of The Literary Digest's field methodology found that of the 10 million questionnaires sent out, only 2.3 million were returned. The 7.7 million people who chose not to submit responses differed significantly in their political preferences from those who did, so nonresponse bias was a contributing factor in the failure of this poll.