Amy Obendorf | Survey Scientist, Civis Analytics

One of the biggest challenges in survey research is ensuring respondents provide accurate answers to survey questions. While it would be nice to believe people are careful and truthful in their survey responses, studies have shown this isn’t always the case. 

People treat surveys like a conversation with an interviewer (even if they are taking the survey online), and therefore are prone to responding in a manner that paints themselves in a positive light. This phenomenon, called social desirability response bias, means that respondents over-report their good behavior and under-report their bad behavior, which can lead to inaccurate survey results. Think about the last time your dentist asked if you floss every day. Were you tempted to say “yes” even though you may not actually floss every single day? Many people will say yes, because they know it’s the “right” answer, even if it’s not the truth.

As researchers, our goal is to measure the opinions and behaviors of the population we are studying. We want respondents to answer our survey questions truthfully and accurately. Using survey experimentation, Civis researchers are taking steps to identify and mitigate social desirability bias. Read on to learn about their approaches and how they improve our data collection methods.

Identifying incidents of social desirability bias

In early April of 2020, the Civis Survey team used our Omnibus survey (our ongoing national survey of adult Americans that collects and analyzes data on current events) to field a question about Census-taking behavior. In this survey, 58 percent of respondents reported that as of Apr. 4, they had completed the Census.

[Bar chart: initial round of Omnibus results, skewed by social desirability bias]

Because completing the Census is widely known as a civic duty for Americans, the Civis team expected some social desirability bias among these survey respondents — and there was! We compared the Omnibus results to data on the Census Bureau’s website, which showed that as of Apr. 4, 2020, only 44.5 percent of Americans had actually completed the Census. Our survey overestimated the rate of Census completion by nearly 14 percentage points.

We wanted respondents to feel comfortable answering the questions truthfully, so we attempted to minimize the pressure for respondents to misreport by designing and testing new questions. 

Modifying and testing survey questions to reduce bias

In September 2020, we tested two alternative Omnibus survey questions asking about Census-taking behavior. The first question format, a multi-select, showed respondents a list of four activities and asked them to select all of the activities in which they had participated over the previous eight months.

For the second question format, we used a method called a list experiment. List experiments use a split ballot design, in which respondents are randomly assigned to see either Ballot A or Ballot B. Both ballots asked respondents to report the total number of listed activities in which they had participated over the same eight-month period.

Ballot A showed respondents a list of three activities, not including “Completed the 2020 Census.” Ballot B showed the same list, this time including “Completed the 2020 Census.”

The questions are as follows:

Original question

Sample question: Have you or has someone in your household completed the 2020 Census? 
Yes, No, or Don't Know

New Question 1: Multi-select

Sample question: Which of the following activities have you or someone in your household completed in the last eight months?
Changed jobs or careers, Took an online class, Applied for a home loan, Completed the 2020 Census, None of the above

New Question 2: List experiment


Ballot A (shown to half of respondents):

Sample question: How many of the following activities have you or someone in your household completed in the last eight months?
Please enter a number between 0 and 3: 
1. Bought groceries
2. Listened to music, podcasts, or the radio
3. Received a Covid-19 (Coronavirus) test

Ballot B (shown to half of respondents):

Sample question: How many of the following activities have you or someone in your household completed in the last eight months?
Please enter a number between 0 and 4: 
1. Bought groceries
2. Listened to music, podcasts, or the radio
3. Received a Covid-19 (Coronavirus) test
4. Completed the 2020 Census
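
To make the split ballot mechanics concrete, here is a minimal Python sketch of the random assignment. The function name and the simple 50/50 coin flip are illustrative assumptions, not the actual Omnibus survey code:

```python
import random

# Items shown on both ballots
BASE_ITEMS = [
    "Bought groceries",
    "Listened to music, podcasts, or the radio",
    "Received a Covid-19 (Coronavirus) test",
]

# The sensitive item that appears only on Ballot B
CENSUS_ITEM = "Completed the 2020 Census"

def assign_ballot() -> list[str]:
    """Randomly show a respondent Ballot A or Ballot B with equal probability."""
    if random.random() < 0.5:
        return BASE_ITEMS                   # Ballot A: three non-sensitive items
    return BASE_ITEMS + [CENSUS_ITEM]       # Ballot B: the same items plus the Census item
```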

To estimate the percentage of Americans who completed the Census, we compared the mean number of activities reported on each ballot.

Both ballots include the same three non-Census items; only Ballot B adds “Completed the 2020 Census.” Because that item is the only difference between the two ballots, subtracting the Ballot A mean from the Ballot B mean gives an unbiased estimate of the Census completion rate, without any individual respondent ever reporting it directly.
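
As a rough illustration of that calculation, here is a short Python sketch using made-up response counts (not our actual survey data):

```python
# Hypothetical respondent data: each number is how many listed activities one
# respondent reported completing. These values are invented for illustration,
# not Civis Omnibus data.
ballot_a_counts = [2, 3, 1, 2, 3, 2, 1, 3, 2, 2]   # saw the 3-item list (no Census item)
ballot_b_counts = [3, 3, 2, 3, 4, 2, 2, 3, 2, 3]   # saw the 4-item list (includes Census item)

mean_a = sum(ballot_a_counts) / len(ballot_a_counts)   # average count on Ballot A
mean_b = sum(ballot_b_counts) / len(ballot_b_counts)   # average count on Ballot B

# Because the Census item is the only difference between the ballots, the
# difference in means estimates the share of respondents who completed the Census.
estimated_completion_rate = mean_b - mean_a
print(f"Estimated Census completion rate: {estimated_completion_rate:.0%}")
```

With these made-up counts, the difference in means works out to an estimated completion rate of 60 percent.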

Experiment results

As of late August 2020, just before we conducted this experiment, the benchmark Census completion rate from the Census Bureau was 64.9 percent. We wanted to determine which question yielded a completion rate closest to this benchmark, which represents the true value of Census completion. 

The experiment showed that the original direct question led respondents to over-report Census completion (70 percent), while the multi-select question (New Question 1) led them to under-report it (52 percent). The list experiment (New Question 2) came extremely close to the benchmark, at 64.5 percent.

What does this mean for survey design?

For questions that are sensitive, or that ask about socially desirable (or undesirable) behaviors, a list experiment format like New Question 2 may be a good option. Because respondents report only how many of the listed behaviors they engaged in, this question format adds a sense of anonymity: respondents face less pressure to misreport their behavior, because they never tell us which specific activities they have done.

From a research perspective, this question format is also beneficial because it allows us to understand sensitive behaviors without directly asking respondents about those behaviors. 

This experiment demonstrates the influence of question format on how respondents answer surveys. When addressing a research question, it is important to consider which question types are best equipped to yield accurate results.