We’re proud of the many great organizations we get to work with, such as our Ask American Anything work with the National Domestic Violence Hotline (The Hotline). As a member of Civis’s Survey Research team, I’m also proud of our commitment to best-in-class public opinion research, grounded in rigorous science. In addition to the data science work we do and the software we build, our surveys team designs and conducts research that results in thousands of phone and online interviews per week, from national surveys on public policy to precision-targeted surveys of consumer habits in the markets our clients care about most.
Before these projects even begin, my role often involves working with clients to help them find the most effective way to learn what they want to know from the people they are most interested in. The work we did with The Hotline is a good case in point.
As Ola discussed, The Hotline wanted to know a few things, including:
- How much does the public support restricting gun access for individuals convicted of stalking?
- Currently, federal law bars a person convicted of domestic violence from gun ownership if they are married to their victim, but not for those who are unmarried. Does public opinion support this ‘boyfriend gap’?
These were great questions – which is why they were chosen – but a little more work was needed to get the most value from this project for The Hotline.
Problem 1: Question framing matters
A researcher’s first impulse might be to simply restate the situation and ask how much the respondent agrees:
“Currently, federal law bars a person convicted of domestic violence from gun ownership if they are married to their victim, but not for those who are unmarried. Do you support or oppose this discrepancy in the law?”
Maybe you can see the problem: the question itself points out a ‘discrepancy,’ which suggests what the ‘right’ answer is – that is, what answer the person posing the question might expect. Decades of research in survey methodology have shown that, while people do have their own opinions, leading or loaded questions can influence the way they end up responding, in essence putting a thumb on the scale.
Problem 2: Question order matters
One improvement would be to present several scenarios separately, asking in each case how much the respondent supports the person in question being legally allowed to purchase a gun:
A. A person is convicted of domestic abuse, after repeated incidents of threatening and physically striking their spouse.
B. A person is convicted of domestic abuse, after repeated incidents of threatening and physically striking the person they are dating but not living with.
C. A person is convicted of misdemeanor stalking, after repeatedly following a stranger home and calling them at work.
And yet, this approach is also potentially problematic. We could allocate only two questions to The Hotline, and more survey time means greater expense. We also have to remember that respondents might tire of answering what seems like the same question over and over, paying less attention to the task or dropping out altogether. But perhaps the biggest problem, as prior research has shown, is that people’s answers to earlier questions may influence their responses to later ones. If I agree with the first of four similar questions, I may be more likely to agree with later questions, even if they are slightly different.
Solution: A randomized controlled experiment
To resolve both of these problems, we decided to randomly assign each respondent one of the three scenarios above, or a fourth, more ‘neutral’, non-violent scenario:
D. A person and their spouse get into repeated verbal arguments that are loud enough to be overheard by neighbors.
Instead of posing these questions to everyone, each person was randomly given only one of the four (either A, B, C, or D). This resolves the problem of question order because each person responds to only one question. It also resolves the problem of question framing because it allows us to compare directly those people who received a question about an abusive spouse and those who received a question about an abusive unmarried partner. Because the questions were randomly assigned, we can expect no systematic differences between these groups of people, which allows us to make an ‘apples-to-apples’ comparison. And because we included a ‘neutral’ control group, we can measure how much more or less a person supports restricting gun rights relative to their existing preferences about gun rights.
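The assignment step described above can be sketched in a few lines of Python. This is a minimal illustration, not our production survey software: the scenario labels are paraphrased from this post, and the uniform assignment probabilities and fixed seed are assumptions for the sketch.

```python
import random
from collections import Counter

# Each respondent is shown exactly one of four scenarios, chosen
# uniformly at random (an assumption for this sketch).
SCENARIOS = {
    "A": "convicted of domestic abuse against a spouse",
    "B": "convicted of domestic abuse against a dating partner",
    "C": "convicted of misdemeanor stalking of a stranger",
    "D": "repeated loud verbal arguments (non-violent control)",
}

def assign_scenario(rng):
    """Randomly assign one scenario, so the four groups are comparable."""
    return rng.choice(sorted(SCENARIOS))

rng = random.Random(42)  # fixed seed so the sketch is reproducible
assignments = [assign_scenario(rng) for _ in range(4000)]

# Random assignment puts roughly a quarter of respondents in each group,
# with only chance differences between groups.
counts = Counter(assignments)
print(counts)
```

Because assignment depends only on the random draw, any difference in responses between groups can be attributed to the scenario wording itself rather than to who happened to answer.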
- No boyfriend gap in public opinion: attitudes about the gun rights of domestic abusers are identical whether the abuser is married to their victim or unmarried.
- The chart above shows our estimates of support for gun rights for each message read to respondents. All messages involving domestic violence show lower support for the perpetrator’s gun rights than the control (non-violent) message.
Now that we’ve conducted the survey, asking the right questions, I’ll provide a bit more insight into how we analyzed it.
To analyze the data we collected, we divided the respondents into two groups: those who said they were opposed to the individual being allowed to buy a gun and those who did not (e.g., those who said they were supportive or had no opinion). Then, controlling for other relevant variables, like the respondent’s gender, age, geographic location, and race, we calculated the overall impact of each scenario on an individual’s likelihood of saying they were opposed. We also applied data science, using Bayesian techniques, to recover the true impact of each scenario—exactly how much of an effect it had on opinion.
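The core of this analysis—comparing each treatment group’s rate of opposition against the neutral control—can be sketched on simulated data. This sketch deliberately omits the covariate adjustment and Bayesian modeling described above, and the ‘true’ opposition probabilities below are made up purely for illustration, not taken from our results.

```python
import random
from collections import defaultdict

rng = random.Random(0)

# Made-up probabilities of opposing gun access for each scenario;
# "D" is the non-violent control (an assumption for this sketch).
TRUE_OPPOSE = {"A": 0.70, "B": 0.70, "C": 0.65, "D": 0.50}

# Simulate: each respondent sees one randomly assigned scenario
# and either opposes gun access (True) or does not (False).
responses = []
for _ in range(20000):
    scenario = rng.choice("ABCD")
    opposed = rng.random() < TRUE_OPPOSE[scenario]
    responses.append((scenario, opposed))

# Opposition rate within each scenario group.
totals, opposed_counts = defaultdict(int), defaultdict(int)
for scenario, opposed in responses:
    totals[scenario] += 1
    opposed_counts[scenario] += opposed
rates = {s: opposed_counts[s] / totals[s] for s in "ABCD"}

# Treatment effect: extra opposition relative to the neutral control.
effects = {s: rates[s] - rates["D"] for s in "ABC"}
print(rates, effects)
```

With large random samples, each group’s observed rate lands close to its underlying probability, so the difference from the control group recovers the scenario’s effect on opinion.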
In other words, we are able to say that, relative to the baseline of the verbal-arguments question, people are 13.7% less likely to say they support allowing a person convicted of stalking to purchase a gun, and 17.9% less likely to say they support allowing a person convicted of abuse to do so. Without this design we would have been unable to draw these conclusions.
So instead of just asking a question, it’s critical to think about what you are trying to understand, have a point of comparison, and use data science to understand the true impact.
- These results are based on a national landline telephone survey of adults in the United States that Civis Analytics conducted from December 28, 2015 through January 8, 2016 with 3,953 respondents. Respondents are sampled from nationally representative voter and consumer files. Results were weighted to the national adult population of the United States. ↩