Total Survey Design

Writing Good Survey Questions - Part 2. Response Options

Azdren Coma and Seon Yup Lee Season 1 Episode 10

This episode is the second in a series on writing good survey questions, where we dive deep into the intricacies of response options for survey questions, a crucial component often overlooked in survey design. We explore the significance of well-constructed response options in enhancing data quality and the pitfalls of poorly designed ones that can lead to confusion. Our focus will primarily be on closed-ended questions, discussing the different types of scales, like categorical and continuous, and how to balance the resolution of your options to capture necessary details without overwhelming respondents. Join us as we unpack strategies to make your survey options exhaustive, exclusive, and free from bias.

Support the show

Find us online at: instagram.com/totalsurveydesign/
https://taplink.cc/totalsurveydesign
Contact us at: totalsurveydesign@gmail.com

SYL: In this episode we focus on the response options to survey questions. 

SYL: While response options are usually developed alongside the base question, we wanted to dedicate an episode solely to response options because there are many important principles to consider.

Response options are critical for the quality of survey data. They play a key role in how respondents interpret and answer questions. Well-designed response options can significantly improve respondent understanding and the accuracy of the data collected. Poorly designed options, on the other hand, can lead to confusion and unreliable data. It's essential to give them careful thought and attention.

Response options generally fall into two categories: closed-ended and open-ended. Closed-ended responses include options like multiple choice and rating scales, where respondents select from predefined answers. On the other hand, open-ended responses allow respondents to provide their own answers in their own words. Closed-ended questions are great for quantitative data and easy analysis, while open-ended questions can provide richer, more detailed qualitative insights. The choice between them depends on your survey's goals and the nature of the information you're seeking. In this episode, we will focus mainly on closed-ended questions.

AC: When it comes to designing survey questions, one of the fundamental aspects to consider is the scale of your response options. Let's break down what that means and how you can ensure you're getting the most useful data possible from your respondents.

First, you'll need to decide if your question should use a categorical or continuous scale.

Categorical scales are used for questions where the responses are distinct categories. For instance, if you're asking about a respondent's favorite type of cuisine, you might list options like Italian, Chinese, Mexican, and so on. Each category is separate and doesn't imply any order or ranking.

On the other hand, continuous scales are used when you want to capture a range of values, like age or income. These scales allow for a spectrum of responses, providing more detailed information.

Speaking of detail, that brings us to our next point: ensuring your options are realistic. This means providing choices that respondents can easily relate to and select. For example, if you're asking about annual income, it's important to offer income brackets that reflect the population you're surveying. Options that are too narrow or unrealistic can frustrate respondents or lead to inaccurate data.

Next, let's talk about resolution. When we say 'resolution' in this context, we're referring to the level of detail in your response options. For example, if you're asking about age, you might wonder if you have high enough resolution. Are you offering only three age categories: 18-29, 30-49, and 50+? This might not be detailed enough for your needs. More specific age ranges could provide better insights, like breaking it down into smaller increments such as 18-24, 25-34, and so on.
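To make the resolution trade-off concrete, here is a minimal sketch (hypothetical brackets and ages, not from the episode) showing how the same respondents look under a coarse versus a finer set of age brackets:

```python
# Hypothetical sketch: how bracket resolution changes the detail you retain.
ages = [19, 23, 28, 31, 45, 52, 67]

coarse = [(18, 29), (30, 49), (50, 120)]  # low resolution: three wide brackets
fine = [(18, 24), (25, 34), (35, 44), (45, 54), (55, 64), (65, 120)]  # higher resolution

def bracket_label(age, brackets):
    """Return the label of the bracket an age falls into."""
    for lo, hi in brackets:
        if lo <= age <= hi:
            return f"{lo}-{hi}"
    return "out of range"

# Under the coarse scheme, 19 and 28 are indistinguishable;
# under the fine scheme they land in different brackets.
print([bracket_label(a, coarse) for a in ages])
print([bracket_label(a, fine) for a in ages])
```

The coarse scheme collapses a 19-year-old and a 28-year-old into one category; the finer scheme separates them at the cost of a longer option list.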

However, there's a flip side to this. You can also have too high a resolution. Sometimes offering too much detail, especially with sensitive topics, can overwhelm respondents or make them uncomfortable. This can lead to higher dropout rates or people skipping questions entirely. For instance, asking for precise income down to the dollar might feel too intrusive, whereas broader brackets might encourage more honest and complete responses.

So, when designing your survey questions, aim for a balance. It's all about finding that sweet spot where you're capturing the necessary information while keeping your survey user-friendly and considerate of your audience's comfort.

SYL: One of the most important principles to keep in mind when you have categorical options is that the options need to be exhaustive and exclusive. Exhaustive simply means that every possible response is covered by some option. So, for age, you shouldn't have categories that only go up to 80 years old, because you know there might be people older than 80. For gender, there are people who fall outside of the man/woman binary, so you want to at least give the option of "other".

Exclusivity, on the other hand, means that a respondent shouldn't be confused about which response option to select. Take age as an example. Say you are 25 years old: if the response categories are "18 to 25" and "25 to 32", you wouldn't know whether to select the first or the second category.
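These two principles can be checked mechanically for numeric brackets. Here is a minimal sketch (assuming integer-valued answers such as age in whole years) that flags gaps and overlaps:

```python
# Hypothetical sketch: check that numeric response brackets are
# exhaustive (no gaps over the expected range) and exclusive (no overlaps).
# Assumes integer-valued answers, e.g. age in whole years.

def check_brackets(brackets, minimum, maximum):
    """Report gaps and overlaps in a list of (low, high) brackets."""
    problems = []
    brackets = sorted(brackets)
    if brackets[0][0] > minimum:
        problems.append(f"not exhaustive: nothing below {brackets[0][0]}")
    if brackets[-1][1] < maximum:
        problems.append(f"not exhaustive: nothing above {brackets[-1][1]}")
    for (lo1, hi1), (lo2, hi2) in zip(brackets, brackets[1:]):
        if lo2 <= hi1:
            problems.append(f"not exclusive: {lo1}-{hi1} overlaps {lo2}-{hi2}")
        elif lo2 > hi1 + 1:
            problems.append(f"not exhaustive: gap between {hi1} and {lo2}")
    return problems

# The 25-year-old from the example fits both of these brackets:
print(check_brackets([(18, 25), (25, 32)], 18, 32))
```

Running this on the "18 to 25" / "25 to 32" example reports the overlap at 25, while non-overlapping brackets like 18-24 and 25-32 pass cleanly.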

AC: The order in which you present the response options can introduce bias. To minimize this, it's often best to randomize the order of options, especially in online surveys. This helps prevent any systematic bias that might arise from respondents consistently choosing options at the beginning or end of a list. However, in some cases, a fixed order might be necessary. If so, ensure the order is logical and consider varying the order in different versions of the survey to check for order effects.
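In an online survey tool, per-respondent randomization is usually a built-in setting, but the idea can be sketched in a few lines. This is a hypothetical illustration (the function name and the "keep 'Other' last" convention are assumptions, not from the episode):

```python
# Hypothetical sketch: randomize response options per respondent,
# optionally pinning a catch-all option like "Other" to the end.
import random

def randomized_options(options, fixed_last=None, seed=None):
    """Return the options in a shuffled order for one respondent."""
    rng = random.Random(seed)  # a per-respondent seed makes the order reproducible
    shuffled = list(options)
    rng.shuffle(shuffled)
    if fixed_last is not None:
        shuffled.append(fixed_last)  # keep the catch-all in a fixed, logical position
    return shuffled

print(randomized_options(["Reading", "Watching TV", "Listening to music"],
                         fixed_last="Other", seed=42))
```

Seeding by respondent ID keeps each person's order stable if they revisit the page, while still varying the order across respondents.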

SYL: Rating scales, such as Likert scales, are commonly used to measure attitudes or perceptions. An effective rating scale should be consistent in direction, usually from negative to positive or vice versa. This helps respondents understand the scale intuitively and reduces errors. 

From a design perspective, the visual presentation of response options can greatly impact readability and comprehension. Use clear spacing and alignment to make the options easy to scan. Avoid clutter and ensure that the options are clearly separated from each other. Consistent formatting throughout the survey helps respondents know what to expect and reduces cognitive load.

AC: Related is the concept of priming, where what respondents have just seen, heard, or read influences how they answer. There are two potential issues that come with priming. One is recency bias, where people's responses hinge on what they have encountered most recently. For example, suppose a survey asks respondents, "Which of the following activities do you enjoy the most during your leisure time? Please select one," and then lists the response options "Reading, Watching TV, and Listening to Music." Respondents may be more likely to select "Listening to Music" simply because it is the last option they read. One way to fight recency bias is to rotate or shuffle the response options in questions like these.

SYL: The opposite of recency bias is the primacy effect, which is the tendency of respondents to select the first few options presented to them, as these are more likely to be remembered and perceived as more important or relevant. For example, take a question that asks, "Which of the following news sources do you trust the most? Please select one," with the options listed as "CNN, BBC, Reuters, and Bloomberg." In this list, respondents might be more likely to choose "CNN" or "BBC" because they are the first options they see, demonstrating the primacy effect. As with recency, questions susceptible to the primacy effect need response options that are rotated or shuffled in the survey.

AC: When creating open-ended responses, the size of the text box you provide should match the length of responses you're expecting. If you want detailed feedback, offer a larger text box. For brief inputs like a name or a number, a smaller box will do. Offering a large text box when you are expecting just a two-digit number will confuse respondents and make them doubt whether they are understanding the survey question properly.

Finally, watch out for unbalanced scales. An example of an unbalanced scale is: "Strongly agree, agree, somewhat agree, neither agree nor disagree, and strongly disagree." Here there are three agree-side options but only one disagree-side option. This can skew your data, because people naturally expect the options to be arranged symmetrically around a neutral midpoint, and the extra agree-side options pull responses toward agreement.
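One simple way to state the balance rule: the neutral midpoint should sit exactly in the middle, with the same number of options on each side. A minimal sketch of that check (the function name and midpoint label are illustrative assumptions):

```python
# Hypothetical sketch: a scale is balanced if the neutral midpoint
# has the same number of options on each side.

def is_balanced(scale, midpoint="neither agree nor disagree"):
    """True if the midpoint sits exactly in the middle of the scale."""
    labels = [s.lower() for s in scale]
    if midpoint not in labels:
        return False  # no neutral midpoint to balance around
    mid = labels.index(midpoint)
    return mid == len(labels) - 1 - mid  # equal counts on each side

unbalanced = ["Strongly agree", "Agree", "Somewhat agree",
              "Neither agree nor disagree", "Strongly disagree"]
balanced = ["Strongly agree", "Agree",
            "Neither agree nor disagree",
            "Disagree", "Strongly disagree"]

print(is_balanced(unbalanced))  # the midpoint sits off-center here
print(is_balanced(balanced))
```

The unbalanced example from above fails the check because its midpoint is the fourth of five options rather than the third.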
