Mobile Survey Design Tips: Long Answer Lists Compromise Data Quality. What Can You Do About It?

This blog is an excerpt from research presented at Insights Association’s June 2019 NEXT conference. For the full version of the whitepaper, click here. A webinar of this research can be viewed here.

Smartphones are an integral part of the way we live: an indispensable tool for connecting with others, entertainment, business applications, news, education, and shopping. According to the Ericsson Mobility Report, smartphone subscriptions reached 5 billion worldwide this year (1). This ubiquity has implications for insights professionals conducting research and surveys to gather customer insights. Online surveys are increasingly taken on mobile devices rather than desktop computers; estimates suggest that anywhere from 30 to 60 percent of all survey takers currently use a smartphone or tablet to access a survey (2). Online surveys must therefore be designed to render and function well on these mobile devices, with their smaller screens and touch input.

Figure 1. A survey must be user-friendly for both desktop and mobile users

Over the years, Dynata, FocusVision, and MaritzCX have jointly conducted a series of research-on-research studies to understand survey preferences among mobile users and best practices for designing mobile-friendly surveys. One area we've investigated is the use of long answer option lists in mobile surveys.

Prior research conducted by MaritzCX in 2014 showed that lengthy answer option lists can compromise mobile data quality (Figure 2). The longer the list, the more likely the survey participant's response will come from the top of the list (the primacy effect). This is especially problematic where an answer list has a natural sequential ordering (e.g., categories for time, age, or amount spent) and therefore cannot be randomized; for lists that can be shuffled, a minimal randomization sketch follows Figure 2.

Figure 2. Primacy effects become more pronounced with long answer option lists.
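Where an answer list has no natural order, one common mitigation is to randomize the option order per participant so that no single option consistently benefits from appearing first. Below is a minimal sketch of this idea in Python; it is illustrative only (the brand and age-band lists are made up), not the scripting used in the study.

    import random

    def display_order(options, has_natural_order=False):
        """Return the order in which answer options are shown.

        Lists with a natural sequence (time, age, spend bands) are
        left as-is; all others are shuffled per participant so no
        option sits consistently at the top of the list.
        """
        if has_natural_order:
            return list(options)
        shuffled = list(options)
        random.shuffle(shuffled)  # in-place shuffle of the copy
        return shuffled

    # A brand list can be shuffled; age bands cannot.
    print(display_order(["Toyota", "Ford", "Honda", "BMW"]))
    print(display_order(["18-24", "25-34", "35-44"], has_natural_order=True))

For sequential lists that cannot be shuffled, the text box approach discussed next is one of the few remaining options.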

One way to address primacy effects is to use an open-end text box question: the answer must be typed in, and there is no aided list to bias the respondent. Figure 3 shows the same question asked in two formats. One version includes a standard answer list of car brands to choose from; the other presents a number of empty text boxes in which to type an answer. Can this be a suitable alternative for eliminating primacy effects and improving data quality on mobile surveys? We examined this possibility in a survey of 2,280 participants, using a multi-cell design (sketched after Figure 3) to test each presentation format and evaluate its pros and cons.

Figure 3. In this test, we compared the use of a standard closed-end list (left) with that of a text box question type (right)
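To make the multi-cell design concrete, here is a hedged sketch of how participants might be assigned to one presentation format or the other. The study's actual assignment mechanism is not described in this excerpt; the cell names and the hash-based split below are assumptions chosen for illustration.

    import hashlib

    CELLS = ["closed_end_list", "text_boxes"]

    def assign_cell(participant_id):
        """Deterministically assign a participant to a test cell.

        Hashing the participant ID yields a stable, roughly even
        split across cells without storing any assignment state.
        """
        digest = hashlib.sha256(participant_id.encode()).hexdigest()
        return CELLS[int(digest, 16) % len(CELLS)]

    print(assign_cell("participant-0001"))  # same cell every time for this ID

A deterministic split like this also guarantees that a participant who re-enters the survey sees the same format again.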

Results

Figures 4 and 5 show the results of our test. The text box can be a useful alternative to an aided list for reducing primacy effects, and it holds up reasonably well for a fact-based question: participants submit a similar number of responses with a text box as with a standard closed-end list. The text box question type also allows for discovery of new ideas and answer options the researcher had not considered. However, it may suffer in opinion-based questions, where far fewer items are submitted than with a standard list.

Figure 4. Fact-based question: comparison of the number of brands selected/mentioned for each question design. The number of brands submitted via text box nearly matched the number selected from a standard aided list. The text box design also captured a few additional brands in a "new" category.

Figure 5. Opinion-based question: comparison of the number of features desired for each question design. Many more items are submitted when the standard aided list is used. The advantage of the text box design is that it allows participants to type in new ideas and categories the researcher had not considered; this, in fact, occurred for roughly half of the features participants submitted in the text boxes.
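One practical question with the text box format is how typed-in answers get coded against the aided list to identify genuinely new mentions. The sketch below is hypothetical (the KNOWN_BRANDS set and the code_responses helper are not from the study), and real coding would also need fuzzy matching to catch misspellings:

    # Hypothetical post-processing: split typed-in answers into those
    # matching a known brand list and those that are new discoveries.
    KNOWN_BRANDS = {"toyota", "ford", "honda", "bmw"}  # assumed list

    def code_responses(typed_answers):
        known, new = [], []
        for answer in typed_answers:
            cleaned = answer.strip().lower()
            if not cleaned:
                continue  # skip empty text boxes
            (known if cleaned in KNOWN_BRANDS else new).append(cleaned)
        return known, new

    known, new = code_responses(["Toyota", " ford", "Tesla", ""])
    print(len(known), "matched the aided list;", len(new), "were new mentions")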

In conclusion, the choice between an open text box and a closed-end question type depends on the situation: How long is the aided list, and are primacy effects a real concern? How much does discovery of brands not on the aided list matter? Is it a situation where many brands or features would be selected? The standard closed-end list and the text box each have strengths and drawbacks, and which to use hinges on the researcher's specific situation and priorities.

References

  1. “Ericsson Mobility Report June 2019,” Ericsson, June 2019: https://www.ericsson.com/en/mobility-report/reports/june-2019
  2. Internal statistics from Dynata, MaritzCX and FocusVision Decipher

Want to talk? Have a question? We’d love to hear from you. E-mail bloggers@focusvision.com
