This paper is a condensed highlight of research presented at the Insights Association’s June 2019 NEXT conference. A full version of the whitepaper and a webinar of this research are also available.
It’s been over a decade since Apple launched the iPhone. Since its inception, smartphone subscriptions have reached 5 billion worldwide and are expected to grow to over 7 billion by 2024 (1). Likewise, the number of people choosing to take surveys on a smartphone or tablet has risen dramatically. Depending on the sample source and target, an estimated 30 to 60 percent of all survey takers currently use a smartphone or tablet to access a survey (2).
A survey must be user-friendly for both desktop and mobile users.
This rapid adoption of smartphones and tablets has accelerated the need to understand the impact of, and best practices for, designing and collecting survey data from mobile users. Mobile devices are inherently different from desktops. Beyond the question of how best to present questions on a smaller screen, mobile devices enable alternative forms of user input (photo/video capture, voice-to-text, touch tap).
In order to better understand survey preferences of mobile users, Dynata, FocusVision and MaritzCX jointly conducted a research-on-research study exploring the survey presentation of three commonly used question types: grids, Net Promoter Score (NPS), and lists. Using a multi-cell design, we tested various presentation conditions in a survey of 2,280 participants. Our goal was to discover the optimal designs for improving participation, engagement, and data quality within surveys.
Grid-formatted question (desktop view).
Unless altered, traditional grid questions do not render properly on the smaller screens of mobile phones. Phones are most commonly held in portrait orientation, so without proper resizing and reformatting, a grid-formatted question does not fit within the viewable screen of a mobile device (Figure 3).
The grid-formatted question is not mobile-friendly.
We tested two alternative mobile-friendly designs (Figure 4). The first, which we call “grid bars,” breaks up the grid so that the attribute statement appears above the scale, providing more screen real estate for the input areas. The second, which we call “progressive grids” (also known as Card Sort within FocusVision Decipher), shows a single static scale with the attribute statement above it. When one attribute is rated, the next one slides across the screen for the respondent to rate.
Figure 4. The standard grid-formatted question and two alternatives were included in our experimental test: Grid Bars and Progressive.
The Progressive design showed the most promise in our test (Figure 5). There was almost no difference between mobile and desktop data when the Progressive question was used. It also had strong completion rates and the least straight-lining. With a standard grid, completion rates and data quality for mobile users suffer, so this format should be avoided. Even on desktop, the standard grid was outperformed by both of our alternative designs.
* Red asterisk flags significant difference (at 95% confidence) with standard design.
Figure 5. The Progressive design showed the most promise in our test. It showed the most data consistency between device types, had strong completion rates and lowest amount of straight-lining.
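Straight-lining, one of the data-quality measures compared above, occurs when a respondent gives the identical rating to every attribute in a grid. A minimal illustrative check (the respondent IDs and ratings below are hypothetical, not from the study data) might look like:

```python
def is_straightlining(ratings):
    """Flag a respondent who gave the identical rating to every grid attribute."""
    return len(set(ratings)) == 1

# Hypothetical grid responses: one rating per attribute, per respondent.
respondents = {
    "r1": [5, 5, 5, 5, 5],  # same rating throughout -> straight-liner
    "r2": [4, 2, 5, 3, 4],  # varied ratings -> engaged respondent
}
flagged = [rid for rid, ratings in respondents.items() if is_straightlining(ratings)]
print(flagged)  # ['r1']
```

Real data-quality screening typically combines a check like this with speeding and open-end quality measures, but the core flag is this simple.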
The NPS question uses an 11-point scale, and the horizontal length of the scale poses a design challenge on a mobile phone. To fit within the viewable area of the screen, the NPS scale can be arranged vertically. FocusVision has designed its own version of the NPS scale, which employs button boxes instead of radio buttons (Figure 6). The scale-point value labels display inside the button boxes, and the scale end-point descriptions are placed above them. This provides a touch-friendly design for mobile users, with enough room to keep the scale in horizontal orientation on both desktop and mobile devices.
A standard NPS format uses radio-button forms. The FocusVision design employs button boxes, which resize nicely as a horizontal scale on mobile devices. This results in improved data quality over the standard NPS question format.
We put both NPS designs to the test. Consistency in the scale arrangement clearly makes a difference: with the standard desktop (horizontal scale) and standard mobile (vertical scale) versions, NPS ratings were notably different. With the FocusVision NPS design, however, data were consistent between desktop and mobile users, making it the preferred design choice (Figure 7).
* Red asterisk flags significant difference (at 95% confidence) with the three other test treatments.
Figure 7. The FocusVision NPS design showed better data consistency between desktop and mobile users.
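For readers less familiar with the metric itself: NPS is conventionally computed from the 0–10 scale as the percentage of promoters (scores of 9–10) minus the percentage of detractors (scores of 0–6). A short sketch of that standard calculation, with made-up scores for illustration:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        raise ValueError("nps requires at least one score")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Two promoters (10, 9) and two detractors (6, 3) out of six respondents.
print(nps([10, 9, 8, 7, 6, 3]))  # 0.0
```

Because the score is a difference of percentages, even small presentation-driven shifts in the rating distribution (such as the horizontal-versus-vertical effect above) can move the final NPS noticeably.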
Prior research conducted by MaritzCX in 2014 showed that the length of an answer-option list for a closed-end question can impact responses on mobile phones (Figure 8). The longer the list, the more likely the survey participant’s response will come from the top of the list (primacy effect). This is problematic where an answer list (e.g., categories for time, age, or amount spent) has a natural sequential ordering and cannot be randomized.
Figure 8. Primacy effects become more pronounced with long answer option lists.
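A primacy effect like the one in Figure 8 can be quantified by tabulating the share of respondents choosing each list position; a curve that slopes downward for a non-randomized list is the telltale sign. A small sketch of that tabulation (the answer data below are hypothetical):

```python
from collections import Counter

def selection_rate_by_position(selections, list_length):
    """Share of respondents selecting each list position (0-indexed).

    `selections` is one set of chosen positions per respondent. A rate that
    declines steadily with position on a non-randomized list suggests primacy.
    """
    counts = Counter(pos for chosen in selections for pos in chosen)
    n = len(selections)
    return [counts[i] / n for i in range(list_length)]

# Hypothetical single-select answers: chosen position per respondent.
answers = [{0}, {0}, {1}, {0}, {2}, {0}]
print(selection_rate_by_position(answers, 4))
```

In practice this would be run separately for mobile and desktop respondents to see whether the drop-off is steeper on the smaller screen.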
One way to address primacy effects is to use an open-end text box question: an answer must be typed in, and there is no aided list to bias the respondent. In this test, we compare the use of a text box to a standard closed-end list and evaluate the pros and cons of each (Figure 9).
Figure 9. Comparison of an open-end text box and a standard closed-end list question.
For questions requiring a fact-based answer, slightly fewer brands were submitted when a text box was used, compared with a standard aided list in a multi-select question type (Figure 10). The text box design captured a few additional brands in the “new” category; these were mostly luxury brands or older brands that no longer exist (e.g., Lamborghini and Ferrari, or Plymouth and Oldsmobile). We did include an “other (specify)” option in the multi-select question, but the discovery of additional brands was much less likely that way.
Figure 10. Fact-based question: comparison of the number of brands selected/mentioned for each question design.
When we moved to an opinion-based question, the distinction between the two question designs was much more pronounced (Figure 11). Here, people selected around ten items when using the closed-end, multi-select question type but submitted only three items in the text box version of the question. The advantage of a text box design is that it allows participants to type in new ideas and categories not considered by the researcher; this, in fact, occurred for roughly half of the features that participants submitted in the text boxes.
Figure 11. Opinion-based question: Comparison of the number of features desired for each question design.
The standard closed-end list and the text box each have their strengths and drawbacks. The text box can be a useful alternative to an aided list for reducing primacy effects. It held up reasonably well for our fact-based question and allows for discovery of new ideas and answer options not considered by the researcher. However, it may suffer in opinion-based questions, since fewer items will be submitted compared with a standard list. The choice between an open text box and a closed-end question type therefore hinges on the researcher’s specific situation and priorities.
As survey-taking habits change and new technologies are introduced, investigating survey design strategies and establishing best practices is essential. The current study provides guidance on grid design, NPS presentation, and list-question alternatives for the mobile (and non-mobile) audience. We support and encourage the efforts of others to test alternative approaches. Best practices in survey design, like research methodologies themselves, evolve over time and should continually be revisited.
1. “Ericsson Mobility Report June 2019,” Ericsson, June 2019: https://www.ericsson.com/en/mobility-report/reports/june-2019
2. Internal statistics from Dynata, MaritzCX and FocusVision Decipher