If you’ve ever launched a new product, designed a logo or come up with an advertising claim, getting customer feedback was an important part of developing your overall strategy. Online surveys can be a powerful method for collecting this customer feedback, including investigating how customers prioritize things in order of preference—a list of product features, favorite colors, most appealing ad concept, etc.
There are three common techniques researchers use to assess how strongly people feel about an item. Each of these techniques uses ‘scaling,’ which is a way to measure or order something on a numerical scale.
In this example, survey participants are asked to rate the importance of each factor on a 5-point scale. Data from a rating scale task can show the frequency of “very/extremely important” ratings across all participants, which provides a pretty good guideline on what matters most. Roughly 80% rated value for the money as very/extremely important, followed by location and brand name.
In a rating task, items are evaluated independently. That gives an absolute (as opposed to relative) measurement, letting you declare: “For 80% of respondents, value for the money is important.”
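The top-box analysis described above can be sketched in a few lines of Python. The data here is hypothetical, invented purely for illustration; a real study would load actual survey responses.

```python
# Hypothetical rating data: each respondent rates each factor from
# 1 ("not at all important") to 5 ("extremely important").
ratings = {
    "value for the money": [5, 4, 5, 5, 3, 5, 4, 5, 2, 4],
    "location":            [5, 4, 4, 5, 3, 4, 2, 4, 5, 3],
    "brand name":          [4, 3, 5, 4, 2, 4, 3, 5, 4, 3],
}

def top_two_box(scores):
    """Share of respondents rating the item 4 or 5 ("very/extremely important")."""
    return sum(1 for s in scores if s >= 4) / len(scores)

# Report factors from most to least frequently rated important.
for factor, scores in sorted(ratings.items(), key=lambda kv: -top_two_box(kv[1])):
    print(f"{factor}: {top_two_box(scores):.0%}")
```

Because each item is scored independently, the percentages don’t have to sum to anything; several items can all look “important,” which is exactly the discrimination problem discussed below.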
Pros: Rating questions are easy to write, program, and analyze. They don’t take up much survey space, and the results are easy to understand.
Cons: They can suffer from scale bias, the tendency for participants to use rating scales in different ways (e.g. mainly using the positive end of the scale). Rating scales also don’t force participants to discriminate and prioritize among items, so they may give high ratings to everything.
When to use: Rating questions are good when interview space or complexity is a concern, or when you want to evaluate items independently (e.g. satisfaction or performance ratings). Especially for prioritizing longer lists, rating tasks are an efficient way to get a general idea of the most and least important factors.
One of the strengths of a ranking task is that it forces respondents to discriminate between items. They must prioritize which item in the list is most important, second most important, etc. You cannot have ties. You cannot rate everything highly. This means you will get greater distinction in the resulting data compared to a rating question.
This chart shows how often each item was ranked as one of the top 2 most important factors when deciding where to shop. While before we knew that location, value for the money, and brand name were all cited as important factors, we now see that location matters most when participants are forced to prioritize among the items.
Since participants rank their preference for items relative to others in the list, rank data is on a relative scale. (Theoretically, the top-ranked item could be bad, but just not as bad as the others). Keep this in mind, because there are situations where you may not want participants to evaluate items relative to each other, such as rating a performance.
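The top-2 chart described above comes from a simple tally over each respondent’s rank order. Here is a minimal sketch with hypothetical responses (the factor names and data are illustrative, not real results):

```python
# Hypothetical ranking data: each inner list is one respondent's full
# rank order, from most to least important. No ties are allowed.
rankings = [
    ["location", "value for the money", "brand name"],
    ["location", "brand name", "value for the money"],
    ["value for the money", "location", "brand name"],
    ["location", "value for the money", "brand name"],
    ["brand name", "location", "value for the money"],
]

def top_n_counts(rankings, n=2):
    """Count how often each item appears in a respondent's top n ranks."""
    counts = {}
    for order in rankings:
        for item in order[:n]:
            counts[item] = counts.get(item, 0) + 1
    return counts

counts = top_n_counts(rankings)
```

Note the counts only reflect order, not distance: a respondent who barely prefers location over value contributes exactly as much as one with a strong preference, which is the ordinal-data limitation noted below.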
Pros: Ranking tasks are easy to write, program, and analyze, and they force discrimination between items.
Cons: Ranking is a little more time-consuming for the participant than a rating question, and difficult to do for longer lists of items (e.g. people tend not to know how to prioritize items beyond the top 5). Ranking also does not allow ties and provides ordinal data only: participants rank-order the items, so, unlike rating data, there’s no information about how much a participant prefers one item over another, just the order of preference.
When to use: Ranking is good for situations when interview space or complexity may be a concern; when you have a list of options or competitive set (e.g. a list of test names) that you need to prioritize; when it’s important to discriminate between items and ordinal data is okay.
Maximum Difference Scaling (MaxDiff)
For many researchers, MaxDiff has become the preferred method for prioritizing a list of items. Its allure is that it addresses some of the shortcomings of rating and ranking tasks. It:
- Is relatively easy to do
- Doesn’t suffer from scale bias
- Allows discrimination between items
- Indicates strength of preference between items
Participants are asked to make a choice among a set of four or five items and evaluate the extremes: they select the most and least important item in the set. This task is repeated several times, with the choice set changing each time.
Participants find this an easy-to-understand task. Evaluating the two extremes is much simpler than having to discriminate among many items of lesser importance. And because there are no scale points, there is no scale bias. MaxDiff is a carefully designed exercise that enables the researcher to accurately estimate the strength of preference for each item, and each item receives a preference score. From the chart, it’s clear that location is the most important factor. In fact, it is more than twice as important as merchandise selection or parking availability.
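To make the scoring concrete, here is a sketch of the simplest MaxDiff analysis: counting best picks minus worst picks, scaled by how often each item was shown. This counting approach is only a rough approximation; commercial MaxDiff tools typically fit a statistical choice model (e.g. hierarchical Bayes) instead. The tasks and item names below are hypothetical.

```python
# Hypothetical MaxDiff responses: each entry is one completed task --
# the items shown, plus the respondent's "most" and "least" important picks.
tasks = [
    {"shown": ["location", "value", "brand", "parking"],      "best": "location", "worst": "parking"},
    {"shown": ["location", "value", "selection", "brand"],    "best": "location", "worst": "selection"},
    {"shown": ["value", "brand", "parking", "selection"],     "best": "value",    "worst": "parking"},
    {"shown": ["location", "brand", "parking", "selection"],  "best": "location", "worst": "brand"},
]

def maxdiff_count_scores(tasks):
    """Best-minus-worst counts, divided by how many times each item appeared."""
    best, worst, shown = {}, {}, {}
    for t in tasks:
        for item in t["shown"]:
            shown[item] = shown.get(item, 0) + 1
        best[t["best"]] = best.get(t["best"], 0) + 1
        worst[t["worst"]] = worst.get(t["worst"], 0) + 1
    return {item: (best.get(item, 0) - worst.get(item, 0)) / shown[item]
            for item in shown}

scores = maxdiff_count_scores(tasks)
```

Scores range from +1 (always picked as most important when shown) to -1 (always picked as least important), which is why every pick carries information even though respondents never see a rating scale.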
Pros: No scale bias; allows discrimination among items; measures strength of preference. It’s also a natural task, since people tend to make buying decisions by choosing among a set of alternatives.
Cons: Can be more costly to implement; requires special software for setup and analysis; longer interview times; may require some explanation of how to interpret and report scores.
When to use: Use MaxDiff when you don’t mind the additional setup complexity; when you have a defined choice or competitive set (e.g. a list of test names) to prioritize; and when it’s important to discriminate between items and you need to understand preference strength.
Having customers prioritize a list of options can help you make better decisions for your business and your overall product development or marketing strategy. There are multiple survey techniques for investigating customer preferences. Whether it’s MaxDiff or a simple rating scale task, consider what’s right for your budget and data needs.