Rules of thumb when designing a MaxDiff experiment
These rules of thumb are adapted from those provided by Sawtooth Software:[1]
- Try to limit the number of alternatives in each set to five or fewer. More generally, any set containing more than seven alternatives is highly unlikely to be useful, as respondents find it difficult to compare so many alternatives and tend to take short-cuts (e.g., reading only half the alternatives).
- Specify the number of alternatives in each set to be no more than half the number of alternatives in the entire study.
- Ensure that each alternative is shown at least three times (unless anchored MaxDiff is used; see below).
- Where the focus is only on comparing the alternatives (e.g., identifying the best from a series of product concepts), it is a good idea to create multiple versions of the design to reduce order and context effects. Sawtooth Software suggests that 10 versions are sufficient to minimize these effects, although there is no good reason not to have a separate design for each respondent. Where the goal of the study is to compare different people, such as when performing segmentation studies, it is often appropriate to use a single version (if you have multiple designs, the design itself becomes a source of variation between respondents and may influence the segmentation).
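These rules can be checked mechanically. The following base R sketch assumes a design is represented as a matrix with one row per set (question) and entries that are alternative numbers; check_design is an illustrative helper, not a function from any MaxDiff package.

```r
# Check a candidate design against the rules of thumb above. The design is
# a matrix with one row per set (question); entries are alternative numbers.
check_design <- function(design, n_alternatives) {
  set_size <- ncol(design)
  counts <- tabulate(design, nbins = n_alternatives)
  list(
    set_size_ok  = set_size <= 5,                   # aim for 5 or fewer per set
    half_rule_ok = set_size <= n_alternatives / 2,  # at most half of all alternatives
    exposure_ok  = all(counts >= 3),                # each alternative shown 3+ times
    exposures    = counts
  )
}

# Example: 10 alternatives shown in 6 sets of 5, built from three shuffled
# repetitions so that each alternative appears exactly 3 times.
set.seed(7)
design <- do.call(rbind, lapply(1:3, function(i)
  matrix(sample(10), ncol = 5, byrow = TRUE)))
check_design(design, n_alternatives = 10)
```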
Standard designs for MaxDiff
The orthodox approach to creating an experimental design for a MaxDiff study is to create what is referred to in the statistics literature as an incomplete block design, where 'block' refers to the questions and 'incomplete' indicates that each question contains only a subset of all the alternatives being researched.
The creation of such designs is discussed in blog posts about Q, R, and Displayr.
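For example, in R such a design can be searched for with the crossdes package's find.BIB function (a sketch, assuming crossdes is installed; when no exactly balanced design exists for the chosen numbers, it returns a near-balanced one):

```r
# Search for a balanced incomplete block design: 7 alternatives ('trt'),
# 7 questions ('b'), 3 alternatives per question ('k'). With these numbers
# every alternative is shown 3 times and every pair appears together once.
library(crossdes)  # assumed installed

set.seed(1)
design <- find.BIB(trt = 7, b = 7, k = 3)
design          # one row per question; entries are alternative numbers
isGYD(design)   # reports whether the search achieved a balanced design
```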
Advanced designs for MaxDiff
Large numbers of alternatives
Where there are a large number of alternatives to be researched, and the resulting incomplete block design is infeasibly large, two solutions are:
- Randomly allocate blocks. For example, if a design is created with 20 blocks (i.e., questions), each respondent may be randomly allocated 10 of them (see the sketch after this list).
- Use one incomplete block design to allocate alternatives to respondents, and a second incomplete block design to create each respondent's questions from their allocated alternatives. For example, if there are 100 alternatives, you could:
  - Create a design with 100 alternatives, 20 blocks and 10 alternatives per block (Design A).
  - Create a second design with 10 alternatives, 6 blocks and 5 alternatives per block (Design B).
  - Randomly allocate each respondent to one of the 20 blocks in Design A (i.e., so that the respondent only sees the 10 alternatives in this block).
  - Use Design B to create the questions to be shown to the respondent (where the alternatives used are dictated by the respondent's block in Design A).
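A minimal base R sketch of both solutions follows. The make_design helper is illustrative, not a package function: it builds blocks from shuffled repetitions of the alternatives, which balances how often each alternative appears but does not optimize pairwise balance the way a formal incomplete block design would.

```r
# Each repetition permutes all alternatives and cuts the permutation into
# blocks, so no block repeats an alternative and every alternative appears
# equally often overall (balanced exposure, not a formally optimized design).
make_design <- function(n_alternatives, n_blocks, per_block) {
  stopifnot(n_alternatives %% per_block == 0)
  reps <- n_blocks * per_block / n_alternatives
  stopifnot(reps == round(reps))
  blocks <- do.call(rbind, lapply(seq_len(reps), function(i)
    matrix(sample(n_alternatives), ncol = per_block, byrow = TRUE)))
  blocks[sample(nrow(blocks)), ]  # shuffle the order of the blocks
}

set.seed(123)

# Solution 1: build one design and randomly allocate blocks. Here a design
# with 20 questions is created and a respondent answers a random 10 of them.
design <- make_design(n_alternatives = 20, n_blocks = 20, per_block = 5)
respondent_questions <- design[sample(20, 10), ]

# Solution 2: the two-stage approach from the 100-alternative example above.
design_a <- make_design(100, 20, 10)  # Design A: 20 blocks of 10 alternatives
design_b <- make_design(10, 6, 5)     # Design B: 6 questions of 5 alternatives
block <- design_a[sample(20, 1), ]    # this respondent's 10 alternatives
questions <- matrix(block[design_b], nrow = nrow(design_b))  # relabel Design B
```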
Prohibitions
Strategies for addressing prohibitions include:
- Use commercial software specifically designed to handle prohibitions (e.g., Sawtooth's MaxDiff software).
- Randomize across respondents. For example, if two alternatives are never to be shown together, create the experimental design with a single placeholder alternative in place of the two, and then show one of the alternatives to half the respondents and the other alternative to the remaining half.
- Rotationally split alternatives. For example, if two alternatives are never to be shown together, create the experimental design with a single placeholder alternative in place of the two, and then randomly assign half of the placeholder's appearances to the first alternative and the other half to the second.
- Randomly remove prohibitions. Wherever a prohibited combination appears, manipulate the design so as to remove the prohibition (e.g., replace one of the offending alternatives with another alternative).
- Create sets by randomly selecting alternatives (e.g., using R's sample function) and discard any sets that contain a prohibited combination. Note that this approach is only sensible if different respondents see different sets of alternatives.
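A minimal sketch of this last strategy in base R, where the 20 alternatives and the prohibited pair (alternatives 3 and 7) are purely hypothetical:

```r
# Hypothetical prohibition: alternatives 3 and 7 may never appear together.
prohibited_pair <- c(3, 7)

make_set <- function(n_alternatives, set_size) {
  repeat {
    s <- sample(n_alternatives, set_size)
    if (!all(prohibited_pair %in% s)) return(s)  # keep only permitted sets
  }
}

set.seed(42)
# Six questions of five alternatives for one respondent. Because the sets
# are generated at random, different respondents see different sets, as
# the note above requires.
questions <- t(replicate(6, make_set(20, 5)))
```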
Anchored MaxDiff
There are a variety of alternative forms of anchored MaxDiff. The most generally useful proceeds as follows:
- Get respondents to rate the appeal of all of the alternatives (e.g., using a 7-point rating scale).
- (Optionally) Get respondents to rank any alternatives that receive the same rating.
- Select a subset of the alternatives for inclusion in the MaxDiff experiment.
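As a sketch of these steps for a single respondent, assuming the subset kept for the MaxDiff questions is simply the highest-rated alternatives (the procedure above does not fix a selection rule, so both the data and the cut-off below are hypothetical):

```r
# Hypothetical data for one respondent: 7-point ratings for six alternatives
# and, for alternatives sharing a rating, a tie-breaking rank (1 = best).
ratings  <- c(a = 6, b = 7, c = 6, d = 3, e = 7, f = 5)
tie_rank <- c(a = 2, b = 1, c = 1, d = 1, e = 2, f = 1)

# Order alternatives by rating (descending), breaking ties with the ranks,
# then keep the top four for the MaxDiff questions (an assumed cut-off).
ordering <- order(-ratings, tie_rank)
subset_for_maxdiff <- names(ratings)[ordering][1:4]
subset_for_maxdiff  # "b" "e" "c" "a"
```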
More information about anchored MaxDiff is on the Q Wiki.
See also
See Category:MaxDiff for an overview of MaxDiff and links to key resources.
References
[1] Sawtooth Software, www.sawtoothsoftware.com/download/techpap/maxdifftech.pdf