
Paired Comparison Technique

The paired comparison technique is a research design that yields interval-level scaled scores that are created from ratings made by each respondent for all possible pairs of items under consideration. The basis for the method dates back to its first reported use in the mid-1800s. Although the technique is a very powerful approach for producing a highly reliable ranking of the rated items, it is underutilized by survey researchers due to the amount of data that often must be gathered, and thus its cost and the burden it places on respondents.

At the simplest level, paired comparisons (i.e., simultaneously comparing two things with each other) are made by each respondent among a set of items using a binary scale that indicates which of the two choices is more preferred, more pleasant, more attractive, or whichever other judgment the respondent is asked to make in comparing the two. However, more complex judgments can be generated by having respondents indicate their choices along a continuum of response choices rather than a simple binary choice (A or B).

For example, if a political pollster wanted to determine the relative ordering of voter preferences among five Republican primary candidates, a paired comparison design would yield the most valid data. In this design, each candidate would be paired with each of the other candidates, and each respondent would judge each pair on some criterion. Typically this would be done by using a scaled response format such as Strongly Prefer Candidate A; Prefer Candidate A; Slightly Prefer Candidate A; Slightly Prefer Candidate B; Prefer Candidate B; Strongly Prefer Candidate B. Generally the midpoint of the preference scale—which in this example would be, “Prefer Neither Candidate A nor Candidate B”—is not offered to respondents because it is reasoned that the likelihood that there is complete indifference between the two is extremely low. Providing this “no preference” choice may encourage some respondents to satisfice and use the middle option too readily.
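Enumerating the pairings in such a design is mechanical; a minimal sketch in Python, using hypothetical candidate labels, might look like this:

```python
from itertools import combinations
import random

candidates = ["A", "B", "C", "D", "E"]  # hypothetical labels

# Every unordered pair of candidates; each pair becomes one survey question.
pairs = list(combinations(candidates, 2))
print(len(pairs))  # 10 pairings for five candidates

# Present the pairings to each respondent in a random order.
random.shuffle(pairs)
```

The shuffle step reflects the usual practice of randomizing pair order across respondents to avoid order effects.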

Scoring paired comparison data is straightforward. In the previous example, a “Strongly Preferred” response would be scored with a 3, a “Preferred” response would be scored with a 2, and a “Slightly Preferred” response would be scored with a 1. If Candidate A were paired with Candidate D, and Candidate A were “strongly preferred” over Candidate D by a given respondent, then the respondent would be assigned a +3 score for Candidate A for that pairing and a −3 score for Candidate D. The scaled score for a specific candidate for each respondent is the sum of the respondent's individual scores from each of the pairings in which that candidate was included.

With c items being paired in all possible ways, there are c(c − 1)/2 possible paired comparisons. Thus, with five candidates there are (5(5 − 1))/2, or 10, pairs: AB, AC, AD, AE, BC, BD, BE, CD, CE, and DE. (The pairings would be presented to respondents in a random order.) Each pairing requires a separate question in the survey; this five-candidate comparison therefore requires 10 questions being asked of each respondent. If one of the candidates in this example were “strongly preferred” by a specific respondent over each of the other four candidates she or he was paired with, that candidate would get a score of +12 for this respondent. If a candidate were so disliked that every time she or he was paired with one of the other four candidates the respondent always chose “Strongly Preferred” for the other candidate, then the strongly disliked candidate would be assigned a scaled score of −12 for that respondent. Computing scale scores for each thing that is being rated is easy to do with a computer, and these scaled scores provide very reliable indications of the relative preferences a respondent has among the different items being compared. By contrast, asking a respondent to rank all of the things being compared in one fell swoop (i.e., with one survey question) will yield less reliable and valid data than using a paired comparison design to generate the ranking.
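The scoring rule described above can be sketched in Python. The responses below are hypothetical: each pair maps to a signed weight from the first candidate's perspective (+3/+2/+1 for strongly/plainly/slightly preferring the first candidate, −1/−2/−3 for slightly/plainly/strongly preferring the second):

```python
from itertools import combinations

candidates = ["A", "B", "C", "D", "E"]

# Hypothetical responses from one respondent, keyed by pair. In this
# illustration Candidate A is strongly preferred in all four of A's pairings.
responses = {
    ("A", "B"): 3, ("A", "C"): 3, ("A", "D"): 3, ("A", "E"): 3,
    ("B", "C"): 1, ("B", "D"): -2, ("B", "E"): 2,
    ("C", "D"): -1, ("C", "E"): 1,
    ("D", "E"): 2,
}

# Each pairing adds the weight to the first candidate's scaled score
# and subtracts it from the second's (the +3 / -3 rule from the text).
scores = {c: 0 for c in candidates}
for (first, second), weight in responses.items():
    scores[first] += weight
    scores[second] -= weight

print(scores["A"])  # 12: strongly preferred in all four pairings
```

Because every pairing awards opposite scores to its two members, the scaled scores across all candidates sum to zero for each respondent.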

...
