
Metric multidimensional scaling (MDS) transforms a distance matrix into a set of coordinates such that the (Euclidean) distances derived from these coordinates approximate as well as possible the original distances. The basic idea of MDS is to transform the distance matrix into a cross-product matrix and then to find its eigendecomposition, which gives a principal component analysis (PCA). Like PCA, MDS can be used with supplementary or illustrative elements that are projected onto the dimensions after they have been computed.
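The core procedure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' own code: it assumes squared distances are double-centered into a cross-product matrix, which is then eigendecomposed as in PCA (the function name `classical_mds` and the variable names are ours).

```python
import numpy as np

def classical_mds(D, n_dims=2):
    """Metric (classical) MDS: recover coordinates whose Euclidean
    distances approximate the entries of the distance matrix D."""
    n = D.shape[0]
    # Double-centering: transform squared distances into a cross-product matrix.
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    S = -0.5 * J @ (D ** 2) @ J
    # Eigendecomposition of the cross-product matrix (as in PCA).
    eigvals, eigvecs = np.linalg.eigh(S)
    order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    # Coordinates: eigenvectors scaled by the square roots of the
    # (non-negative) eigenvalues.
    coords = eigvecs[:, :n_dims] * np.sqrt(np.maximum(eigvals[:n_dims], 0))
    return coords
```

For distances that are exactly Euclidean, the recovered coordinates reproduce the original distances up to rotation and reflection.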

An Example

The example is derived from O'Toole, Jiang, Abdi, and Haxby, who used a combination of principal component analysis and neural networks to analyze brain imaging data. In this study, 6 subjects were scanned with fMRI while they watched pictures from 8 categories (faces, houses, cats, chairs, shoes, scissors, bottles, and scrambled images). The authors computed for each subject a distance matrix corresponding to how well they could predict the type of picture that the subject was watching from his or her brain scans. The distance used was d′, which expresses the discriminability between categories.


O'Toole et al. give two distance matrices. The first one is the average distance matrix computed from the brain scans of all 6 subjects. The authors also give a distance matrix derived directly from the pictures watched by the subjects. The authors computed this distance matrix with the same algorithm that they used for the brain scans; they just substituted images for brain scans.

We will use these two matrices to review the basics of multidimensional scaling: namely, how to transform a distance matrix into a cross-product matrix and how to project a set of supplementary observations onto the space obtained by the original analysis.

Multidimensional Scaling: Eigenanalysis of a Distance Matrix

PCA is obtained by performing the eigendecomposition of a matrix. This matrix can be a correlation matrix (i.e., the variables to be analyzed are centered and normalized), a covariance matrix (i.e., the variables are centered but not normalized), or a cross-product matrix (i.e., the variables are neither centered nor normalized). A distance matrix cannot be analyzed directly using the eigendecomposition (because distance matrices are not positive semidefinite matrices), but it can be transformed into an equivalent cross-product matrix, which can then be analyzed.
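The point that a distance matrix cannot be eigendecomposed directly can be checked numerically. In this illustrative snippet (our own, not from the source), the distance matrix has a negative eigenvalue, while the cross-product matrix obtained by double-centering the squared distances does not:

```python
import numpy as np

# Pairwise Euclidean distances among three points on a line.
X = np.array([[0.0], [1.0], [3.0]])
D = np.abs(X - X.T)

# A distance matrix is not positive semidefinite: its diagonal is zero,
# so its eigenvalues sum to zero and at least one must be negative.
print(np.linalg.eigvalsh(D))

# Double-centering the squared distances yields a cross-product matrix,
# whose eigenvalues are all non-negative (up to floating-point error).
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
S = -0.5 * J @ (D ** 2) @ J
print(np.linalg.eigvalsh(S))
```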

Transforming a Distance Matrix into a Cross-Product Matrix

In order to transform a distance matrix into a cross-product matrix, we start from the observation that the scalar product between two vectors can be transformed easily into a distance (the scalar products between vectors correspond to a cross-product matrix). Let us start with some definitions. Suppose that a and b are two vectors with I elements. The Euclidean distance between these two vectors is computed as

\[ d^2(\mathbf{a}, \mathbf{b}) = \sum_{i=1}^{I} (a_i - b_i)^2 \]

This distance can be rewritten to isolate the scalar product between vectors a and b:

\[ d^2(\mathbf{a}, \mathbf{b}) = \mathbf{a}^{\mathsf T}\mathbf{a} + \mathbf{b}^{\mathsf T}\mathbf{b} - 2\,\mathbf{a}^{\mathsf T}\mathbf{b} \]

where \(\mathbf{a}^{\mathsf T}\mathbf{b}\) is the scalar product between a and b.
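The identity above is easy to verify numerically. A small check (our own illustration) comparing the squared distance computed directly with the scalar-product form:

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 0.0, 5.0])

# Squared Euclidean distance computed directly...
d2_direct = np.sum((a - b) ** 2)
# ...and via the identity d^2 = a'a + b'b - 2 a'b.
d2_identity = a @ a + b @ b - 2 * (a @ b)

print(d2_direct, d2_identity)  # both equal 17.0
```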

If the data are stored in an I × J data matrix denoted X (where I observations are described by J variables), the between-observations cross-product matrix is then obtained as

\[ \mathbf{S} = \mathbf{X}\mathbf{X}^{\mathsf T} \]
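In code, the between-observations cross-product matrix is a single matrix product. A small sketch (variable names ours), where each entry (i, j) of S is the scalar product between observations i and j:

```python
import numpy as np

# An I x J data matrix: I = 4 observations described by J = 2 variables.
X = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0],
              [2.0, 2.0]])

# Between-observations cross-product matrix (I x I, symmetric):
# S[i, j] is the scalar product between observations i and j.
S = X @ X.T
```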

A distance matrix can be computed directly from the cross-product matrix
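Element-wise, this amounts to the standard relation d²(i, j) = s_ii + s_jj − 2 s_ij, with s_ii the squared norm of observation i; this follows directly from the scalar-product identity above. A minimal numerical check (our own sketch):

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [3.0, 1.0]])
S = X @ X.T                     # cross-product matrix
s = np.diag(S)                  # s_ii = squared norm of observation i

# Element-wise: d^2(i, j) = s_ii + s_jj - 2 s_ij
D2 = s[:, None] + s[None, :] - 2 * S

# This agrees with squared Euclidean distances computed directly from X.
D2_direct = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
```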

...