
Error Rates

In research, error rate takes on different meanings in different contexts, including measurement and inferential statistical analysis. When measuring research participants' performance on a task with multiple trials, error rate is the proportion of responses that are incorrect. Used this way, error rate can serve as an important dependent variable. In inferential statistics, errors refer to the probability of drawing a false inference about the population from the sample data. Estimating and managing error rates are therefore crucial to effective quantitative research.

This entry mainly discusses issues involving error rates in measurement. Error rates in statistical analysis are mentioned only briefly because they are covered in more detail under other entries.

Error Rates in Measurement

In a task with objectively correct responses (e.g., a memory task involving recalling whether a stimulus had been presented previously), a participant's response can be one of three possibilities: no response, a correct response, or an incorrect response (error). Instances of errors across a series of trials are aggregated to yield the error rate, ideally expressed as a proportion: the number of errors divided by the number of trials in which one has an opportunity to make a correct response. Depending on the goals of the study, researchers may use as the denominator either the total number of responses or the total number of trials (including nonresponses, if they are considered relevant). The resulting error rate can then be used to test hypotheses about knowledge or cognitive processes associated with the construct represented by the targets of response.
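The two denominator choices above can be sketched as follows; the trial outcomes here are illustrative, not from any actual study.

```python
# Sketch: computing error rate from trial-level outcomes.
# Each trial outcome is "correct", "error", or None (no response).
responses = ["correct", "error", "correct", None, "correct", "error"]

answered = [r for r in responses if r is not None]
errors = sum(1 for r in answered if r == "error")

# Error rate over responses only (nonresponses excluded from denominator)
error_rate_responses = errors / len(answered)

# Error rate over all trials (nonresponses count toward the denominator)
error_rate_trials = errors / len(responses)

print(round(error_rate_responses, 3))  # 0.4
print(round(error_rate_trials, 3))     # 0.333
```

With two errors out of five responses (six trials), the two definitions give 0.4 and 0.333, so the denominator choice can matter when nonresponses are frequent.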

Signal Detection Theory

One particularly powerful data-analytic approach employing error rates is Signal Detection Theory (SDT). SDT is applied in situations where the task involves judging whether a signal exists (e.g., "Was a word presented previously, or is it a new word?"). From the error rates across a series of trials, SDT mathematically derives characteristics of participants' response patterns such as sensitivity (the ability to distinguish a signal from noise) and judgment criterion (the tendency to respond in one way rather than the other).
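Under the standard equal-variance SDT model, sensitivity (d') and criterion (c) can be computed from two complementary error rates: the miss rate on signal trials and the false-alarm rate on noise trials. A minimal sketch, with assumed trial counts chosen purely for illustration:

```python
from statistics import NormalDist

# Sketch of equal-variance SDT estimates derived from error rates.
# H = hit rate on signal trials (miss rate = 1 - H);
# F = false-alarm rate, i.e., the error rate on noise trials.
hits, signal_trials = 40, 50        # assumed counts
false_alarms, noise_trials = 10, 50

H = hits / signal_trials            # 0.8
F = false_alarms / noise_trials     # 0.2

z = NormalDist().inv_cdf            # inverse of the standard normal CDF
d_prime = z(H) - z(F)               # sensitivity
criterion = -(z(H) + z(F)) / 2      # response criterion c

print(round(d_prime, 3))            # 1.683
```

Here the miss rate and false-alarm rate are equal (both 0.2), so the criterion is zero (unbiased responding); unequal error rates would shift the criterion away from zero.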

Typically, SDT is based on the following assumptions. First, in each trial, either a signal exists or it does not (e.g., a given word was presented previously or not). Even when there is no signal (i.e., the correct response would be negative), the perceived intensity of the stimulus varies randomly (owing to factors originating from the task or from the perceiver); this variation is called "noise." Noise follows a normal distribution with a mean of zero. Noise always accompanies a signal, and because noise is added to the signal, the distribution of the perceived intensity of the signal has the same (normal) shape. Each perceiver is assumed to have an internally set criterion (called a threshold) used to make decisions in the task. If the perceived intensity (e.g., subjective familiarity) of the stimulus is stronger than the threshold, the perceiver will decide that there is a signal (respond affirmatively—e.g., indicate that the word was presented previously); otherwise, the perceiver will respond negatively. When the response is not consistent with the objective properties of the stimulus (e.g., a negative response to a word that was presented previously or an affirmative response to a word that was not), it is an error.
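The decision model described above can be simulated directly: noise trials draw perceived intensity from a normal distribution centered at zero, signal trials from the same-shaped distribution shifted upward, and a fixed threshold converts each intensity into a yes/no response. The parameter values here (a sensitivity of 1.5 and a threshold of 0.75) are assumptions for illustration only.

```python
import random

# Simulation of the equal-variance SDT decision rule described above.
# Assumed parameters: d' = 1.5, threshold = 0.75.
random.seed(0)
d_prime, threshold = 1.5, 0.75
n = 10_000

# Noise trials: intensity ~ N(0, 1); "yes" above threshold is a false alarm.
false_alarms = sum(random.gauss(0, 1) > threshold for _ in range(n))

# Signal trials: intensity ~ N(d', 1); "no" at or below threshold is a miss.
misses = sum(random.gauss(d_prime, 1) <= threshold for _ in range(n))

print(f"false-alarm rate: {false_alarms / n:.3f}")  # near P(Z > 0.75) ~ 0.227
print(f"miss rate:        {misses / n:.3f}")        # near P(Z <= -0.75) ~ 0.227
```

Because the threshold sits midway between the two distributions, the two error rates converge on the same value; moving the threshold trades misses against false alarms, which is exactly what the criterion captures.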

...
