
Nomograms

Nomograms are graphical representations of equations that predict medical outcomes. Nomograms use a points-based system whereby a patient accumulates points based on levels of his or her risk factors. The cumulative points total is associated with a prediction, such as the predicted probability of treatment failure in the future. Nomograms can improve research design, and well-designed research is crucial for the creation of accurate nomograms. Nomograms are important to research design because they can help identify the characteristics of high-risk patients while highlighting which interventions are likely to have the greatest treatment effects. Nomograms have demonstrated better accuracy than both risk grouping systems and physician judgment. This improved accuracy should allow researchers to design intervention studies that have greater statistical power by targeting the enrollment of patients with the highest risk of disease. In addition, nomograms rely on well-designed studies to validate the accuracy of their predictions.
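
To make the points-based mechanism concrete, the sketch below shows how such a scoring system might be computed. The risk factors, point values, and points-to-probability table are invented for illustration; in a real nomogram, both would come from a fitted statistical model.

```python
# A minimal sketch of a points-based nomogram. All factor names, point
# values, and probabilities below are hypothetical, not from any
# published model.

POINTS = {
    "age_over_60": 30,
    "smoker": 25,
    "prior_treatment_failure": 45,
}

# Hypothetical lookup: cumulative points -> predicted probability of
# treatment failure. A fitted model would supply this mapping.
PROBABILITY_TABLE = [
    (0, 0.05),
    (25, 0.10),
    (50, 0.25),
    (75, 0.45),
    (100, 0.70),
]

def predict_failure_probability(patient: dict) -> float:
    """Sum the points for each risk factor the patient has, then map
    the cumulative total to a predicted probability."""
    total = sum(pts for factor, pts in POINTS.items() if patient.get(factor))
    prob = PROBABILITY_TABLE[0][1]
    for threshold, p in PROBABILITY_TABLE:
        if total >= threshold:
            prob = p
    return prob

# Example: a smoker over 60 with no prior treatment failure
# accumulates 55 points, which maps to a 0.25 probability.
print(predict_failure_probability({"age_over_60": True, "smoker": True}))
```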

Deriving Outcome Probabilities

All medical decisions are based on the predicted probability of different outcomes. Imagine a 35-year-old patient who presents to a physician with a 6-month history of cough. A doctor in Chicago might recommend testing for asthma, a common cause of chronic cough. If the same patient presented to a clinic in rural Africa, the physician might instead test for tuberculosis. Both physicians would be making sound recommendations based on the predicted probability of disease in their locale; that is, both are making clinical decisions based on the overall probability of disease in the population. Such decisions are better than arbitrary treatment, but they treat all patients the same.

A more sophisticated method for medical decision making is risk stratification. Physicians frequently assign patients to different risk groups when making treatment decisions. Risk group assignment will generally provide better predicted probabilities than estimating risk according to the overall population. In the previous cough example, a variety of other factors that physicians are trained to explore (e.g., fever, exposure to tuberculosis, and history of tuberculosis vaccination) might affect the predicted risk of tuberculosis. Most risk stratification performed in clinical practice, however, is based on rough estimates that simply order patients into levels of risk, such as high, medium, or low. Nomograms provide precise probability estimates that generally yield more accurate assessments of risk.

A problem with risk stratification arises when continuous variables are turned into categorical variables. Physicians frequently commit dichotomized cutoffs of continuous laboratory values to memory to guide clinical decision making. For example, blood pressure cutoffs are used to guide treatment decisions for hypertension. Imagine a new blood test called serum marker A. Research shows that tuberculosis patients with serum marker A levels greater than 50 are at increased risk for dying from tuberculosis. In reality, a patient with a value of 51 likely has a risk similar to that of a patient with a value of 49. Yet under the cutoff, a patient with a value of 49 would be assigned the same low risk as a patient whose serum level of marker A is 1. Nomograms allow predictor variables to be maintained as continuous values while numerous risk factors are considered simultaneously. In addition, more complex models can be constructed that account for interactions between risk factors.
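
The sketch below illustrates this advantage with a hypothetical logistic model for the serum marker A example; the coefficients are invented for demonstration, not estimated from study data. Keeping the marker continuous (and adding a marker-by-fever interaction) distinguishes values of 49 and 51, whereas the dichotomized rule cannot.

```python
import math

# Illustrative logistic model for the serum marker A example.
# All coefficients are hypothetical; a real nomogram would use
# values estimated from study data.

def risk_continuous(marker_a: float, has_fever: bool) -> float:
    """Keep marker A continuous and include a marker-by-fever interaction."""
    linear = -4.0 + 0.05 * marker_a + 0.8 * has_fever + 0.02 * marker_a * has_fever
    return 1 / (1 + math.exp(-linear))

def risk_dichotomized(marker_a: float) -> float:
    """Dichotomize marker A at 50, discarding within-group variation."""
    return 0.40 if marker_a > 50 else 0.10

# The continuous model gives smoothly varying risks for 1, 49, 51, 100;
# the dichotomized rule treats 49 like 1 and 51 like 100.
for value in (1, 49, 51, 100):
    print(value, round(risk_continuous(value, False), 3), risk_dichotomized(value))
```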

...
