
Image texture refers to the subtle spectral differences present in digital or analog images and is defined as the spatial variation in pixel intensities or brightness values. The two most common ways to compute image texture are through first-order and second-order gray-level statistics. First-order texture statistics of local areas include the mean, standard deviation, and variance, among others. When computing these values, a moving window (e.g., a 3 × 3 pixel window) computes the chosen statistic over the neighborhood, and the result is assigned to the center pixel. For example, a 3 × 3 moving window can be used to calculate the standard deviation for each center pixel. This process identifies the areas of the image with the greatest standard deviation (i.e., the greatest local spectral variability) and accentuates edges within the image. Such information is usually not readily apparent when simply examining the image visually.
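As a minimal sketch, the moving-window computation can be expressed in a few lines of Python (assuming NumPy and SciPy are available; the band array, window size, and random seed here are purely illustrative):

import numpy as np
from scipy import ndimage

def local_std(band, window=3):
    """First-order texture: moving-window standard deviation,
    assigned to each window's center pixel."""
    band = band.astype(float)
    mean = ndimage.uniform_filter(band, size=window)          # local mean
    mean_sq = ndimage.uniform_filter(band ** 2, size=window)  # local mean of squares
    # var = E[x^2] - E[x]^2; clip tiny negatives from floating-point error
    return np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))

# Illustrative 8-bit band; bright pixels in the texture image mark edges
band = np.random.default_rng(0).integers(0, 256, (64, 64))
texture = local_std(band, window=3)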

Second-order image texture measures represent a higher-order set of image texture information. These measures are based on gray-level co-occurrence matrices (GLCMs), which record how often pairs of pixel values co-occur at fixed orientations and distances, thereby capturing the spatial dependency of spectral values. GLCM-derived texture measures have been widely adopted by the remote-sensing community, and typical measures used to extract textural information are angular second moment, correlation, entropy, and homogeneity. Jensen and Gatrell (2005) demonstrated that image texture homogeneity was positively correlated with socioeconomic and population variables.
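A hedged sketch of the second-order computation, using the graycomatrix and graycoprops functions from scikit-image (spelled greycomatrix/greycoprops in older releases): the synthetic band and the 32-level quantization are illustrative assumptions, and entropy is computed directly from the normalized matrix because that property name varies across library versions:

import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Illustrative band quantized to 32 gray levels to keep the GLCM small
rng = np.random.default_rng(0)
band = (rng.integers(0, 256, (64, 64)) // 8).astype(np.uint8)

# Co-occurrence of pixel pairs one pixel apart at 0, 45, 90, and 135 degrees
glcm = graycomatrix(band, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=32, symmetric=True, normed=True)

asm = graycoprops(glcm, "ASM")                  # angular second moment
correlation = graycoprops(glcm, "correlation")
homogeneity = graycoprops(glcm, "homogeneity")

# Entropy computed directly from the normalized matrix
entropy = -np.sum(glcm * np.log2(glcm, where=glcm > 0,
                                 out=np.zeros_like(glcm)))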

Most remote-sensing/image-processing software packages contain algorithms to measure both first- and second-order image texture. These algorithms usually allow the user to control many parameters. Some of the parameters that the user will typically control when computing image texture are the kind of texture measure (standard deviation, homogeneity, and so on); the spectral band (e.g., the near-infrared band); the neighborhood pixel window size (e.g., 3 × 3, 5 × 5, or 7 × 7, although larger windows may be selected); and the quantization level of the output texture image (8 bit, 12 bit, etc.). It is often necessary to experiment with these parameters until an optimal solution is found, as in the sketch below.
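The sketch below illustrates this kind of experimentation for one parameter, the window size, reusing the first-order standard deviation computation from above; the synthetic 12-bit near-infrared band and the 8-bit output rescaling are assumptions for illustration only:

import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
nir = rng.integers(0, 4096, (128, 128)).astype(float)  # synthetic 12-bit NIR band

for window in (3, 5, 7):
    mean = ndimage.uniform_filter(nir, size=window)
    mean_sq = ndimage.uniform_filter(nir ** 2, size=window)
    std = np.sqrt(np.clip(mean_sq - mean ** 2, 0.0, None))
    # Rescale the texture image to an 8-bit output quantization
    out = np.uint8(255 * (std - std.min()) / np.ptp(std))
    print(f"{window} x {window} window: mean texture {std.mean():.1f}")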

Image-processing algorithms, such as most forms of supervised and unsupervised classification, are usually single-pixel operations that do not consider image texture information in the classification process. However, researchers sometimes compute texture images and then incorporate them into the classification process as additional bands or as variance thresholds that define clusters (see Lillesand, Kiefer, & Chipman, 2004). More recently, new image classification algorithms, such as feature extraction and other object-oriented methods, have begun to incorporate texture directly into the classification process.
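One common arrangement, sketched below under illustrative assumptions (synthetic bands, with scikit-learn's KMeans standing in for an unsupervised classifier), is to stack a texture image with the spectral bands and cluster each pixel on the combined feature vector:

import numpy as np
from scipy import ndimage
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
red = rng.integers(0, 256, (64, 64)).astype(float)  # illustrative spectral bands
nir = rng.integers(0, 256, (64, 64)).astype(float)

# First-order texture of the NIR band, used as an additional "band"
mean = ndimage.uniform_filter(nir, size=3)
texture = np.sqrt(np.clip(ndimage.uniform_filter(nir ** 2, size=3) - mean ** 2,
                          0.0, None))

# Stack spectral and texture features, then cluster each pixel
features = np.stack([red, nir, texture], axis=-1).reshape(-1, 3)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(features)
classified = labels.reshape(red.shape)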

Ryan R. Jensen

Further Readings

Haralick, R. (1979). Statistical and structural approaches to texture. Proceedings of the IEEE, 67, 786–804. http://dx.doi.org/10.1109/PROC.1979.11328
Haralick, R. (1986). Statistical image texture analysis. In T. Young & K. S. Fu (Eds.), Handbook of pattern recognition in image processing (pp. 247–280). New York: Academic Press.
Jensen, J. (2005). Introductory digital image processing: A remote sensing perspective (3rd ed.). Upper Saddle River, NJ: Prentice Hall.
Jensen, R., & Gatrell, J. (2005). Image homogeneity and urban demographics: An integrated approach to applied geo-techniques. In R. Jensen, J. Gatrell,

...
