
Artificial neural networks (ANNs) are pattern detection and classification tools loosely modeled on networks of neurons in human or animal brains. The term neural network is used in contexts such as GIS, where there is unlikely to be any confusion with actual physiological neural networks. This entry outlines the basic concept behind the design of neural networks and reviews their network structures before turning to more practical matters, such as network training and issues relevant to their use in typical applications.

Background and Definition

A neural network consists of an interconnected set of artificial neurons. Each neuron has a number of inputs and one output and converts combinations of input signal levels to a defined output signal level. A neuron effectively represents a mathematical function f that maps a vector of input values x to an output value y; hence y = f(x1, x2, …, xn) = f(x). Typically, the output from a neuron is a weighted sum of the input signals, so that y = Σi wixi, where w = [wi] is a vector of weights associated with each input to the neuron. Often, a threshold is applied to the simple weighted sum so that the final output is a binary (on-off) signal.
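As an illustration, the following is a minimal Python sketch of such a thresholded neuron; the weight and threshold values are arbitrary and chosen only to show the computation, not drawn from the source.

```python
# Minimal sketch of a single artificial neuron: a weighted sum of
# inputs followed by a hard threshold, producing a binary output.

def neuron(x, w, threshold=0.5):
    """Return 1 if the weighted sum of inputs exceeds the threshold, else 0."""
    weighted_sum = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if weighted_sum > threshold else 0

# With equal weights and a threshold requiring both inputs to be
# active, this neuron behaves like a logical AND.
print(neuron([1, 1], [0.3, 0.3]))  # 1  (0.6 > 0.5)
print(neuron([1, 0], [0.3, 0.3]))  # 0  (0.3 <= 0.5)
```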

This simple model of a neuron was first proposed by Warren McCulloch and Walter Pitts in the 1940s. While the relationship to conceptual models of physiological neurons was originally a close one, subsequent neural network developments have favored designs that improve performance on classification and other tasks over realistic representation of brain function.

Network Structure

Many network interconnection structures are possible, but most are characterized by the arrangement of neurons into a series of layers, with the outputs from one layer being connected to the inputs of the next. The first layer in a network is called the input layer, and the final layer is the output layer. In typical networks, there are one or more hidden layers between the input and output layers.

A basic distinction between network structures is that feed-forward networks allow signals to proceed only in one direction through the network from the input layer through any hidden layers to the output layer, while in recurrent networks, the outputs from later layers may be fed back to previous layers. In fully recurrent networks, all neuron outputs are connected as inputs to all neurons, in which case there is no layered structure.
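To illustrate the feed-forward case, here is a small Python sketch of a forward pass through one hidden layer; the layer sizes and weight values are arbitrary assumptions used only to show that signals move strictly from one layer to the next.

```python
# Sketch of a feed-forward pass: signals flow from the input layer,
# through one hidden layer, to the output layer, never backward.

def layer(inputs, weights, threshold=0.0):
    """Compute each neuron's thresholded weighted sum for one layer."""
    return [
        1 if sum(w * x for w, x in zip(neuron_weights, inputs)) > threshold else 0
        for neuron_weights in weights
    ]

hidden_weights = [[0.5, -0.2], [0.1, 0.9], [-0.4, 0.3]]  # 3 hidden neurons, 2 inputs each
output_weights = [[0.6, 0.6, -0.3]]                      # 1 output neuron, 3 inputs

inputs = [1, 0]
hidden = layer(inputs, hidden_weights)   # outputs of the hidden layer
output = layer(hidden, output_weights)   # hidden outputs fed forward
print(hidden, output)                    # [1, 1, 0] [1]
```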

In mathematical terms, whatever the interconnection structure of the network, the overall effect is that the full network can represent complex relationships between input and output values. Because each xi in the input to the simple neuron equation y = f(x) is itself the output from other neurons, each neuron produces a weighted sum of weighted sums. There are close parallels between the interconnection weights of a neural network and spatial interaction models, with each neuron representing a single location and the interaction weights between neurons equivalent to the spatial interaction matrix. This structural similarity and its implications have been explored most thoroughly in the work of Manfred Fischer and his collaborators.
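To make this composition concrete, a notational sketch (the layer superscripts are an added convention, not from the source): for a network with a single hidden layer, the output can be written y = f(Σj wj(2) f(Σi wij(1) xi)), where wij(1) is the weight from input i to hidden neuron j and wj(2) the weight from hidden neuron j to the output. Each application of f wraps a weighted sum around the weighted sums computed by the previous layer.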

...
