
There are two aspects to computational limitations in decision making. The first is the idea that the human brain is computational, that optimal decisions require lengthy computations, and that human computational capacity is limited; human decision performance therefore falls short of optimal, and humans must use alternative strategies (heuristics and the like) to make decisions.

The second aspect is that computers, too, are limited in recommending optimal decisions, because the algorithms required by necessity take too much time. So computers too must use alternative approaches.

The primary dialectic in decision making pits the rational-man model, where decisions are made in accordance with the goal of maximizing utility, against the natural-man model, where decisions are made in a way that has been evolutionarily designed to best fit our environment. One engine of this dialectic is the issue of how much computational power is available to the decision maker. Computer scientists have attempted to model both sides in their machines, with results that have important implications for decision makers and those trying to help them. On the other side, psychologists have tried to apply computational models to observed and experimentally induced behavior.

As steam engines provided the motivating analogy for 19th-century science beyond mechanical engineering, computation provides the current leading analogy for many fields, including theories of the mind. Colloquially and even scientifically, authors discuss the brain as if it were a von Neumann computer: We separate thinking memory from storage memory, and we ask what operations our thinking self can perform with how many memory “cells.” Research reports therefore need to make clear whether they mean computation in its strict sense or in its analogous sense. This entry first addresses machine-based computational limitations and then explores these difficulties in human cognition.

Machines

The field of artificial intelligence is the primary field where computational models of decision making get computer scientists' full attention and where their models are tested. The traditional areas of artificial intelligence—game playing, visual understanding, natural-language processing, expert advice giving, and robotics—each require processing data taken from the environment to produce a conclusion or action that embodies knowledge and understanding. The nature of “computation” differs in each case. In speech recognition, the current leading methods involve numerical calculation of probabilities. In game playing, the methods call for deriving and evaluating many alternative game configurations. In expert systems—beginning with the program MYCIN, whose goal was to support the management of fever in a hospitalized patient—the computer explores pathways through rules. Thus, the computations involve a mix of quantitative and “symbolic” processing.
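The idea of exploring pathways through rules can be made concrete with a minimal rule-chaining sketch. The rules and fact names below are hypothetical illustrations, not MYCIN's actual knowledge base, and MYCIN itself used backward chaining with certainty factors; this simpler forward-chaining loop only shows how conclusions emerge by traversing rules until no rule fires.

```python
# Hypothetical rules for illustration only (not MYCIN's actual rule base):
# each rule maps a set of required facts to a single new conclusion.
RULES = [
    ({"fever", "neutropenia"}, "suspect_bacterial_infection"),
    ({"suspect_bacterial_infection", "gram_negative_rods"}, "consider_pseudomonas"),
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known,
    adding its conclusion, until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts
```

Starting from the observed facts `{"fever", "neutropenia", "gram_negative_rods"}`, the loop derives both intermediate and final conclusions, tracing a pathway through the rules.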

In computer science, computational limitations refer to two primary resources: time and space. Space refers to how much computer memory (generally onboard RAM) is needed to solve a problem. Time refers not only to the amount of time needed to solve a problem, in seconds or minutes, but also to the algorithmic complexity of problems. Problems whose solution time doubles when the amount of data input into the solver doubles have linear complexity; problems whose solution time quadruples have quadratic complexity. For example, inverting a matrix that might represent transition probabilities in a Markov model of chronic disease has cubic complexity, and sorting a list has between linear and quadratic complexity. Such problems are said to be polynomial (P) in the size of their data inputs. On the other hand, problems whose solution time doubles when only one more piece of information is added have exponential complexity, and their solution takes the longest time (in the worst case). For instance, enumerating by brute force all potential treatment strategies in a specific clinical problem where order matters, such as the question of which tests should be done in which order (the diagnosis of immunological disease being a classic case, given the multitude of available tests), leads to an exponential number of pathways. If a single new test becomes available, every pathway would have to consider using that test or not, thereby doubling the number of possibilities to be considered. If these strategies are represented as decision trees, the number of terminal nodes would double.
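The doubling argument can be checked directly. If, ignoring test order for simplicity, each candidate strategy either uses or skips each available test, there are 2^n strategies for n tests, so adding one test doubles the count (accounting for order would only enlarge it further). A minimal sketch of the brute-force enumeration:

```python
from itertools import product

def count_strategies(n_tests):
    """Brute-force enumeration: every use/skip assignment over the
    available tests is one candidate strategy, giving 2**n_tests
    pathways. Test order is ignored here for simplicity."""
    return sum(1 for _ in product((True, False), repeat=n_tests))

# Each additional test doubles the number of pathways to consider:
for n in range(1, 6):
    print(n, count_strategies(n))
```

Even at modest n the enumeration becomes infeasible (20 tests already yield over a million pathways), which is why brute force must give way to alternative approaches.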

...
