
Once IBM committed to the computerization of its product lines, it put itself on a path toward playing a large role in nanotechnology. Innovation in computing is closely tied to miniaturization, so at some point that innovation would have to take place at the nanoscale. Big Blue, with its outsize role in the history of computing (the computer industry was once characterized as “IBM and the Seven Dwarfs”) and its enormous research capacity, began looking ahead to the nanoelectronics era by the early 1970s at the latest. Yet IBM did not merely ride Moore's Law into the nanoscale—it was often far out ahead of Moore's Law, calling for the adoption of seemingly far-fetched techniques that today are the mainstays of nanotechnology research.

Corporate Research in the Golden Age

The trend toward miniaturizing electronic components has been in place since before World War II, with miniature vacuum tubes for proximity fuses and field radios. With the invention of the transistor at Bell Laboratories in 1947, though, miniaturization took on a new urgency. Reducing the size of solid-state components would make them faster and potentially cheaper. The tendency of the smallest commercial components to get smaller at a steady exponential rate is known as Moore's Law, after an observation by Gordon Moore of Fairchild Semiconductor (later Intel) in 1965.

Even before 1965, electronics firms had noticed this trend and wondered how to keep up with it. Silicon Valley firms such as Fairchild and, later, Intel decided—as the nickname for their region indicates—to do so by sticking with silicon transistors manufactured with optical lithography. They did so partly because the most successful Silicon Valley firms had very small research units focused almost exclusively on a narrow set of capabilities. They were therefore forced to act in concert with large networks of suppliers, and so had to innovate in small, steady, incremental steps, avoiding large, disruptive changes in manufacturing as much as possible.

This innovation model was mirrored, to some extent, by Asian electronics firms, but differed considerably from that of firms in Europe and the northeastern United States. There, companies like RCA, Westinghouse, AT&T, IBM, and Philips had large, autonomous research units. These companies often had oddly diverse arrays of corporate divisions (RCA, for instance, owned the Hertz rental car company); corporate research was one division among many, expected to pay for itself but not overly burdened with communicating to the other divisions.

This was particularly possible in Cold War America because the federal government funded so much basic research. Companies like IBM could secure federal grants, but they also benefited from tax incentives that rewarded corporate spending on basic research. Thus, research on topics very distant from IBM's immediate technological needs could serve as a separate profit center within the company.

This resulted in a golden age for corporate research, when companies like IBM and AT&T hired the world's leading architects to build lavish research “campuses” in pastoral retreats so that they could lure the world's leading scientists to work on topics of their own choosing. In fields such as surface science and information theory, IBM Research and Bell Laboratories could amass enough research talent to eclipse any university or government laboratory.

...
