Society Online: The Internet in Context


Edited by: Philip N. Howard & Steve Jones

  • Part I: Social Capital, Community, and Content
  • Part II: Wired News and Politics Online
  • Part III: Economic Life Online
  • Part IV: Culture and Socialization Online
  • Part V: Personal and Global Contexts of Life Online

  Dedication

    To the people who have given us a great context: Alexandra, Elaine, Jodi, L. Z., and Ross


    Acknowledgments


    This book is the result of intense collaboration across the social sciences, and we are proud to have been humble nodes in an unusually creative network of sociologists, economists, political scientists, psychologists, historians, and legal, management, and communication scholars. Whether or not technology itself can cause collaborative constructive interaction, the contributors to this volume study technology with a collaborative constructive instinct.

    The team at the Pew Internet and American Life Project, particularly Harrison “Lee” Rainie and Amanda Lenhart, helped to feed our research instinct with raw data and intelligent interpretation. Margaret Seawell and her colleagues at Sage Publications helped to keep us on track, on topic, and on time, for which we are very grateful.

    Even though this volume is about society online, there is nothing about pornography in this collection. Quite deliberately, a couple of our authors chose to refer to a fake Web site——as a pseudonym for a fictional company they were analyzing. Unfortunately, a fellow who works as a real estate agent by day let us know that he believes he holds copyright and trademark rights to a business that will soon be using this URL. Despite the legal precedents of academic “fair use,” our contributors agreed to change their pseudonym. This volume still has nothing about pornography in it, and we wish that individual success with his venture.

    Philip Howard thanks Eszter Hargittai, Alexis Cabrera, Tema Milstein, Christian Sandvig, David Silver, Jennifer Stromer-Galley, and Francisco de Zarate for the crucial support they gave him in this editing project. He gratefully acknowledges a fellowship with the Pew Internet and American Life Project during its first funding term, a fellowship that enabled him to play with numbers. Howard also acknowledges being too deeply embedded in cafés and libraries such as the following: Tryst in Washington, D.C., where he failed to look cool; Dos Gringos in Adams Morgan, with a great view of a crazy street and where “Blelvis” (the black Elvis) would serenade him home after a long workday; Panini Panini in Rogers Park, a plain late-night café with the best macchiato in Chicago; Ennui, where the friendly Kat badgered him about his Burning Man pictures until he finally brought them in; Unicorn in Evanston, where he could actually do a little work; the Bourgeois Pig in Lincoln Park, where he was allowed to sleep; Intelligentsia, where good coffee and Dave Grazian could often be found; and the unbeatable Future's on Bloor Street in Toronto. The Ryerson Library in the Art Institute of Chicago was a gloriously calm environment for real editing. Northwestern University should be deeply ashamed for reneging on its membership association with the library. Sonny's, in a hidden corner of the building next door, served lousy coffee in Styrofoam cups, but the ginger chicken certainly was a great staple lunch once a week for a year. Howard's real home context, with Alexandra, Elaine, and Ross, is now stretched across several time zones but is still a loving source of support. Aside from cafés and libraries, the best place to spend a summer editing a book is on a Spanish beach.

    Steve Jones thanks the many students and colleagues at the University of Illinois at Chicago, particularly those in the Department of Communication, the Great Cities Institute, and the Electronic Visualization Lab, with whom and from whom he always learns much about media (new and old) and the world. Unlike Howard, Jones is monotonous in his choice of cafés (monocafeistic, one might even say), and he thanks Sun and Nick at Café Express on Main for sticking with it despite Star*ucks moving in across the street. He also thanks L. Z. and looks forward to many years together.

    The contributors to this book deserve special thanks. They made our job as editors a pleasure, and their work and ideas were an inspiration to us.

    Philip N. Howard, Beaches of Tenerife, Spain
    Steve Jones, Office overlooking the Eisenhower Expressway, Illinois


    Harrison “Lee” Rainie, Pew Internet and American Life Project

    Internet communication began with a computer crash and unmemorable twaddle. The first attempted transfer of information packets in 1969 was not launched with the same portentous thunder as was Samuel Morse's first telegraph message in 1844 (“What hath God wrought?”). Rather, Charley Kline, an engineer at the University of California, Los Angeles (UCLA), froze his computer in 1969 when he began typing “L-O-G” (on his way to “L-O-G-I-N”) to start the file transfer program. Programmers fixed the glitch quickly, and the file sharing began.

    E-mail did not come into being until 1971, and again there was no self-puffery in the inaugural text. Indeed, there was not even an effort to match the practical tone of Alexander Graham Bell's initial phone call in 1876 (“Mr. Watson, come here; I want you”). Instead, Ray Tomlinson, an engineer for a U.S. Defense Department contractor, sent a test message from one computer to another that was sitting less than 5 feet away. He thinks the text was probably “QWERTYUIOP,” although he cannot remember for certain. It was a milestone moment that was not self-evident. Tomlinson did not think much of his lovely little hack that allowed people using computers at remote sites to write electronic notes to each other. Indeed, the one thing for which Tomlinson is remembered is that he picked “@” as the locator symbol in electronic addresses.

    Soon enough, however, the extreme humility that characterized the dawn of computer network communication was replaced by an orgy of extravagant claims about the revolutionary power of the internet. The predictions from technologists, government leaders, entrepreneurs, the business press, and investment carnival barkers were voluminous and unsparing. No aspect of people's social lives, business arrangements, workplace culture, media power, civic engagement, or learning environments would be immune from the internet's radical influence. The cultural gatekeepers were fond of asserting, “The internet changes everything.”

    The Pew Internet and American Life Project was born of the idea that these were testable propositions. Rebecca Rimel, president of the Pew Charitable Trusts, was struck by the fact that many of the debates about the impact of the internet were taking place without reference to data and basic social science research. She and the board of trustees of the foundation hoped that a research project could play a useful role in the disputes by providing nonpartisan facts and analysis about the social impact of people's internet use. The foundation provided generous support for a research agenda that would monitor Americans' use of the internet and focus on several aspects of internet use that were not major concerns of the many proprietary research firms that concentrated on e-commerce. Those areas of emphasis included how people's internet use affected their interactions with family, friends, and others; their involvement with various communities; their health care; their educational experiences; their civic and political lives; and their workplaces.

    Much of the fruit of the Pew Internet and American Life Project's work is explored here, as are data from several other prominent research projects, including the General Social Survey, the HomeNet Study, the Survey2001 Project, PoliticalWeb, and the UCLA Center for Communication Policy. In the course of doing telephone interviews with more than 60,000 people during the past 30 months, the Pew project learned that internet use is helping Americans to share and acquire knowledge, make important health care decisions, deepen and extend their social networks, access cultural material, probe new corners of the planet, pursue their passions and hobbies, become more productive, gather up more consumer information, and entertain themselves more vividly. At the same time, we have learned that many users head in different “directions” online. Although Americans use online tools to connect to distant people and groups that share their interests, they also use those same tools to become more connected locally with the organizations and people in the places where they live. One overall message from respondents to our surveys was that the use of the internet is good for building new communities as well as for deepening existing relationships.

    Of course, ties that bind can be helpful as well as harmful. For example, the same technology that helps those who suffer from rare cancers find each other and form life-enhancing support groups can just as easily be used by pedophiles to encourage each other and construct sophisticated rationales for their behavior. Not surprisingly, internet users have finessed the question of whether the internet is a good or bad thing. Their attitude can be summed up as follows: “I'm okay, they're not.” Wired Americans believe that their own use of the internet benefits them and is socially enhancing, although they worry that others may be doing ugly, criminal, perverted, or self-destructive things online. Worse, wired Americans worry that all of the temptations of the virtual world can lure the impressionable—that is, everybody else—to the dark side. This is one of the many reasons why it is so hard to make policy that addresses the wide range of internet-spawned concerns. People do not want their own access to internet information and services curtailed, but they hope that something can be done to keep others from harming the innocent or even themselves.

    To a degree, those who think in these “me or them” terms are right. It is easy to spot in the Pew Internet and American Life Project's data that context matters a great deal in the way in which people use the internet and how they feel about it. Various groups of people—by gender, age, race, income bracket, educational level, locale, or experience level—use the internet in different ways. People take to the online world the things that interest and motivate them in the offline world, so the variety and meaning of their experiences on the internet reflect that diversity. Moreover, patterns of use evolve over time. The longer people are online, the more likely they are to venture into new activities, explore new relationships, and rely on the internet to help them complete crucial tasks or make major decisions. This highlights one of the major continuing lines of research by the Pew project: Internet use and the impact of going online are highly contingent on the rationale for going online and circumstances of each cyber venture. We know that for most people, internet use enhances, extends, and supplements what they do offline. However, we still have much to learn about what internet use displaces in people's lives and what it motivates them to do.

    We are delighted that some of the scholars who have written for this volume have found our work at the Pew Internet and American Life Project to be useful. We hope that they are not alone. Our data and reports are available free of charge through our Web site. This seems to be the least that a project like this can do to contribute to the open source spirit of the internet's most fevered partisans. We hope that others will find new insights in our data that we ourselves have missed or will feel free to challenge our interpretations of what we have seen. Maybe the collective intelligence of this network of researchers will eventually produce a reasoned and fact-buttressed judgment about the impact of a communications era that humbly began with a computer crash and a message that read as if a monkey had been banging on a keyboard.

    Harrison “Lee” Rainie, Washington, D.C.

    Prologue: The Case for Multimethod Research: Large Sample Design and the Study of Life Online

    James Witte, Clemson University

    Social scientists, particularly survey researchers, were slow on the uptake when it came to considering the impact of the internet on society. Once its significance crossed the academic radar screen, however, the scrutiny became intense and the debate grew loud and (at times) bitter. The discussion took on a particularly acerbic tone with Norman Nie's widely reported and cited statement (from The New York Times and The Washington Post to CNN and National Public Radio): “The internet could be the ultimate isolating technology that further reduces our participation in communities even more than television did before it” (Nie & Erbring, 2000b, p. 19). Perhaps Nie's strongest critic, sociologist Amitai Etzioni, flatly dismissed Nie's findings: “The internet, like other new technologies, changes our lives, and not all for the better. However, claims that it increases our social isolation are wholly unsupported, especially by this study” (Etzioni, 2000, p. 42). Nie and his collaborator, Lutz Erbring, began their response by referring to Etzioni as being “in the sunset of a distinguished career.” Toward the end of their response, they concluded, “Professor Etzioni seems either more confused than we thought possible, or simply prepared to dispense with logic just so he can maul us coming and going, or both” (Nie & Erbring, 2000a, p. 45).

    As the war of words raged, an important point was often overlooked: The study on which the debate turned was not only a study of the internet but also a study conducted using the internet. Nie and Erbring's negative assessment derived from their InterSurvey sample, a national “random sample” of 4,113 American adults in 2,689 households. Respondents were provided with free internet access and WebTV connections to participate in the survey. Their widely cited findings were based on those respondents who had internet access prior to the WebTV connection installed by InterSurvey (Nie & Erbring, 2000b). Indeed, throughout the discussion of their work, Nie and Erbring repeatedly drew on the quality of their sample as the feature that set their online research apart from others. Although the InterSurvey sample offered WebTV access to a random sample of individuals, there is no guarantee that those who actually participated in the survey constituted a random sample. As in the case of a telephone survey, a variety of unobserved selection processes may dramatically alter the random character of the sample. For example, an individual with established and regular patterns of internet use and online interaction would presumably be far less willing to adopt a new method of access than would a less committed user.

    This point was amply illustrated by data from Survey2000, a Web-based convenience sample of visitors to the National Geographic Society (NGS) Web site (Witte & Howard, 2002). On the whole, this sample looked nothing like the anomic and isolated online world described by Nie and Erbring. As a group, these individuals had strong prosocial attitudes, were socially active, and were politically engaged. Because Survey2000 respondents used a variety of means to connect with the internet, it is possible to compare WebTV respondents with non-WebTV respondents within the Survey2000 group. Based on this comparison, the WebTV users in the Survey2000 sample closely resembled the inhabitants of the place Nie and Erbring described. Even controlling for gender, age, educational attainment, employment status, and household composition, significantly lower levels of social participation were found among WebTV respondents. Moreover, these differences were not small, outweighing the impact of many of the demographic control variables.

    The point of this analysis is not to say that one author or pair of authors—Nie and Erbring or Etzioni—was more correct than another. Nor would one want to conclude that WebTV leads to social isolation. Indeed, it is far more plausible that a subtle selection bias came into play, leading less sociable individuals, regardless of demographics, to be WebTV users in the Survey2000 sample but also among the randomly selected individuals who were offered the WebTV as part of the InterSurvey sample. Instead, the aim here is to highlight the importance of sampling, particularly with a novel means of data collection such as a Web survey. For a number of reasons, the internet is likely to play an important role in the future of survey research. However, as survey research goes online, there are also a number of reasons to proceed with caution. This prologue begins with an overview of sampling issues particularly relevant to online surveys and studying life online. The empirical section of the prologue is organized around the 2000 General Social Survey (GSS) and Survey2001, a follow-up study to the National Geographic Society's Survey2000. After an introduction to the design, content, and data collection strategies associated with each of these surveys, the surveys are compared with an eye to several themes beginning with sample demographics. Subsequent sections consider the relationship between sampling and substantive results, specifically internet use and environmental issues.

    Analysis of the Problem: Issues in Sampling

    Web-based survey research represents the most recent addition to a growing repertoire of computer-assisted survey tools dating back to the early 1970s (Couper & Nicholls, 1998; Dillman, 2000). On the one hand, there are clear advantages to a Web-based approach. To begin with, as with all forms of computer-assisted survey research, a Web-based instrument allows for complicated skip patterns that tailor the survey to the respondent and eliminate redundant or irrelevant questions. Moreover, it does so with a degree of accuracy and transparency unmatched by computer-assisted telephone interviewing (Nicholls, Baker, & Martin, 1997). At the same time, computer-based systems, including Web-based surveys, may include detailed help functions to guide and assist the respondent to a degree that is not possible with paper-and-pencil, self-administered formats (Dillman, 2000; Jenkins & Dillman, 1997). Programming a Web-based survey can be costly, particularly if the instrument involves complex skip patterns or elaborate design elements. However, this cost is fixed. Unlike face-to-face or telephone surveys, increasing the sample size is not associated with added interviewer costs. As with other computer-assisted formats, a Web-based survey also eliminates the time and expense of data entry because this is performed by the respondent in the course of the survey (Baker, Bradburn, & Johnson, 1995). In addition, as with other self-administered formats, a Web-based survey potentially reduces interviewer effects and permits a degree of anonymity not found in survey modes that depend on respondent-interviewer interaction (Aquilino, 1994; Aquilino & LoSciuto, 1990; Tourangeau & Smith, 1998; Turner, Forsyth, et al., 1998; Turner, Ku, et al., 1998). Finally, a Web-based survey may draw on the multimedia capabilities of the internet to yield instruments that collect data in an engaging and interactive manner.

    On the other hand, many of these advantages are potential liabilities. Although interviewer effects may bias the data, a measure of social interaction as part of the survey process may prompt the respondent to complete the survey and to provide thoughtful and accurate data (Burton & Blair, 1991). Similarly, an engaging and interactive instrument may appear to be trivial or game-like, leading respondents to provide unreliable or incomplete data (Kiesler & Sproull, 1986). Moreover, even a well-developed “help” function might not respond as flexibly to respondent queries as would a trained interviewer. Most important, questions of sampling bias and unknown selection probabilities present real limits to inferential claims based on a Web-based survey sample.

    The potential sample bias inherent in the use of the Web to collect data is generally identified as the critical methodological issue facing Web surveys. As a first reaction to a survey hosted by the NGS Web site, the reader may call to mind the famous Literary Digest election poll that predicted Alfred Landon's “victory” over Franklin Roosevelt in the 1936 U.S. presidential race. This poll had a sample size of more than 2 million but still came to the wrong conclusion (Dillman, 2000; Lohr, 1999). But the similarities run only skin deep. The Literary Digest poll made no effort to assess the representativeness of its sample, whereas Survey2001—like its predecessor, Survey2000—explicitly did so. Although one ought to be concerned about the limitations of a Web survey sample, one also should keep in mind that serious sample bias issues—whether associated with questions of coverage, questions of nonresponse, or a combination of factors—confront all forms of survey research (Witte & Howard, 2002). Little has changed since Smith (1983) reviewed various means to address nonresponse in the GSS and other surveys. Smith concluded, “There is no simple, general, accurate way of measuring nonresponse bias” (p. 402).

    Methodologically, the goal of survey research is to collect data on a sample that represents a population. Randomness does not guarantee representativeness; rather, it provides the means to quantify the level of confidence with which one can say that the sample does not represent the population. Survey2001 did not yield a random sample, and we do not “know” the selection probabilities for sample members. However, this does not mean that the survey cannot yield representative social science data. Although we do not “know” the selection probabilities, our data allow us to “estimate” these probabilities. The survey collected data on standard demographic characteristics (e.g., gender, age, race, education), and combinations of these attributes for the sample can be compared with other data sources. The selection bias is also likely to be correlated with certain factors, such as attitudes and values toward community and culture, that cut across standard demographic variables. For this reason, a number of items used in Survey2001 were based on other studies, including the 2000 GSS and the widely used New Environmental Paradigm (NEP) instrument, that depend on traditional sampling and data collection methods. These items can also serve as external benchmarks to assess the representativeness of the sample despite its nonrandomness. Although the type of validity that one aspires to with traditional sampling methods remained beyond the reach of Survey2001, Donald Campbell's proximal similarities approach to validity suggests important ways in which data such as those collected by Survey2001 offer valuable social science information (Trochim, 2000). Although this approach uses quantitative techniques to consider differences between one's sample and the population, it ultimately relies on sociological insight rather than statistical rules to make judgments as to what one can say about the population based on the sample.1
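    The estimation strategy described above—comparing the sample's demographic mix against external benchmarks—is, in its simplest form, post-stratification. The sketch below illustrates the arithmetic only; the demographic cells and all counts and population shares are hypothetical, not Survey2001 figures.

    ```python
    # Post-stratification sketch: weight a convenience sample so that its
    # demographic cells match known population shares. All numbers below
    # are hypothetical illustrations, not Survey2001 or GSS data.

    def poststratify(sample_counts, population_shares):
        """Return a weight per cell: population share / sample share."""
        n = sum(sample_counts.values())
        return {cell: population_shares[cell] / (sample_counts[cell] / n)
                for cell in sample_counts}

    # Hypothetical cells: (gender, has college degree)
    sample_counts = {("F", True): 400, ("F", False): 100,
                     ("M", True): 350, ("M", False): 150}
    population_shares = {("F", True): 0.15, ("F", False): 0.36,
                         ("M", True): 0.14, ("M", False): 0.35}

    weights = poststratify(sample_counts, population_shares)
    # Overrepresented cells (e.g., female graduates here) receive weights
    # below 1; underrepresented cells receive weights above 1.
    ```

    In the spirit of the proximal similarities approach, such weights adjust only for the attributes one can measure; selection on unmeasured traits (such as sociability) is exactly what they cannot repair.
    
    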

    Until internet access is as widespread as telephone access, survey researchers may need to use traditional survey modes to round out a Web-based sample (Schaefer & Dillman, 1998). To pursue this strategy efficiently, however, it is important to determine the characteristics of those who can be reached through Web-based methods. In cases such as Survey2001, where a sampling frame does not exist for the population, key informant sampling, targeted sampling, and snowball sampling are common techniques (Heckathorn, 1997). The Survey2001 project employed three specific strategies in an effort to address the limitations of the NGS convenience sample. First, outreach efforts invited respondents from locations other than the NGS Web site to participate in Survey2001. Banner and button graphics were distributed to hundreds of Web sites with a request to post these as links to Survey2001. Rather than linking directly to the Survey2001 site, these links carried a query string such as “sws=9003”, where the “sws” parameter was set to a unique value for each participating site. This parameter was then captured as part of the referring link so that individual responses could be associated with the Web site that generated them.2 Second, informal snowball sampling methods were employed. At the end of the survey, respondents were encouraged to send an e-mail invitation to three of their friends asking them to take the survey. Each of these invitations contained a link that included the anonymous survey identification number of the respondent who sent the e-mail. Third, a parallel phone survey was conducted to interview a random sample of 3,000 respondents nationwide. Although much shorter than the Web survey, these phone interviews provide a tool to assess the substantive impact of relying on Web data collection techniques.
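    The first strategy amounts to tagging each inbound link with a per-site parameter and reading it back out of the referring URL. A minimal sketch of that bookkeeping follows; the example URL is hypothetical, and only the “sws=” convention and the value 9003 come from the text above.

    ```python
    # Sketch: recover the per-site "sws" parameter from a referring link so
    # that a response can be attributed to the Web site that generated it.
    # The URL below is hypothetical; only the "sws=" convention is from the
    # Survey2001 description.
    from urllib.parse import urlparse, parse_qs

    def referral_source(referrer_url):
        """Return the value of the 'sws' query parameter, or None if absent."""
        params = parse_qs(urlparse(referrer_url).query)
        values = params.get("sws")
        return values[0] if values else None

    print(referral_source("http://example.org/survey?sws=9003"))  # prints 9003
    ```

    In practice, the survey application would log this value alongside the response record at the moment the respondent arrives.
    
    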

    Comparing Different Samples: Survey2001 and the 2000 GSS
    Different Design, Content, and Data Collection

    The Survey2001 project included two major data collection efforts: the Web survey hosted on the National Geographic Magazine (NGM) Web site and a parallel phone survey conducted using standard random digit dialing methods. The discussion here emphasizes the Web survey, although preliminary findings from the phone survey are discussed as well. The Survey2001 project was delivered using a database system that organized questions, possible answers, and responses in a single MySQL database. A Java servlet application managed communications between the server's database and the client's browser. This servlet not only guided the delivery and presentation of survey questions, including question layout and skip patterns, but also guided the recognition and collection of respondents' answers.

    Survey2001 question content focused on several themes. Along with standard demographic items, questions addressed internet access and use, conservation attitudes and behavior, community participation, and cultural themes such as community orientation and activities, leisure time activities and preferences, reading behavior, political attitudes and participation, and belief in science and para-science. The basic Web survey instrument was designed to take 25 minutes to complete and included the demographic, internet, and conservation questions along with one randomly selected cultural theme. At the end of this basic questionnaire, respondents were thanked for their participation and asked whether they were willing to answer a set of further questions, those making up the other cultural themes. The Survey2001 telephone survey included considerably fewer questions than did the Web version, with the former emphasizing demographic characteristics, internet use and access, conservation, and musical taste. Individual items were selected to highlight similarities and differences in response patterns between the Web and telephone survey samples. Priority was given to those Web survey items that could best be adapted to a telephone survey format.

    As noted previously, recruitment of Survey2001 respondents did not rely solely on the NGM Web page. Links to the survey were placed on more than 100 other Web sites, and survey respondents were encouraged to invite others to participate in the survey. Moreover, as each respondent started the survey, the referring link was captured, thereby identifying which avenue a respondent used to reach the survey. These data are summarized in Table P.1. Whereas 14,064 respondents (60.6%) linked to the survey from the NGS Web site, another 8,569 respondents (36.9%) came from the other sites that posted a link to Survey2001. Another 559 respondents (2.4%) were recruited through e-mail solicitation from other respondents. Table P.1 also indicates that these different avenues for participation drew in some respondents interested in taking the survey in languages other than English. Although the e-mail invitations to recruit other respondents were sent in all languages, this technique appeared to be particularly effective with English-language respondents, who made up 86.8% of the e-mail referral sample as compared with 75.0% of the total sample.

    Table P.1 Origin and Language of Survey2001 Respondents

    Looking further down Table P.1, it is also noteworthy that the relative share of completed surveys varied with language: a greater proportion of English-language respondents (85.1% overall) was found among the completed surveys from each source than among the surveys initiated. Although the number of e-mail referrals leading to completed surveys was relatively small (267, or 3.4% of all completed surveys), this represents a 42% increase in the relative share of survey respondents of this type. This suggests that e-mail referrals may be an effective means of obtaining completed surveys.

    In terms of sample design, the GSS is a very different type of study based on a traditional sampling of the U.S. noninstitutionalized adult population. The U.S. adult household population covers about 97.3% of the resident population of the United States; however, coverage varies greatly by age group. For example, just under 10% of the population ages 18 to 24 years lived outside of households (mostly in college dorms and military quarters) and are not represented by the sample. Thus, some of the heaviest users of the internet are systematically excluded from the GSS. GSS data collection began in 1972, with nationally representative samples of approximately 1,500 respondents typically being conducted every year—although at times every other year—until 1994, when the GSS moved to a regular 2-year cycle. The 2000 GSS was a face-to-face, 90-minute, in-home interview conducted with 2,817 respondents between February and mid-June 2000. With funding from the National Science Foundation, the 2000 GSS focused on respondents' internet use and attitudes regarding the internet as well as on a variety of demographic and general attitudinal items. Extensive information regarding the GSS may be obtained through the National Opinion Research Center or from the GSS Web site.

    During the first 3 years of data collection, the GSS employed a modified probability sampling technique, relying to a certain degree on filling quotas for particular subgroups. However, beginning in 1975, when higher funding levels became available, the GSS moved toward full probability sampling, which has been used exclusively since 1977. The sampling is conducted in two major stages. Primary sampling units, consisting of one or more counties, are selected in the first stage, and segments consisting of one or more blocks in each primary sampling unit are selected in the second stage. In a few cases, segments were sub-sampled, a procedure that constituted a third stage of sample selection.

    For example, the 1990 sampling frame consisted of 100 primary sampling units, each of which was a metropolitan area or a nonmetropolitan county. Prior to selection, the United States was divided into 2,489 primary sampling units, which were sorted into strata according to region, state, percentage minority, and per capita income. This procedure ensured proportional representation according to the stratification criteria. Then 100 primary sampling units were selected, with the selection probability for each unit being proportional to the number of housing units. Of these primary sampling units, 19 were so large that they were automatically included in the first-stage sample. Then the selected units were subdivided and stratified, primarily according to geography and percentage minority, and second-stage sample segments were selected using systematic sampling with selection probability proportional to the number of housing units in the segment. Individual units were then randomly selected within each segment.
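    The selection logic just described—systematic sampling with probability proportional to size (PPS)—can be sketched in a few lines. The unit sizes below are hypothetical and stand in for housing-unit counts; this is an illustration of the principle, not the GSS's actual selection code.

    ```python
    # Sketch of systematic PPS (probability proportional to size) selection,
    # the principle behind the GSS first-stage sample: each unit's chance of
    # selection is proportional to its size (number of housing units). The
    # sizes used here are hypothetical.
    import random

    def pps_systematic(sizes, n, seed=0):
        """Select n unit indices with probability proportional to size."""
        rng = random.Random(seed)
        total = sum(sizes)
        step = total / n                    # sampling interval
        start = rng.uniform(0, step)        # single random start
        points = [start + i * step for i in range(n)]
        chosen, cum, j = [], 0.0, 0
        for i, size in enumerate(sizes):
            cum += size
            while j < n and points[j] <= cum:
                chosen.append(i)
                j += 1
        return chosen

    # A unit whose size exceeds the sampling interval is certain to be hit,
    # mirroring the 19 large PSUs that were automatically included.
    selected = pps_systematic([10, 20, 30, 40], n=2)
    ```

    Stratifying the units first (by region, percentage minority, and so on) and then applying this selection within the sorted list is what yields the proportional representation described above.
    
    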

    Once a sample was selected following these procedures, interviewers entered the field to conduct face-to-face interviews. For example, the original full-probability sample in 1996 consisted of 4,559 cases. After eliminating vacant dwellings, households where the interviewer experienced language difficulties, businesses, and other nondwelling addresses, the net sample came to 3,846 households. These households then yielded 2,904 completed surveys, with the bulk of the difference between eligible households and completed surveys being due to 757 respondent refusals, amounting to a refusal rate of 19.8%—quite low by industry standards. Nevertheless, despite the care used to determine the probability sample, there is no reason to assume that nonresponse is randomly distributed. Indeed, for more than two decades, it has been known that males are underrepresented in full-probability samples, including the GSS (Smith, 1979), and that this is primarily a function of nonresponse. It should also be pointed out that the full-probability GSS samples used since 1975 were designed to give each household an equal probability of inclusion in the samples. Thus, for household-level variables, the GSS sample is self-weighting. However, at the individual level, because only one individual per household is interviewed, individuals living in large households are underrepresented. Individual weights to correct for these factors are distributed with the GSS and should be used—as they are in the GSS results presented here—when the individual is the unit of analysis.
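The self-weighting point is easy to make concrete. Under equal-probability household sampling with one interview per household, an adult's selection probability is 1 divided by the number of eligible adults in the household, so the corrective individual-level weight is proportional to household size. The household counts below are invented, and the real GSS weights fold in additional factors (such as nonresponse adjustments) beyond this sketch.

```python
# Invented adult counts for seven sampled households; each household
# contributes exactly one respondent to the sample.
adults_per_household = [1, 1, 2, 2, 2, 3, 4]

# A respondent's selection probability within a sampled household is
# 1 / (number of adults), so the design weight is its inverse, i.e.,
# proportional to household size; normalize the weights to mean 1.
design_weights = [float(k) for k in adults_per_household]
mean_w = sum(design_weights) / len(design_weights)
weights = [w / mean_w for w in design_weights]
# A respondent from a four-adult household now counts four times as
# much as one living alone, undoing the underrepresentation of
# individuals in large households.
```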

    Sampling and Different Demographics

    It comes as no surprise that the very different sampling strategies employed by the 2000 GSS and Survey2001 lead to obvious and predictable demographic differences between the samples. These differences are readily apparent in Table P.2. The demographic characteristics of the 2000 GSS roughly parallel the conventional picture of the U.S. population.

    Table P.2 Demographics of 2000 GSS and Survey2001: U.S. Respondents

    For example, the U.S. Department of Education reports that 15.9% of Americans age 25 years or over have not obtained a high school degree (National Center for Education Statistics, 2001, Table 8). Looking at Table P.2, the GSS results are nearly identical at 15.8%. The full GSS sample is intended to represent adults age 18 years or over. Filtering out the 18- to 24-year-olds, however, has little effect, leaving 14.9% of the 2000 GSS respondents age 25 years or over without a high school degree.

    As noted previously in the sampling discussion, weights should be used with GSS individual-level data. Their impact is most obvious when one considers gender directly; males make up 43.6% of the unweighted sample as compared with 48.0% after weighting. However, for other demographic characteristics, the effects are minimal even when there is some correlation with gender. For example, weighting has almost no effect on education; compared with the distribution presented in Table P.2, the percentages at each level of education change by no more than 0.3 percentage points when weights are not used.
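As a toy illustration of the mechanics (the microdata and weights below are invented, not GSS records), a weighted proportion is simply the weight-normalized mean of a 0/1 indicator, and it diverges from the unweighted proportion exactly when the weights correlate with the indicator—as with gender here:

```python
# Five hypothetical respondents with a male indicator and a weight;
# males carry larger weights, mimicking their underrepresentation
# in the raw sample.
respondents = [
    {"male": 1, "weight": 1.4},
    {"male": 0, "weight": 0.9},
    {"male": 1, "weight": 1.2},
    {"male": 0, "weight": 0.8},
    {"male": 0, "weight": 1.0},
]

# Unweighted proportion: simple mean of the indicator.
unweighted = sum(r["male"] for r in respondents) / len(respondents)

# Weighted proportion: indicator averaged with weights, normalized
# by the total weight.
total_weight = sum(r["weight"] for r in respondents)
weighted = sum(r["male"] * r["weight"] for r in respondents) / total_weight
# weighted > unweighted here because the males carry larger weights.
```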

    Having said this, however, there is no guarantee that weighting by demographic characteristics will yield proportional response patterns for other variables. This would be the case only if the lower response rate among men were randomly distributed across all men. More realistically, one should assume that nonresponse is concentrated among certain types of men and that, more specifically, it is these types of men who are underrepresented in the sample. This requires a more sophisticated model of weighting that takes into account the social processes that account for the disproportional representation of particular demographic groups in the unweighted sample in the first place.

    The Survey2001 sample, on the other hand, is younger, differently distributed regionally (with heavier concentrations in the South Atlantic and Pacific regions), more highly educated, and more likely to be employed (especially part-time) than the population-at-large. Indeed, the characteristics of the Survey2001 sample better approximate popular conceptions of the online population than they do the general U.S. population.

    Sampling and Different Substantive Results: Internet Use and Study Design

    Clearly, one would expect to find differences in incidence of computer use and internet access between the 2000 GSS and Survey2001 respondents. The 2000 GSS found that approximately 60.0% of Americans had access to a computer at home, at work, or somewhere else. Among this group, 79.8% reported that they had access at home, whereas 65.7% of all employed individuals reported having access to a computer at work. An overwhelming majority of persons with a computer at home (85.9%) also reported having internet access at home. Another 3.0% of those without a computer at home reported having internet access via WebTV. Not surprisingly, greater rates of access were recorded among Survey2001 respondents; indeed, they needed some means of internet access simply to participate in the survey. Nearly all Survey2001 respondents (94.0%) said that they had access to a computer at home. Among employed persons, 86.1% reported computer access at work.

    Given these obvious differences in access, it is nonetheless interesting to consider the type and level of internet activity among those respondents who reported that they use the internet in each sample. The results presented in Table P.3 summarize the frequency with which individuals visited specific types of Web sites during the past 30 days. The findings consider visits to three types of sites: news and current event sites, television and movie sites, and health and fitness sites. Simply comparing all respondents, the differences are extremely large, with far less frequent visits to Web sites reported by GSS respondents than by Survey2001 respondents. For example, 24.7% of GSS respondents had not visited a news or current event Web site during the past 30 days as compared with only 9.4% of Survey2001 respondents. Considering television and movie sites, the difference between the two samples was larger still in that 63.7% of the GSS respondents had never visited such a site as compared with 33.7% of the Survey2001 respondents. Turning to health and fitness sites, the GSS respondents were somewhat less likely (50.3%) to have not visited such a site during the past 30 days, but this figure was still notably higher than that found among the Survey2001 respondents, of whom only 34.4% had not been to such a site during the past 30 days.

    Table P.3 Online Activities of 2000 GSS and Survey2001 Internet Users: Visits to Types of Web Sites During the Past 30 Days

    Certainly, some measure of this difference is rooted in the much higher level of educational attainment of the Survey2001 sample, as noted earlier in the discussion of Table P.2. To get a handle on the importance of this difference, the final two columns of Table P.3 consider visits to sites of these types only among those respondents in each sample who reported that they had a 4-year college degree. As one would expect, focusing on the better educated members of these samples decreases the proportion of those who reported that they had not visited a Web site of each of these types during the past 30 days. Not surprisingly, the change was relatively minor among the Survey2001 respondents given that more than two thirds of this sample reported having a college degree. Yet even among the GSS sample, the magnitude of change tended to be modest, particularly when it comes to visiting television and movie sites as well as health and fitness sites.

    Faced with this conflicting evidence regarding the frequency of specific online activities, one's first inclination may be to deem the GSS findings more accurate and attribute the Survey2001 results to that survey's unconventional and obviously nonrandom sampling strategy. To do so, however, would be to ignore other sources that suggest levels of use closer to Survey2001 data than to GSS data. For example, using random digit dialing techniques, the Pew Internet and American Life Project found that 64% of all internet users reported that they had obtained health information online in 2001, with a similar percentage indicating that they had used the internet as a news source (Horrigan & Rainie, 2002). Furthermore, the Pew data indicate that the use of the internet for such purposes varies with individuals' internet experience, as nearly three quarters of longtime internet users reported that they had used the internet for health information.

    Sampling and Different Substantive Results: Environmental Issues

    Table P.4, which compares Survey2001 subsample responses with a subset of eight NEP items, hints at the value of this approach. The first three rows of data for each item compare three Web subsamples: respondents coming directly from the NGS site, respondents coming from the Sierra Club site, and respondents from all other sites. Focusing on the extreme “pro-environmental” responses for each item,3 the NGS respondents stand out as holding stronger environmental views than those of respondents coming from the mixed category of other sites. Sierra Club respondents, in turn, are even more pro-environmental than NGS respondents. It is also noteworthy that a much smaller percentage of Sierra Club respondents selected “don't know” than did either the NGS respondents or those from other sites. At first glance, finding clearer preferences and pro-environmental attitudes among visitors to an avowedly environmental club Web site is hardly surprising. However, it raises an important point: these predictable and highly plausible findings are the product of a nonrandom online survey. The sample is not random, but it is evidently not uninformative.

    Table P.4 Survey2001 Respondents' Attitudes Regarding Conservation and the Environment (percentages)

    Results from the Survey2001 phone survey provide useful data regarding the impact of the data collection mode.4 In this instance, our phone sample is limited to a targeted subset of upper middle class urban respondents who are demographically quite similar. Despite their demographic similarities, when we look at their responses to the conservation items in Table P.4, two clear differences emerge between the phone samples (the fourth row for each item) and the Web samples (the first three rows for each item). Most generally, we see that both the NGS and Sierra Club samples tended to articulate stronger environmental concerns than did the phone samples drawn from these two highly educated, relatively affluent areas. Moreover, for the most part, the results from this phone sample (albeit preliminary data with a small sample size) are relatively similar to those provided by respondents from a heterogeneous mix of Web sites.

    Although it is plausible to conclude that the stronger environmental views found among these two groups of Web respondents (as compared with the phone respondents despite similar demographics) may be attributed to selection bias, a second pattern found in Table P.4 does not lend itself to a similar interpretation. For seven of the eight items in this table, the proportion of individuals who selected “don't know” for their response was greater in the NGS and other Web samples than in the phone sample—often by a large margin. Among the Sierra Club respondents, it is plausible to see low levels of “don't know” responses as being associated with higher levels of knowledge and concern. However, such an interpretation is not compelling for the phone sample, which did not set itself apart through its environmental positions. In this case, the greater decisiveness of the phone respondents may have resulted from their answering to a human interviewer in contrast to the Web respondents, who answered only to a server.


    The primary aim of this prologue has been to show that sampling matters when studying life online. Indeed, this statement is true for all research, regardless of the topic. The goal of this prologue has also been to show that this truism holds for face-to-face interviews and phone surveys as well as for the emerging field of Web surveys. The results presented here demonstrate that the Survey2001 sampling strategy presented a picture of a society that is far more wired than the population-at-large. This is readily conceded. However, the analyses presented here also show that more traditional sampling methods have their own limitations as well. The effects of nonresponse are rarely given the attention they are due in traditional survey techniques. Furthermore, in the case of the 2000 GSS, coverage restrictions simply excluded a small but particularly wired segment of the population—young people living in institutional settings, especially college dorms and the military. On the other hand, Survey2001, as a Web-based survey of online life, was better positioned to capture a large sample of individuals who were particularly plugged into life online. This is readily evident in the findings presented here showing that the Survey2001 respondents were more frequent users of various types of Web sites than were the 2000 GSS respondents, even after restricting the comparison to college-educated respondents.

    Quite simply, one should not let the sampling weaknesses of Web-based surveys fully obscure the strengths of such surveys. Preliminary results with the Survey2001 data also permit comparisons between the Web samples and phone samples regarding online activities. Focusing on the urban, well-educated phone subsample most like the Web sample, particular types of online activities—mailing list participation, online education, and surfing the Web for recreational purposes—were far more common among the Web respondents. Further analysis with these data may show that these patterns are Web survey specific; however, they also raise an important rival interpretation. Telephone surveys, with their own patterns of nonresponse and selection, may give an inaccurate picture of Web users. The virtues of multimethod studies may extend to covering the blind spots of telephone surveys as well as those of Web surveys.

    In addition, one should take care that a preoccupation with sampling issues in Web surveys does not obscure other critical issues associated with such a new form of survey research. The large increase in “don't know” responses to Web surveys, as compared with phone surveys, calls attention to the significance of “interviewer effects” even when an interviewer is not physically present. Similarly, preliminary analyses of experimental use of photo prompts in Survey2001 strongly suggest that the instrument effects of Web surveys are myriad and subtle. Thanks to the careful methodological research of individuals working with face-to-face, mail, and phone surveys, we have a pretty good understanding of how important such influences may be. However, all bets are off when one simply assumes that such effects will operate in the same fashion online. Nevertheless, this uncertainty should not discourage survey researchers from taking to the Web and conducting online surveys. After an initial period of neglect, debates such as those between Nie and Erbring (2000a) and Etzioni (2000) focused the attention of social scientists on the internet. Their disparate conclusions have alerted social scientists to a variety of complex methodological issues related to the study of the internet. As a result, we find ourselves in a new phase of internet research. Now is the time for “the tradition-bound activity of normal science” (Kuhn, 1962, p. 6). In fact, it is only through such efforts that survey research will acquire the experience and data needed to make the most of the internet as a new and exciting data collection tool.


    1. An often overlooked fact in discussions of the Literary Digest polls is that in the previous four presidential elections, they correctly predicted the winner. Beyond the 1936 truth that a large sample does not guarantee accurate results, it ought to be emphasized that a nonrandom sample does not amount to a recipe for invalid results.

    2. Using results from the first 12 weeks of Survey2001 data collection, 22 sites contributed at least 30 observations. Most important in this group were the Sierra Club (1,051 respondents) and Science News Online (539 respondents).

    3. These are “strongly agree” for the first, third, fourth, sixth, and eighth items in Table P.4 and “strongly disagree” for the second, fifth, and seventh items.

    4. The phone sample includes two targeted subsamples: 1,400 randomly selected respondents from rural Maine and rural California and 269 randomly selected urban respondents from Cambridge, Massachusetts, and Santa Monica, California. This latter group was sampled to pretest the instrument on a group of respondents likely to demographically resemble NGS Web site visitors, particularly with regard to education and income.

    Aquilino, W. (1994). Interview mode effects in surveys of drug and alcohol use. Public Opinion Quarterly, 58, 910–940.
    Aquilino, W., & LoSciuto, L. (1990). Effect of interview mode on self-reported drug use. Public Opinion Quarterly, 54, 362–395.
    Baker, R.P., Bradburn, N.M., & Johnson, R. (1995). Computer-assisted personal interviewing: An experimental evaluation of data quality and survey costs. Journal of Official Statistics, 11, 415–449.
    Burton, S., & Blair, E. (1991). Task conditions, response formulation processes, and response accuracy for behavioral frequency questions in surveys. Public Opinion Quarterly, 55, 50–79.
    Couper, M., & Nicholls, W., II. (1998). The history and development of computer assisted survey information. In M. Couper, J. Bethlehem, C. Clark, J. Martin, W. Nicholls, & J. O'Reilly (Eds.), Computer-assisted survey information collection. New York: John Wiley.
    Dillman, D. A. (2000). Mail and internet surveys: The tailored design method. New York: John Wiley.
    Etzioni, A. (2000, May-June). Debating the societal effects of the internet: Connecting with the world. Public Perspective, pp. 42–43.
    Heckathorn, D. A. (1997). Respondent-driven sampling: A new approach to the study of hidden populations. Social Problems, 44, 174–199.
    Horrigan, J. B., & Rainie, L. (2002). Getting serious online. Working paper, Pew Internet and American Life Project.
    Jenkins, C. R., & Dillman, D. A. (1997). Towards a theory of self-administered questionnaire design. In L. Lyberg, P. Biemer, M. Collins, E. DeLeeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and process quality. New York: John Wiley.
    Kiesler, S., & Sproull, L. (1986). Response effects in the electronic survey. Public Opinion Quarterly, 50, 402–413.
    Kuhn, T. S. (1962). The structure of scientific revolutions. Chicago: University of Chicago Press.
    Lohr, S. L. (1999). Sampling: Design and analysis. Pacific Grove, CA: Duxbury.
    National Center for Education Statistics. (2001). Digest of education statistics, 2001. Washington, DC: U.S. Department of Education.
    Nicholls, W., Baker, R., & Martin, J. (1997). The effect of new data collection technologies on survey data quality. In L. Lyberg, P. Biemer, M. Collins, E. DeLeeuw, C. Dippo, N. Schwarz, & D. Trewin (Eds.), Survey measurement and process quality (pp. 221–248). New York: John Wiley.
    Nie, N., & Erbring, L. (2000a, May-June). Debating the societal effects of the internet: Our shrinking social universe. Public Perspective, pp. 44–45.
    Nie, N., & Erbring, L. (2000b). Internet and society. Unpublished manuscript, Stanford Institute for the Quantitative Study of Society.
    Schaefer, D. R., & Dillman, D. A. (1998). Development of a standard e-mail methodology: Results of an experiment. Public Opinion Quarterly, 62, 378–397.
    Smith, T. W. (1979). Sex and the GSS: Nonresponse differences. General Social Survey Methodological Report No. 9.
    Smith, T. W. (1983). The hidden 25 percent: An analysis of nonresponse on the 1980 General Social Survey. Public Opinion Quarterly, 47, 386–404.
    Tourangeau, R., & Smith, T. (1998). Collecting sensitive information with different modes of data collection. In M. Couper, J. Bethlehem, C. Clark, J. Martin, W. Nicholls, & J. O'Reilly (Eds.), Computer-assisted survey information collection. New York: John Wiley.
    Trochim, W. M. K. (2000). The research methods knowledge base. Cincinnati, OH: Atomic Dog Publishing.
    Turner, C. F., Forsyth, B. H., O'Reilly, J. M., Cooley, P. C., Smith, T. K., Rogers, S. M., & Miller, H. G. (1998). Automated self-interviewing and the survey measurement of sensitive behaviors. In M. Couper, J. Bethlehem, C. Clark, J. Martin, W. Nicholls, & J. O'Reilly (Eds.), Computer-assisted survey information collection. New York: John Wiley.
    Turner, C. F., Ku, L., Rogers, S. M., Lindberg, L. D., Pleck, J. H., & Sonenstein, F. L. (1998). Adolescent sexual behavior, drug use, and violence: Increased reporting with computer survey technology. Science, 280, 867–873.
    Witte, J., & Howard, P. N. (2002). Technological and methodological innovation in survey instruments: The future of polling. In F.Cook & J.Manza (Eds.), Navigating public opinion (pp. 272–289). Oxford, UK: Oxford University Press.
    About the Editors

    Philip N. Howard is Assistant Professor in the Department of Communication at the University of Washington. He has published several articles and chapters on the use of new media in politics and public opinion research, was the first politics research fellow at the Pew Internet and American Life Project, and currently serves on the advisory board of the Survey2001 project. He teaches courses in political communication, organizational behavior, and international media systems and is currently preparing a book-length manuscript titled Politics In Code: Franchise and Representation in the Age of New Media.

    Steve Jones is Professor and Head of the Department of Communication at the University of Illinois at Chicago. He is the author or editor of numerous books, including Doing Internet Research, The Encyclopedia of New Media, CyberSociety, and Virtual Culture. He is cofounder and president of the Association of Internet Researchers and is coeditor of New Media & Society, an international journal of research on new media, technology, and culture. He also edits New Media Cultures, a series of books on culture and technology (Sage), and Digital Formations, a series of books on new media.

    About the Contributors

    William Sims Bainbridge is Deputy Director of the Division of Information and Intelligent Systems at the National Science Foundation, after having directed the division's Human Computer Interaction, Universal Access, and Knowledge and Cognitive Systems programs. He received his Ph.D. from Harvard University. He is the author of 10 books, 4 textbook-software packages, and approximately 150 shorter publications in information science, social science of technology, and sociology of culture. His software employs innovative techniques to teach theory and methodology: Experiments in Psychology, Sociology Laboratory, Survey Research, and Social Research Methods and Statistics. Most recently, he coedited Converging Technologies to Improve Human Performance, which explores the combination of nanotechnology, biotechnology, information technology, and cognitive science.

    Dan L. Burk is Professor at the University of Minnesota Law School, specializing in the areas of cyberlaw and biotechnology. He holds appointments at both the law school and the Center for Bioethics and has also been closely involved in the development of the new Joint Degree Program in Law, Health, and the Life Sciences and in the creation of the university's new Internet Studies Center. His publications include “Cyberlaw and the Norms of Science” (in Intellectual Property & Technology Forum), “Virtual Exit in the Global Information Economy” (in Chicago-Kent Law Review), “Trademark Doctrines for Global Electronic Commerce” (in South Carolina Law Review), and “Ownership Issues in Online Use of Institutional Materials” (in Cause/Effect).

    Carin Dessauer is a principal with mc2 and a Senior Fellow with the American Press Institute. She is a former CNN and executive and a recent fellowship professor at the George Washington University School of Media and Public Affairs. She has served as a contributing editor to Campaigns and Elections and as associate editor for Congressional Quarterly's 1990 Politics in America.

    Kirsten A. Foot is Assistant Professor in the Department of Communication at the University of Washington. She is interested in the reciprocal relationship between information/communication technologies and society. Her current research projects include a comparative study of mayoral candidate Web sites and analyses of the Web spheres that developed around the events of September 11, 2001, and in anticipation of the 2002 U.S. elections. She is codirector of the research group, where she is developing new techniques for studying social and political action on the Web. She is also coeditor of the book series Acting With Technology.

    Philip Garland is a second-year M.A. student in the Department of Communication at the University of Washington. He is interested in the intersection of American politics, media, and race as well as inconsistencies in new media or the “digital divide” along racial lines. His current work focuses on racial profiling discourse before and after September 11, 2001, and how rap music, as black political speech, is covered in the news media.

    Wendy Griswold is Professor of Sociology and Comparative Literary Studies at Northwestern University. She received her Ph.D. from Harvard University and has taught at Harvard and the University of Chicago. Her research and teaching interests center on cultural sociology; sociological approaches to literature, art, and religion; time and place; and comparative studies in Europe and Africa. Her recent books include Bearing Witness: Readers, Writers, and the Novel in Nigeria (2000) and Cultures and Societies in a Changing World (1994), which has been translated into Japanese and Italian. She is writing a book on cultural regionalism titled Regionalism and the Reading Class. She directs the Culture and Society Workshop at Northwestern.

    Laura J. Gurak is Associate Professor of Rhetoric at the University of Minnesota. Her research emphasis is on the rhetorics of science and technology, rhetorical criticism, internet studies, online research methods, social aspects of computing, law and technology (intellectual property and privacy), electronic literacies, and technical and professional communication. Her books include Cyberliteracy: Navigating the Internet With Awareness and Persuasion and Privacy in Cyberspace: The Online Protests Over Lotus MarketPlace and the Clipper Chip.

    Eszter Hargittai is Assistant Professor in the Department of Communication at Northwestern University. Her research focuses on the increasing role of commercial interests in channeling information toward users. Her publications include “Open Portals or Closed Gates? Channeling Content on the World Wide Web” (in Poetics), “Radio's Lessons for the Internet” (in Communications of the ACM), and “Weaving the Western Web: Explaining Differences in Internet Connectivity Among OECD Countries” (in Telecommunications Policy). She is also Associate Director of the International Networks Archive, whose aim is to assemble data sets relevant to empirical research on mapping globalization in a central location and to standardize them so that the various indicators can be combined.

    James E. Katz is Professor in the Department of Communication at Rutgers University. His enduring research interest has been to understand how the interplay between technology and social processes affects interpersonal power, organizational structures, and the creation of cultural meaning. He has won postdoctoral fellowships at Harvard University and the Massachusetts Institute of Technology, served on the faculties of the University of Texas at Austin and Clarkson University, and headed the social science research unit at Bell Communications Research (Bellcore). He was granted national and foreign patents on his inventions in telecommunications technology and is the author of several books in the field of technology and society, including Connections: Social and Cultural Studies of the Telephone in American Life (1999), which has been cited in Choice as a “landmark” study.

    Meyer Kestnbaum is Professor in the Department of Sociology at the University of Maryland. He received his Ph.D., A.M., and A.B. from Harvard University. His publications include “War and the Development of Modern National States” with Theda Skocpol (in Sociological Forum) and “Mars Unshackled: The French Revolution in World-Historical Perspective” with Theda Skocpol (in The French Revolution and the Birth of Modernity). His current research includes work on state building during times of revolution and a new project titled “Bridging the Gap: Assuring Military Effectiveness When Military Culture Diverges From Civilian Society,” under the aegis of the Triangle Institute for Security Studies.

    Nalini P. Kotamraju is completing the Ph.D. in sociology at the University of California, Berkeley. She is collaborating with Nina Wakeford on the Mobile Devices and the Cultural Worlds of Young People Project, a comparative ethnography of young people and mobile phones in the United Kingdom and the United States, sponsored by the Annenberg Center for Communication. Her publications include “Keeping Up: Web Design Skill and the Reinvented Worker” (in Information Communication and Society) and “The Birth of Web Site Design Skills: Making the Present History” (in American Behavioral Scientist). She has also done research for Sun Microsystems, Productopia, and Liquid Thinking.

    Elena Larsen is Research Fellow at the Pew Internet and American Life Project. She has a master's degree from the Annenberg School for Communication at the University of Pennsylvania, where her major research projects included analysis of wireless broadband technology policy and a study of the 2000 presidential election campaign on the Internet. She filled her time between college and graduate school by working as a program analyst for the Department of the Treasury.

    Lisa Nakamura is Assistant Professor of Communication Arts and Visual Culture Studies at the University of Wisconsin-Madison. She is the author of Cybertypes: Race, Ethnicity and Identity on the Internet (2002) and a coeditor of Race in Cyberspace (2000). She has published articles on cross-racial role-playing in Internet chat spaces; race, embodiment, and virtuality in the film The Matrix; and political economies of race and cyberspace in publications such as the Women's Review of Books, Unspun: Key Terms for the World Wide Web, The Cybercultures Reader, and The Visual Culture Reader 2.0. She is working on a new book tentatively titled Visual Cultures of Race in Cyberspace.

    Gina Neff is Assistant Professor in the Department of Communication at the University of California, San Diego. She received her Ph.D. from the Department of Sociology at Columbia University, where she was a research associate in the Center on Organizational Innovation. Her doctoral research, titled Organizing Uncertainty in Silicon Alley, looks at the ways in which risk and uncertainty were experienced in New York City during the early days of the internet. She is also working on a comparative project that examines social networking practices in the internet industry in New York and Berlin, and she is analyzing the emergence of social structure in an online classroom.

    Alan Neustadtl is Associate Professor of Sociology at the University of Maryland. He received his Ph.D. from the University of Massachusetts, Amherst. His research is focused on the distribution of political power across interest groups, campaign finance, and the networks of corporate political action. His books include Money Talks and Dollars and Votes. His current research examines the growth of Internet use among Americans, with particular attention to political and cultural uses of the Internet and to the nature and existence of a “digital divide.”

    Pippa Norris is McGuire Lecturer in Comparative Politics at the John F. Kennedy School of Government at Harvard University. Her research compares elections and public opinion, political communications, and gender politics. Her books include Democratic Phoenix: Political Activism Worldwide, which focuses on how political activism has been reinvented for modern times, and two books scheduled for publication in 2003: Rising Tide: Gender Equality and Cultural Change Around the World (with Ron Inglehart) and Electoral Engineering: Voting Rules and Political Behavior.

    Richard A. Peterson is Professor of Sociology at Vanderbilt University. His research interests are in the music industry, taste and stratification, and production of culture. His current research projects examine the aging of the fine arts audience, scenes where music genres are created, and the spread of omnivorous tastes. His recent articles include “Two Ways Culture Is Produced” (in Poetics), “Alternative Country: Origins, Music, World-view, Fans, and Taste in Genre Formation” with Bruce A. Beal (in Popular Music and Society), and a chapter titled “The Re-Creation Indicator” with Carrie Y. Lee (in Quality of Life Indicators: A New Tool for Assessing National Trends). His recent book is The Aging of Arts Audiences (with Pamela Hull and Roger Kern), published by the National Endowment for the Arts.

    Harrison “Lee” Rainie is Director of the Pew Internet and American Life Project, a research center that examines the social impact of the Internet, specifically focusing on how people's Internet use affects families, communities, health care, education, civic and political life, and workplaces. Prior to receiving the Pew grant, he was managing editor of U.S. News & World Report. He is a graduate of Harvard University and has a master's degree in political science from Long Island University.

    Ronald E. Rice is Professor and Chairperson of the Department of Communication at Rutgers University. He received his Ph.D. from Stanford University and has corporate experience in systems and communication analysis, banking operations, data processing management, publishing, and statistical consulting. He has published widely in communication science, public communication campaigns, computer-mediated communication systems, information systems, information science and bibliometrics, and social networks. His most recent coauthored or coedited books include Public Communication Campaigns (2001), The Internet and Health Communication (2001), Accessing and Browsing Information and Communication (2001), and Social Consequences of Internet Use: Access, Involvement, and Interaction (2002).

    John P. Robinson is Professor of Sociology at the University of Maryland, where he also directs the Internet Scholars Program and the Americans' Use of Time Project. He received his Ph.D. from the University of Michigan. He is an expert on time use, societal trends and social change, the impact of the mass media and the Internet on society, and social science methodology. He is the author of Time for Life: The Surprising Ways Americans Spend Time (1997), Measures of Political Attitudes (1999), and Measures of Personality and Social Psychological Attitudes (1991). He is the cofounder and editor of the journal IT & Society. His most recent articles on the Internet and time displacement have appeared in IT & Society and Social Science Computer Review. He has codirected the University of Maryland's summer “Webshop,” in which 50 top graduate students from around the country interact with leading Internet scholars.

    John Ryan is Professor in and Chairperson of the Department of Sociology at Virginia Tech. His research focuses on the sociology of work, organizations, and occupations, with an emphasis on mass media, industries, and emerging technologies. He also has a long-term interest in the digital transformation of culture industries, with a particular focus on copyright law. His most recent book is Media and Society: The Production of Culture in the Mass Media (with William Wentworth).

    Saskia Sassen is Ralph Lewis Professor of Sociology at the University of Chicago. She is the author of several books, most recently Guests and Aliens (1999), and is the editor of Global Networks/Linked Cities (2002). She chairs the newly formed Information Technology, International Cooperation, and Global Security Committee of the Social Science Research Council for the United States.

    Steven M. Schneider is Associate Professor of Political Science at the State University of New York Institute of Technology and Codirector of WebArchivist.org. His research focuses on the impact of the Internet on political and social life as well as on the development of systems to identify, collect, analyze, and access large-scale archives of born-digital materials for scholarly study. His recent research has examined the emergence of online structures for political action on campaign Web sites in the United States and the potential for expanding the public sphere through Internet-based persistent conversation.

    Leslie Regan Shade is Assistant Professor in the Department of Communication at Concordia University in Montreal, where her research and teaching focus on the social, ethical, and policy aspects of information and communication technologies. She is the author of Gender and Community in the Social Construction of the Internet (2002) and is the coeditor of Mediascapes: New Patterns in Canadian Communication (2002), Civic Discourse and Cultural Politics in Canada (2002), and E-Commerce vs. E-Commons: Communications in the Public Interest (2001). Her current research focuses on how children and youth are using the Internet in their homes, and she is preparing a book on Internet policy.

    David Silver is Assistant Professor in the Department of Communication at the University of Washington. His research interests focus primarily on the intersections among computers, the Internet, and contemporary American cultures. Since 1996, he has been building the Resource Center for Cyberculture Studies, an online, not-for-profit organization whose purpose is to research, study, teach, support, and create diverse and dynamic elements of cyberculture.

    David Stark is Arnold A. Saltzman Professor of Sociology and International Affairs at Columbia University, where he directs the Center on Organizational Innovation, and is External Faculty Member of the Santa Fe Institute. He is a major contributor to the new economic sociology and examines problems of worth and value in various organizational contexts. He is currently studying the coevolution of collaborative organizational forms and interactive technologies. His current research in Eastern Europe includes a multi-country study of how nongovernmental organizations use new information technologies as well as a longitudinal analysis of the patterns of ownership and organizational change among the largest 1,800 Hungarian enterprises during the past decade of transformation.

    Doreen Starke-Meyerring is Assistant Professor in the Department of Communication at McGill University. She received her Ph.D. from the Department of Rhetoric at the University of Minnesota. Her research focuses on the intersections between culture and rhetoric on and about the internet, especially in the context of higher education. She is a coauthor of Partnering in the Learning Marketspace (2001).

    Jennifer Stromer-Galley is Assistant Professor in the Department of Communication at the University at Albany, State University of New York. She conducted her research on Internet voting as a doctoral candidate at the Annenberg School for Communication at the University of Pennsylvania. Her research interests include the uses of communication technology and implications for democratic practice, and her current work investigates the motives people have for using political chat spaces online. Her recent publications can be found in the Journal of Communication, Javnost/The Public, and Journal of Computer-Mediated Communication.

    James Witte is Associate Professor in the Department of Sociology at Clemson University. His ongoing research focuses on ways of using the World Wide Web to collect survey data and on similarities and differences between society online and society offline. His other research area builds on his dissertation, Labor Force Integration and Marital Choice, and examines different ways in which labor market forces construct and define individual identity.

    Nathan Wright is completing his Ph.D. in sociology at Northwestern University. His main areas of interest include religion, consumption, and popular culture. He is the coauthor of a paper titled “Cowbirds, Locals, and the Dynamic Endurance of Regionalism” (with Wendy Griswold).
