
Content Filtering

From the beginning, the Internet was designed to make information more accessible. We can now obtain incredible amounts of data at any time of day, from almost any place in the world. But open access to all information can be problematic, especially when it comes to obscene or offensive materials. One way to prevent children from accessing objectionable material online is the use of content-filtering devices. These tools, which can be software- or hardware-based, can be used to screen and block content that includes particular words or images. Content-filtering devices are a comparatively new development in the history of the Internet, and their use remains complex from a technical perspective and controversial from a legal one.

Content filters restrict what users may view on their computer or television screen. Programs such as Cybersitter™, NetNanny™, and CyberPatrol™ screen Web pages and email messages for category-specific content. For example, if a parent does not want a child to be able to retrieve pages containing full nudity, they can select the “no full nudity” option in a content filtering program.

Once a user sets up a content-filtering program to restrict access to objectionable material, the program works in two distinct ways when an Internet connection is made. First, it checks to make sure the site is not on the software company's “blocked” site list. Second, it previews incoming pages and email by scanning them against an objectionable “buzzword list.” If the site is listed in either of those databases, it will not be displayed on the screen; instead, a page will appear notifying the user that the site or message is blocked.
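The two-stage check described above can be sketched in a few lines of code. This is a minimal illustration only, not any vendor's actual implementation; the site list, buzzword list, and function name are all hypothetical.

```python
# Hypothetical blocked-site list and buzzword list for illustration.
BLOCKED_SITES = {"example-blocked.test"}
BUZZWORDS = {"objectionable", "offensive"}

def is_allowed(hostname: str, page_text: str) -> bool:
    """Return True if the page may be displayed to the user."""
    # Step 1: check the hostname against the "blocked" site list.
    if hostname in BLOCKED_SITES:
        return False
    # Step 2: scan the incoming text against the buzzword list.
    words = {w.strip(".,!?\"“”").lower() for w in page_text.split()}
    if words & BUZZWORDS:
        return False
    return True
```

A real filter would match URLs and phrases far more carefully, but the control flow, a list lookup followed by a content scan, is the same as the article describes.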

The blocked and buzzword lists themselves are created in two ways: human review and automated selection. Companies that develop content-filtering software maintain staffs of reviewers who scan the Internet for objectionable sites. The sites are then placed into different categories in the blocked list database. That way, if a user has selected not to view sites related to alcohol or drugs or cults, the software will automatically load the correct category sets from the database.
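The category-based loading described above might look like the following sketch. The category names and database structure are assumptions made for illustration.

```python
# Hypothetical blocked-list database keyed by content category.
BLOCKED_DB = {
    "alcohol": {"beer-ads.test"},
    "drugs": {"pill-shop.test"},
    "cults": {"cult-site.test"},
}

def build_blocklist(selected_categories):
    """Union the site sets for every category the user opted to block."""
    blocked = set()
    for category in selected_categories:
        blocked |= BLOCKED_DB.get(category, set())
    return blocked
```

When a parent selects, say, the alcohol and drugs categories, the software loads only those site sets, leaving other categories unblocked.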

Such a system is not foolproof. The World Wide Web is growing much faster than the software companies can review it, and it is only logical that the review process relies at least in part on automation. It would be nearly impossible for a team of human reviewers to determine what is and is not objectionable on the Web in every category. Moreover, today's safe Web site might be tomorrow's top porn website, or vice-versa. As a result, even if there were enough reviewers to catalog the entire Web, the blocked list would be out of date by the time they finished.

Sometimes, acceptable sites get wrongly labeled as objectionable. This results in frustration and anger, especially on the part of the Webmaster of the allegedly objectionable site. Some sites supplying information about breast cancer, for example, might be blocked if the word “breast” appears on a buzzword list. But the more dangerous problem, according to opponents of content-filtering programs, who often call them “censorware,” is that sites are sometimes blocked for apparently political reasons. For example, http://Peacefire.org, a site that opposes content filters, is often blocked by those same content filters, and select political Web sites are also blocked.
