Access Technologies: DSL and Cable [Secure eReader]
eBook by James H. Green
eBook Category: Technology/Science
eBook Description: Time-pressed executives want to learn more, but can be intimidated by jargon-heavy, techno-speak books. Access Technologies: DSL and Cable uses case histories to introduce today's key technologies, explain how they work, and show their use in real-life situations--all in an easy-to-read style that doesn't sacrifice technical content.
eBook Publisher: McGraw-Hill Companies/McGraw-Hill, Published: 2002
Fictionwise Release Date: September 2002
Introduction to Access Technologies
Telecommunications networks have made some remarkable strides in the last few decades. Fiber-optic technology has revolutionized information transport, providing enormous amounts of high-quality bandwidth that the older transcontinental microwave network could not support. High-speed routers and switches and new protocols have brought the Internet to the masses, making it possible to obtain information and communicate anywhere in the world for a flat monthly fee. Despite these advances, a major bugbear remains. The high-speed backbone network is accessed over a copper cable local loop that was designed for a nineteenth-century network. To be sure, copper wire quality increased significantly during the twentieth century, and local loop technology improved, but local access remains the chokepoint for the vast majority of Internet users.
From the start, data networks were designed around the voice-frequency circuit because nothing else was available. Older telecommunications systems had a high error rate because of noise that came from a variety of sources such as clattering switches and relays, atmospheric conditions, and technician activity. Since noise was an analog phenomenon induced into analog circuits, the longer the path, the higher the noise level, and the greater the error rate. Although a high noise level is annoying to voice sessions, it is fatal to data, so when fiber optics came along to replace the analog microwave network that laced the country, it brought a revolution in transmission quality. No longer was bandwidth constrained by the channelized voice circuits in the wide area. Common carriers and large users served by fiber optics could now obtain an ample supply of digital capacity in whatever bandwidth they chose.
The irony of the fiber-optic revolution is that, except for users large enough to justify the expense of bringing fiber directly to their premises, bandwidth is still constrained by the capacity of a twisted pair of copper wires. The reason is that the local architecture was designed long before any potential for broadband data communications existed and the cost of changing it is higher than the revenue potential can support. The wide area network is concentrated into a backbone of shared switches and circuits, fanning out to millions of dedicated connections to every business and household in the developed world. Not only are the multipliers huge, but also these cables must pass through some expensive real estate.
Ideally, service providers would bring fiber optics to every business and residence, but the cost of digging up the streets is enormous and the only way it can be justified is by bringing services for which subscribers will pay enough to justify the cost. Telephone service is a given -- a lifeline service that few can do without. Beyond that are entertainment services, most of which ride on coaxial cable today. The community antenna television (CATV) providers have already invested the capital to bring a broadband channel to more than 80 percent of the households. Were either telephone or CATV to start from scratch today, the billions that were invested in twisted-pair wire and coax would instead be invested in fiber optics, but replacement is an expensive proposition.
Only in the past decade has a third service emerged for which subscribers are willing to pay: information. Before the advent of the Internet, it was difficult to conceive that information would become such an important commodity, but today access to the Internet is a third revenue stream that has important implications for the future. The issues are clear to the service providers. Large business users require fiber optics because no other medium can support their bandwidth requirements. Therefore, not only the incumbent local exchange carriers (ILECs, i.e., the traditional telephone companies), but also competitive local exchange carriers (CLECs) and competitive access providers (CAPs), are bringing fiber to those users with enough bandwidth demand to justify it.
That leaves the small businesses and residences, which is where the real multipliers are. The fact is that the copper wire and coaxial cable plant is already in place and the providers don't expect enough revenue to justify replacing it. Therefore, they are deploying methods of extending the life of their existing plant. The ILECs are applying digital subscriber line (DSL) technologies to their twisted-pair wire. CATV providers are converting their one-way systems, which were originally designed to deliver entertainment services, to two-way systems. Wireless providers plan to bypass wired alternatives altogether by using radio frequency devices. They have already made inroads into the CATV market with direct broadcast satellite services. Wireless providers such as Teligent and Winstar have made broad forays into the voice and data market, but have encountered financial problems that tend to dampen the enthusiasm of future investors. These three classes of service providers are eyeing each other's markets hungrily. Satellite providers are including local channels to woo CATV customers. Cable companies are preparing to offer telephone service, and local exchange carriers (LECs) are positioning themselves to deliver video on demand (VOD). All three offer Internet access, either by providing their own service or as common carriers for Internet service providers (ISPs).
For larger businesses, Internet access has become a necessity that falls just behind telephone service in importance. These companies and agencies require full-time connections to the Internet. In addition, most multilocation organizations have a wide area network to tie all sites together for access to corporate databases and e-mail. Residential subscribers and small businesses, on the other hand, cannot justify the cost of broadband access that most corporations enjoy, so they are left with dial-up access. Once users become accustomed to high-speed access at the office, however, the comparatively interminable waiting time of dial-up access becomes so painful that they are willing to pay for a better way.
Better alternatives are becoming available, but it is difficult to know what to believe. Service providers' advertisements often make extravagant claims of access speed based on idealized conditions that most users will not experience. Horror stories of lengthy delays and inept technicians abound, many of which are unfortunately true. The objective of this book is to provide users with information about the various access technologies, what to expect from them, and where they fit. Let's begin by looking at the default method that most users employ, the public switched telephone network (PSTN).
The Public Switched Telephone Network
From its inception in the nineteenth century, the PSTN has had a simple and straightforward architecture. Subscribers are connected by copper wire to a central switching system located in a building known as the central office (CO). The CO is also known as a serving wire center (SWC), so called because all of the cables from the subscribers route to this center. The CO houses one or more switching systems. In the early days of the telephone these were manually operated, but today's switching systems use computer-driven electronic switches. Older systems are analog, but analog technology is obsolete and is being replaced by digital switches as economics permit. Large metropolitan areas have multiple wire centers that are connected by groups of local trunks. A trunk is any circuit between switches. All users share trunks and the central switching fabric within the CO. These shared pathways are engineered to a low probability of blockage, but calls may be blocked during unusual calling peaks such as disasters. If a call cannot be completed because of blockage, the network returns a fast busy signal.
This point is key to understanding access technologies: the bandwidth restrictions in the PSTN are not in the local loop. The switching and trunking networks limit the bandwidth to that needed for a voice session. Except for longer telephone loops, the local loop has far more bandwidth than a voice session requires.
The local network connects to the long-haul network over access trunks. The interexchange carriers (IXCs) provide trunks from their switching systems to the LECs and interconnect their switches with intertoll trunks. Today, virtually all of these trunks ride over fiber-optic facilities. For more information on how the optical network functions, refer to the I-book Optical Networking, available at http://shop.mcgraw-hill.com/cgi-bin/pbg/indexebooks.html. The optical backbone provides a high-quality facility with low error rates that is used by voice and data alike.
Before optical networking became prevalent, those constructing data networks had little choice but to obtain analog circuits from the public telephone network. These circuits were identical to telephone circuits except that they were not switched. Instead, they were connected directly to form dedicated or private line circuits. In prefiber days the circuits were derived over analog microwave or coaxial cable. In either case, for data the error rate was high and the bandwidth was constrained by the voice circuit, which is nominally 0 to 4 kHz -- actually about 300 to 3300 Hz. Into this network the Bell System began to deploy digital toll switches, converting the analog circuits to digital circuits through a device known as a channel bank. Gradually, as fiber optics replaced the microwave and as competitive carriers entered the picture, channel banks gave way to direct digital connections.
For voice connections, the all-digital network resulted in a dramatic increase in quality. With analog circuits, noise is cumulative. As amplifiers along the way boost the signal, they also increase the noise. Digital circuits are periodically regenerated, which keeps noise at a low level. Coupled with the immunity to interference that is inherent in fiber optics, digital circuits ensure voice signal quality. They also lower the error rate for data to the point that errors may occur only once in a trillion bits. Digital circuits also increase the bandwidth by a factor of three or four compared to the maximum data capacity of an analog circuit, but that is not enough for broadband applications such as the Internet. The PSTN is built on a time-division multiplexing (TDM) model. The lowest level in this model, a DS-0, operates at 64 kbps, which is the widest bandwidth the PSTN is designed to switch. If more bandwidth than that is needed, say for a conference-quality videoconference, it is necessary to bond multiple channels together through inverse multiplexing.
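The arithmetic behind the DS-0 and inverse multiplexing is simple enough to sketch. The following illustration assumes the standard PCM sampling parameters (8,000 samples per second, 8 bits per sample) and a 384-kbps videoconference as the example bandwidth target:

```python
# A DS-0 is one voice channel: 8,000 samples per second at 8 bits
# per sample, the standard PCM encoding used throughout the PSTN.
SAMPLES_PER_SEC = 8_000
BITS_PER_SAMPLE = 8

ds0_bps = SAMPLES_PER_SEC * BITS_PER_SAMPLE
print(ds0_bps)  # 64000 bits/s -- the widest channel the PSTN switches

# Inverse multiplexing: a 384 kbps conference-quality videoconference
# needs several DS-0s bonded into a single logical pipe.
target_bps = 384_000
channels_needed = -(-target_bps // ds0_bps)  # ceiling division
print(channels_needed)  # 6 bonded DS-0 channels
```

The inverse multiplexer splits the data stream across the bonded channels at one end and reassembles it in order at the other.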
The problem is further aggravated by the characteristics of the local loop. The cable plant in the telephone network was designed to support analog telephones. Most of the intelligibility in the human voice is contained within the narrow pass band of 0 to 4 kHz. The local telephone networks in the world were designed at a time when either a private entity such as the Bell System and independent telephone companies in North America or the postal telephone and telegraph (PTT) agencies in the rest of the world owned everything including the telephone set. These entities had the objectives of minimizing investment and controlling maintenance cost. Therefore, until recently with the advent of speed dial and analog display telephones, all of the intelligence in the network resided in the CO.
Even as the world has converted to digital, additional shortcomings of the PSTN remain. For one thing, the digital circuit hierarchy does not scale well. Voice circuit bandwidth is 64 kbps. The next step up from a single digital channel is T1/E1, which has 24 channels in North America and Japan and 30 channels in most of the rest of the world, with nominal transmission speeds of 1.5 and 2.0 Mbps, respectively. The next step up the hierarchy, T3/E3, has transmission speeds of 45 and 34 Mbps. In the wide area, users can obtain fractional T1/E1 or T3/E3, but these are not generally available in the local loop. The problem is further aggravated in North America and Japan by the fact that 64 kbps is not available in the local loop. For reasons we will explain later, a digital local loop in countries that use the T1 multiplexing scheme has only 56 kbps of bandwidth.
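The nominal rates quoted above, and the reason a T1-country local loop yields only 56 kbps, follow directly from the frame structure. A brief sketch of the arithmetic:

```python
# Nominal line rates of the digital hierarchy described above.
DS0 = 64_000  # bits/s per voice channel

# T1: 24 channels of 64 kbps plus an 8 kbps framing overhead.
t1 = 24 * DS0 + 8_000
print(t1)  # 1544000 -> the nominal "1.5 Mbps"

# E1: 32 timeslots of 64 kbps, of which 30 carry voice
# (one slot for framing, one for signaling).
e1 = 32 * DS0
print(e1)  # 2048000 -> the nominal "2.0 Mbps"

# In T1 countries, robbed-bit signaling steals the least significant
# bit of every sixth frame, so only 7 bits per frame are reliably
# available for data: 7 bits x 8,000 frames/s.
usable_per_channel = 7 * 8_000
print(usable_per_channel)  # 56000 -> why a digital loop yields 56 kbps
```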
Until recently, these constraints affected only businesses. Residences needing access to information databases had no choice but dial-up modems. Modems have become increasingly sophisticated and quite inexpensive over the years. A V.90 modem is theoretically capable of up to 56 kbps downstream toward the user (upstream is limited to 33.6 kbps), but various constraints such as noisy loops and ISP configuration limit the actual downstream speed to something like 33.6 kbps in many cases. Dial-up modems do an excellent job of cramming a lot of information into a small amount of bandwidth, but the limits of the technology have been reached. No further advances are likely.
Dial-up modems did a reasonably good job when the typical database provider offered primarily textual information with limited graphical content, but the World Wide Web changed that forever. The essence of a Web page is bitmapped graphics, which transfer slowly across a dial-up connection. The telecommunications industry, aware of the deficiencies of the analog network, began to deploy digital circuits in the 1960s, and a few years later began to develop the integrated services digital network (ISDN) to provide end-to-end digital connectivity. Basic rate ISDN (BRI) provides two digital "bearer" channels of 64 kbps each. A separate signaling channel of 16 kbps completes the circuit, which is supported on a single copper cable pair. Primary rate ISDN (PRI) provides full T1/E1 bandwidth. ISDN is an improvement over an analog dial channel, but it has several drawbacks. First is the lack of general availability. Many small LECs don't provide ISDN, and for those that do, the cost is often high. Furthermore, the LECs do not support the service well. The same service representatives who handle plain old telephone service (POTS) often can't take orders for ISDN, and the customer either has to get assistance in applying for ISDN or learn a lot more about the service than most want to know.
Perhaps a greater drawback of BRI is the fact that it is just too slow in today's arena. Even with both channels bonded together to give a full-duplex bandwidth of 128 kbps, it still seems slow to people who are accustomed to high-speed Internet access. From the ISPs' viewpoint, any dial-up access has numerous drawbacks. Modems are expensive to own and troubleshoot. Furthermore, if the customer remains connected, as many do, ISDN or analog ports sit idle, perhaps waiting for an occasional event such as the arrival of an e-mail message. Business customers have an even greater problem. The need for Internet access for the small business is much the same as for residences except for the fact that many more users need simultaneous access. Therefore, dial-up access is feasible only for the smallest of businesses. Full-time access is essential for nearly any business that requires an Internet connection. In addition, businesses with branch offices almost invariably need data connectivity for exchanging files and e-mail.
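A rough transfer-time comparison makes the point concrete. The page size here is a hypothetical 1-megabyte download, and the calculation ignores protocol overhead, which makes real transfers slower still:

```python
# Transfer time of a hypothetical 1-megabyte Web page at the access
# speeds discussed above (8 bits per byte, overhead ignored).
PAGE_BITS = 1_000_000 * 8

rates_bps = {
    "V.90 modem (typical)": 33_600,
    "V.90 modem (best case)": 56_000,
    "ISDN BRI, both B channels": 128_000,
    "T1 dedicated circuit": 1_544_000,
}

for name, bps in rates_bps.items():
    print(f"{name}: {PAGE_BITS / bps:.0f} s")
```

The gap between roughly four minutes on a typical modem connection and a few seconds on a dedicated circuit is what makes dial-up feel interminable once a user has experienced broadband at the office.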
For years, the only alternative was to use dedicated circuits. Now, broadband connections to the home are becoming essential for many users. The emphasis has shifted from video to Internet access. In addition, many people are interested in working from home at least part of the time. Telecommuting requires access to files in the office computer, and using a remote-access server is a poor substitute for being online. The length of time it takes to download large files over a dial-up modem makes the remote-access server effective only for traveling employees who do not have a broadband alternative. In addition to providing universal access to corporate databases and connection to the Internet, many call centers would like home users to function as call center agents at least part of the time.
The motivation for full-time access to the Internet doesn't rest only with the subscribers. The LECs are also interested in getting Internet users off the dial-up network. The PSTN was originally engineered to support an average connection holding time of about four minutes. Calls into the Internet are far longer than that and have upset the engineering calculations that were used for designing the PSTN. Besides the revenue potential of providing Internet access via DSL, the LECs can benefit from transferring Internet connections from the dial network to dedicated circuits.
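The effect of longer holding times on the shared trunk network can be sketched with the Erlang B formula, the standard tool for sizing trunk groups to a target blocking probability. Offered traffic in erlangs is call rate times holding time, so a tenfold jump in holding time means tenfold traffic on the same trunks. The call volumes and session lengths below are illustrative assumptions, not figures from the text:

```python
# Erlang B blocking probability via the standard recurrence:
# B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))
def erlang_b(traffic_erlangs: float, trunks: int) -> float:
    """Probability that a call finds all `trunks` circuits busy."""
    b = 1.0
    for n in range(1, trunks + 1):
        b = (traffic_erlangs * b) / (n + traffic_erlangs * b)
    return b

TRUNKS = 100
calls_per_hour = 1_000  # assumed busy-hour call volume

voice = calls_per_hour * (4 / 60)      # 4-minute calls  -> ~67 erlangs
internet = calls_per_hour * (40 / 60)  # 40-minute calls -> ~667 erlangs

print(f"voice blocking:    {erlang_b(voice, TRUNKS):.4f}")
print(f"internet blocking: {erlang_b(internet, TRUNKS):.4f}")
```

With traffic like this, a trunk group engineered for near-zero blocking on short voice calls is overwhelmed when the same call volume arrives as long Internet sessions, which is exactly the upset to the engineering calculations the text describes.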
The Cable Network
The history of cable is only about one-third as long as that of the PSTN. The first application of cable was in Astoria, Oregon, in 1950. From that beginning, cable expanded to the point that it can now reach about 80 percent of the households in the United States, although fewer than that subscribe to the service. Early cable systems received distant channels off the air and repeated them over a coaxial network, or even open wire transmission facilities, to subscribers who were unable to receive television directly. That gradually gave way, however, to the concept of cable as a medium that provides premium entertainment channels.
The CATV equivalent of the LEC central office is the headend. Here, signals are picked off the air, downlinked from satellites, or originated locally. The channels are applied to modulators that link them to the cable at any compatible frequency. The conventional very-high-frequency (VHF) broadcast band in North America consists of 12 channels, each of which is 6 MHz wide. These cover the frequency range of 54 to 88 MHz and 174 to 216 MHz. The ultra-high-frequency (UHF) channels 14 to 69 cover the range of 470 to 806 MHz. Cable has an advantage over conventional television in that ranges that are assigned to other services can be used to support cable channels. Today's cable systems cover the range of 54 MHz to as much as 1 GHz, a bandwidth that can support more than 150 channels. Most systems stop at 750 MHz, however, reserving the top 250 MHz for future services.
Cable was originally constructed as a one-way broadcast medium. The transmission direction from the headend to the subscriber is referred to as the downstream direction. (The other direction, obviously, is upstream.) Amplifiers were one-way in earlier systems, but some systems were constructed with bidirectional amplifiers. In some cases the service providers did this to support enhanced services that required two-way communication. In other cases their franchises required them to provide two-way communication services for the municipalities in exchange for use of the public right-of-way. Modern two-way systems use the frequencies below 40 MHz for upstream communications, providing a downstream bandwidth that is far greater than upstream.
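The channel counts implied by this frequency plan are easy to verify. The 5-MHz lower edge of the upstream band below is a conventional figure for two-way cable plant, assumed here rather than stated in the text:

```python
# Channel arithmetic for the cable frequency plan described above.
CHANNEL_WIDTH_MHZ = 6  # North American television channel width

# Downstream: 54 MHz up to 750 MHz on most systems.
downstream_channels = (750 - 54) // CHANNEL_WIDTH_MHZ
print(downstream_channels)  # 116 six-MHz channels

# Systems built out to 1 GHz support "more than 150 channels."
full_spectrum_channels = (1000 - 54) // CHANNEL_WIDTH_MHZ
print(full_spectrum_channels)  # 157

# Upstream on a two-way system sits below 40 MHz; usable spectrum
# conventionally starts around 5 MHz.
upstream_mhz = 40 - 5
print(upstream_mhz)  # ~35 MHz upstream vs. roughly 700 MHz downstream
```

The asymmetry is stark: the upstream band is a small fraction of the downstream band, a design inherited from cable's one-way broadcast origins.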
Cable provides a broadband pipe that can potentially reach a high percentage of the population in most developed countries, and would appear to be an ideal medium for data communications. As we will discuss in Chaps. 5 and 6, however, the shared-medium nature of cable imposes some restrictions that are costly to overcome.
Wireless
Wireless has long played a vital role in telecommunications, primarily in the long-haul transmission network and in mobile applications. Satellite services once were -- and for some countries still are -- a way of bringing telecommunications services to locales that lack heavy enough demand to justify undersea cable. Point-to-point terrestrial microwave was once the primary means of intercontinental telecommunications service, and still is a convenient way of bypassing obstructions that are expensive to cross with cable. In the access market, however, wireless has been impaired by several factors. First is the fact that broadband demand didn't really develop in the local network until fiber optics was available, and the quality of fiber is so much higher that it is always preferred where economically feasible.
The most limiting aspect of wireless has been the lack of available frequencies. Microwave got its start from radar technologies that were first employed in World War II and then converted to commercial telecommunications service after the war. The lowest microwave bands were developed first because the technology was easy to use and the transmission characteristics are superior to those of the higher bands, where rain, fog, and obstructions can limit the usability of the medium. Microwave frequencies are a finite, limited resource, and competition for the available wireless spectrum is intense.
In recent years, the low-band microwave spectrum around 1.9 GHz has been released for telecommunications services. The common carrier personal communications service (PCS) occupies two sub-bands, with unlicensed spectrum between the two bands. One of the objectives of splitting the band in this manner is to enable transceivers to hop easily from the licensed PCS band to the unlicensed spectrum that is available for private wireless use. A single instrument can serve users in both bands, even enabling users to move from a private system to the public network without interrupting a session. This same spectrum plus others are available for public and private wireless use, including access services, as we will discuss in later chapters.
Wireless appears at first glance to be an ideal replacement for the telephone local loop. So far, however, most wireless local loop (WLL) application has been in developing countries. In the developed nations wired services are in place, and there has been little impetus to replace them.
Copyright © 2002 by The McGraw-Hill Companies, Inc.