At its inception, the telecommunications network relied on copper wire to carry information. But the bandwidth of copper is limited by its physical characteristics: as the frequency of the signal increases to carry more data, more of the signal's energy is lost as heat. Electrical signals in closely spaced wires can also interfere with one another, a problem known as crosstalk. The first coaxial cable communications system, installed in 1940, operated at 3 MHz and could carry 300 telephone conversations or one television channel. By 1975, the most advanced coaxial system had a bit rate of 274 Mbit/s, but such high-frequency systems required a repeater approximately every kilometer to strengthen the signal, making such networks expensive to operate.
It was clear that light waves could carry much higher bit rates without crosstalk. In 1957, Gordon Gould first described the design of the optical amplifier and of the laser, which Theodore Maiman first demonstrated in 1960. The laser is a source of light waves, but a medium was still needed to carry the light through a network. In 1960, glass fibers were in use to transmit light into the body for medical imaging, but they had high optical loss: light was absorbed as it passed through the glass at a rate of 1 decibel per meter, a phenomenon known as attenuation. In 1964, Charles Kao showed that to transmit data over long distances, a glass fiber would need a loss of no more than 20 dB/km. A breakthrough came in 1970, when Donald B. Keck, Robert D. Maurer, and Peter C. Schultz of Corning Incorporated designed a fused-silica glass fiber with a loss of only 16 dB/km. Their fiber was able to carry 65,000 times more information than copper.
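To see what these decibel figures mean, a short worked example helps (the decibel formula is standard; the figures are those quoted above). Attenuation in decibels relates input power to output power logarithmically:

$$ \text{loss (dB)} = 10 \log_{10}\!\left( \frac{P_{\text{in}}}{P_{\text{out}}} \right) $$

At the 1 dB/m loss of 1960-era medical fibers, only $10^{-20/10} = 1\%$ of the light survives 20 meters. Kao's 20 dB/km criterion permits that same 99% loss only after a full kilometer, and Corning's 16 dB/km fiber bettered it, passing about $10^{-1.6} \approx 2.5\%$ of the light per kilometer.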
The first fiber-optic system for live telephone traffic was installed in 1977 in Long Beach, California, by General Telephone and Electronics, with a data rate of 6 Mbit/s. Early systems used infrared light at a wavelength of 800 nm and could transmit at up to 45 Mbit/s with repeaters approximately 10 km apart. By the early 1980s, lasers and detectors operating at 1300 nm, where the optical loss is 1 dB/km, had been introduced. By 1987, such systems were operating at 1.7 Gbit/s with repeater spacing of about 50 km.[4]
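The quoted repeater spacings follow from a simple loss budget. Assuming the figures above, a 50 km span of 1 dB/km fiber attenuates the signal by

$$ 1\ \text{dB/km} \times 50\ \text{km} = 50\ \text{dB}, \qquad \frac{P_{\text{out}}}{P_{\text{in}}} = 10^{-50/10} = 10^{-5}, $$

so a receiver 50 km from the last repeater must detect a signal 100,000 times weaker than the one transmitted; beyond that distance the signal must be regenerated.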
Optical Amplification
The capacity of fiber-optic networks has increased in part through improvements in components, such as optical amplifiers and optical filters that can separate light waves into channels spaced less than 50 GHz apart, fitting more channels into a fiber. The erbium-doped fiber amplifier (EDFA) was developed by David Payne at the University of Southampton in 1986, using atoms of the rare-earth element erbium distributed through a length of optical fiber. A pump laser excites the erbium atoms, which then emit light at the signal's wavelength, boosting the optical signal. As network design shifted toward all-optical systems, a broad range of amplifiers emerged,[5] but erbium-doped amplifiers became the most commonly used means of supporting dense wavelength-division multiplexing systems.[6] EDFAs were so prevalent that, as WDM became the technology of choice in optical networks, the erbium amplifier became "the optical amplifier of choice for WDM applications."[7] Today, EDFAs and hybrid optical amplifiers are considered the most important components of wavelength-division multiplexing systems and networks.[8]
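The practical appeal of the EDFA is that its gain can offset an entire span of fiber loss without converting the signal to electronics. As an illustrative calculation with typical values not taken from the cited sources: standard single-mode fiber loses roughly 0.2 dB/km in the 1550 nm band where erbium amplifies, so an EDFA providing 20 dB of gain can offset about

$$ L = \frac{G}{\alpha} = \frac{20\ \text{dB}}{0.2\ \text{dB/km}} = 100\ \text{km} $$

of fiber, entirely in the optical domain.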
Wavelength Division Multiplexing
Using optical amplifiers, the capacity of fibers to carry information was dramatically increased with the introduction of wavelength-division multiplexing (WDM) in the early 1990s. AT&T's Bell Labs developed a WDM process in which a prism splits light into different wavelengths that can travel through a fiber simultaneously. The peak wavelengths of the beams are spaced far enough apart that the beams are distinguishable from one another, creating multiple channels within a single fiber. The earliest WDM systems had only two or four channels; AT&T, for example, deployed an oceanic 4-channel long-haul system in 1995.[9] The erbium-doped amplifiers on which such systems depended, however, did not amplify signals uniformly across their spectral gain region. As signals passed through successive amplifiers, slight gain discrepancies among the different frequencies accumulated into an intolerable level of noise, making WDM with more than four channels impractical for high-capacity fiber communications.
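For a sense of the channel spacing involved (a standard conversion, with typical values not drawn from the cited sources), a frequency spacing $\Delta\nu$ near a carrier wavelength $\lambda$ corresponds to a wavelength spacing of

$$ \Delta\lambda = \frac{\lambda^{2}}{c}\,\Delta\nu = \frac{(1550\ \text{nm})^{2}}{3 \times 10^{8}\ \text{m/s}} \times 50\ \text{GHz} \approx 0.4\ \text{nm}, $$

so channels spaced 50 GHz apart have peak wavelengths only about 0.4 nm apart, and an erbium amplifier's gain band of a few tens of nanometers can in principle hold dozens of such channels, provided the gain is uniform across all of them.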
To address this noise limitation, Optelecom, Inc. and General Instrument Corp. developed components to increase fiber bandwidth with far more channels. David Huber, head of Optelecom's Light Optics department, and Kevin Kimberlin co-founded Ciena Corp. in 1992 to design and commercialize optical telecommunications systems, with the objective of expanding the capacity of cable systems to 50,000 channels.[10][11] Ciena developed a dual-stage optical amplifier capable of transmitting data at uniform gain on multiple wavelengths and, with it, introduced the first commercial dense WDM system in June 1996. That 16-channel system, with a total capacity of 40 Gbit/s (2.5 Gbit/s per channel),[12] was deployed on the Sprint network, the world's largest carrier of internet traffic at the time.[13] This first application of all-optical amplification in public networks[14] was seen by analysts as a harbinger of a permanent change in network design, for which Sprint and Ciena would receive much of the credit.[15] Experts in advanced optical communication cite this introduction of WDM as the real start of optical networking.[16]
Capacity
The density of light paths from WDM was the key to the massive expansion of fiber-optic capacity that enabled the growth of the Internet in the 1990s. Since then, the channel count and capacity of dense WDM systems have increased substantially, with commercial systems able to transmit close to 1 Tbit/s of traffic at 100 Gbit/s on each wavelength.[17] In 2010, researchers at AT&T reported an experimental system with 640 channels operating at 107 Gbit/s, for a total transmission of 64 Tbit/s.[18] In 2018, Telstra of Australia deployed a live system transmitting 30.4 Tbit/s per fiber pair over 61.5 GHz of spectrum, equivalent to 1.2 million 4K Ultra HD videos being streamed simultaneously.[19] As a result of this ability to transport large traffic volumes, WDM has become the common basis of nearly every global communication network and thus a foundation of the Internet today.[20][21]

Demand for bandwidth is driven primarily by Internet Protocol (IP) traffic from video services, telemedicine, social networking, mobile phone use, and cloud computing. At the same time, machine-to-machine, Internet of Things (IoT), and scientific-community traffic require support for the large-scale exchange of data files. According to the Cisco Visual Networking Index, global IP traffic will exceed 150,700 GB per second in 2022. Of that, video content will account for 82% of all IP traffic, all transmitted by optical networking.[22]
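These headline figures are consistent with simple arithmetic. In the AT&T experiment, the 107 Gbit/s per-channel line rate includes forward-error-correction overhead of roughly 7%, leaving about 100 Gbit/s of payload, so 640 channels carry 640 × 100 Gbit/s ≈ 64 Tbit/s. The Telstra equivalence likewise follows if one assumes (a figure not from the cited source) about 25 Mbit/s per 4K Ultra HD stream:

$$ \frac{30.4\ \text{Tbit/s}}{25\ \text{Mbit/s per stream}} \approx 1.2 \times 10^{6}\ \text{simultaneous streams}. $$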
^ Agrawal, Govind P. (2002). Fiber-Optic Communication Systems. John Wiley & Sons.
^ Nemova, Galina (2002). Optical Amplifier. p. 139.
^ Ramaswami, R., and Sivarajan, K. (2001). Optical Networks: A Practical Perspective (2nd ed.). Elsevier, Philadelphia, PA. ISBN 0080513212, 9780080513218.
^ Gilder, George (December 4, 1995). "Angst and Awe on the Internet". Forbes ASAP.
^ Goldman Sachs (July 30, 1997). "Ciena Corporation: Breaking the Bandwidth Barrier". Technology: Telecom Equipment, US Research Report.
^ Cvijetic, Milorad, and Djordjevic, Ivan B. (2013). Advanced Optical Communication Systems and Networks. Artech House.
^ Zhou, X., et al. (2010). "64-Tb/s (640×107-Gb/s) PDM-36QAM transmission over 320km using both pre- and post-transmission digital equalization". Conference on Optical Fiber Communication/National Fiber Optic Engineers Conference (OFC/NFOEC), March 2010, San Diego, CA.