Commentary

The Packets Must Get Through

From the book "The Consequences of Net Neutrality Regulations on Broadband Investment and Consumer Welfare"

This essay originally appeared in “The Consequences of Net Neutrality Regulations on Broadband Investment and Consumer Welfare” published by the American Consumer Institute.

To the ears of the American consumer, a rule that would require phone, cable and wireless companies to treat all Internet and Web applications the same way, with no favoritism shown, might sound like a fair deal.

From its start, open access is what the Internet has been all about. Indeed, consumers should be able to access the Internet and Web applications they wish. Any individual, business or organization that wants to set up a Web presence, from a personal blog to a major e-commerce site, should face no barrier to reaching users.

No one wants to take the Internet’s resources or utility away. Yet the Federal Communications Commission’s (FCC) proposal to create a “non-discrimination” rule, which would come under the general heading of network neutrality, although intended to preserve robust, open and high-quality access to all Internet applications, stands to have the opposite effect.

The non-discrimination rule, if enacted, would prohibit telephone companies, cable companies, wireless companies and other Internet service providers (ISPs), the companies that built and own the local and long-distance networks that carry Internet traffic, from applying any technology, technique or software that would prioritize, organize or otherwise structure Internet traffic so that it is delivered faster, has a guaranteed level of quality, or is partitioned in such a way that it does not slow or impede other traffic. While the FCC’s Notice of Proposed Rulemaking on network neutrality, released October 22, 2009, would allow vaguely defined “reasonable” network management, the NPRM also stated “that a bright-line rule against discrimination… may better fit the unique characteristics of the Internet.”56

In support of the rule, FCC Chairman Julius Genachowski says non-discrimination was a founding principle of the Internet.57 To call it a “principle” is somewhat misleading. It is true that when network engineers developed the Internet Protocol (IP), they designed it to rely on the intelligence in the computers and routers at each end of the connection. That was because at the time, the late 1960s and 1970s, the telephone network itself had no intelligence to perform even the most basic quality and prioritization functions. Non-discrimination was a necessary condition of the early Internet, not a prescribed rule for how Internet transmission would always work.

The Internet: Then and Now

Today, 40 years after the first Internet connection was set up, network transmission technology is far different. The public communications network does hold the intelligence to improve, enhance and prioritize Internet traffic. In private networks, it already does. In wireless, the entire history of technology evolution is about finding ways to fit more data into a radio channel of fixed capacity. Some of these techniques grant transmission priority to certain applications over others, which is what allows data applications on devices like iPhones and BlackBerrys to work; they would likely be considered discrimination under the FCC’s new rule.

Second, and perhaps more important, today’s Internet applications are a far cry from the simple text characters transmitted at 300 bits-per-second (b/s) over those first connections.

Think of the ways you’ve used the Internet today. You’ve probably sent email, maybe with photos or lengthy documents attached. Perhaps you’ve made a clothing purchase, or paid your credit card bill. Maybe you’ve downloaded some music, or watched a video from YouTube, Netflix or Hulu.

Did you use your wireless phone to send a text message on your way to work? To update your picture on Facebook? To check on your fantasy football team? Your cell phone uses the Internet, too.

When you badged into your office, your building’s security system likely used the Internet to verify your employment status and let you in. In fact, your company’s entire security network, from video surveillance to fire alarms, probably uses the Internet, especially if it is spread over several buildings and locations.

Then there are all the unseen transactions that occur within the network itself. Search engines constantly crawl the Web collecting keyword data from Websites worldwide. When you perform a Web search, data from thousands of servers are instantly correlated, packaged and delivered to your desktop, with ad links that correspond to your search parameters. A Web-based financial transaction that occurs in seconds involves multiple links and data exchanges between you, the retailer, your bank, the retailer’s bank, a credit verification database and any other party with a stake in the transaction.

As you might imagine, all this adds up to an enormous amount of data moving across the network. Indeed, Bret Swanson and George Gilder have been tracking the growth of Internet traffic since early this decade. In January 2008, using data from Cisco Systems, the world’s leading supplier of Internet switches and routers, Swanson and Gilder reported that monthly Internet traffic in 2007 had reached 2.5 exabytes, or 2.5 quintillion bytes (2.5 x 10^18), up from approximately 1 exabyte in 2005. Cisco projected monthly Internet traffic would reach 5.5 exabytes by 2009 and 9 exabytes by 2011.58

While there is a vast amount of bandwidth capacity in the public network, vast does not mean unlimited. And while investment in infrastructure continues, the costly deployment of more physical facilities (fiber optics and cell antennas) should not be legally locked in as the only solution to growing bandwidth consumption.

Besides, construction of more physical facilities addresses only the congestion problem. You may indeed speed traffic by building more lanes, but the expanding diversity of Internet and Web applications creates quality requirements that can’t be solved by the addition of physical facilities alone.

This is where the non-discrimination principle of network neutrality would create massive problems for users and applications providers. In order for some applications to function correctly, their data may require special treatment as it crosses the network. This is especially true of video, which is both data-intensive (a 10-minute, low-resolution YouTube video can be 100 megabytes) and error-sensitive. In fact, enterprises that put a lot of video on their networks, such as in the building security example above, use techniques such as bandwidth management, partitioning and packet prioritization to make sure video is transmitted effectively yet does not interfere with the flow of mission-critical enterprise data. It’s troublesome that the FCC would prohibit in the public sphere techniques that are indispensable to the smooth operation of business networks.
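To make “partitioning” concrete, here is a minimal, purely illustrative sketch of the idea in Python. The link capacity, traffic-class names and shares are assumptions invented for the example; real enterprise equipment implements this sort of policy in hardware and with far more sophistication.

```python
# Illustrative sketch of bandwidth partitioning: each traffic class is
# guaranteed a share of a fixed link, so heavy video cannot starve
# mission-critical data. All numbers and class names are assumptions.

LINK_CAPACITY_MBPS = 100                                    # hypothetical enterprise link
SHARES = {"video": 0.60, "critical": 0.30, "bulk": 0.10}    # assumed policy

def allocate(demand_mbps):
    """Give each class its reserved share first, then spread any leftover capacity."""
    allocation = {}
    for cls, share in SHARES.items():
        reserved = LINK_CAPACITY_MBPS * share
        allocation[cls] = min(demand_mbps.get(cls, 0), reserved)
    leftover = LINK_CAPACITY_MBPS - sum(allocation.values())
    for cls in SHARES:
        unmet = demand_mbps.get(cls, 0) - allocation[cls]
        extra = min(unmet, leftover)
        allocation[cls] += extra
        leftover -= extra
    return allocation

# Video demand far exceeds the link, yet critical traffic still gets its full 25 Mb/s.
print(allocate({"video": 500, "critical": 25, "bulk": 40}))
```

The point of the sketch is simply that reserving capacity lets heavy video and mission-critical traffic coexist, which is the behavior enterprises rely on today.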

Packets and Prioritization

Because packet handling is key to understanding the unintended consequences network neutrality presents, let’s examine what we mean by data packets and packet prioritization.

The way the Internet Protocol is engineered, data (all those ones and zeros) travels the network in packets. The term is apropos. Think about the way you send a letter. You write your message on a piece of stationery and place it in an envelope, which you then address and mail. The post office uses the information on the envelope to route your letter to the intended recipient. If there is a problem, the letter is returned to sender, using the return address, also written on the envelope.

Data packets work the same way. A string of data is bundled into an electronic packet. The packet’s envelope, or header, contains the destination information, in the form of an IP address. The network routers read this information and send the packet to its destination. If something goes wrong and the packet can’t be delivered, the network signals the transmitting end, akin to a “return to sender.” The transmitting computer or router sends the packet again and continues to do so until the machine at the other end acknowledges receipt.
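As a purely illustrative aside, the sketch below models a packet the way this analogy describes it: a small header, the electronic envelope, wrapped around the data, plus a sender that retransmits until the far end acknowledges receipt. The field names and the transmit() callback are simplifications invented for the example; they are not the actual IP header layout or any particular networking API.

```python
from dataclasses import dataclass

# Toy model of the "envelope" analogy. Field names are simplified for
# illustration and do not match the real IP header format.
@dataclass
class Packet:
    src: str        # return address: where to report problems
    dst: str        # destination address the routers read
    seq: int        # this packet's position within the whole message
    payload: bytes  # the data itself

def send_until_acknowledged(packet, transmit, max_tries=5):
    """Resend a packet until the receiving end acknowledges it (or we give up)."""
    for _ in range(max_tries):
        if transmit(packet):        # transmit() is a stand-in for the network;
            return True             # it returns True when an acknowledgment arrives
    return False
```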

The main difference is that on the Internet, an application, be it an email, image or video, comprises thousands, if not millions, of packets. When we mail a letter, we can send the whole message in one envelope. On the Internet, it is more akin to sending your letter one word at a time, leaving it up to your recipient to wait for all the envelopes to arrive, then to assemble the message. And, as with the post office, on the Internet packets may not arrive in the order they were sent. As a sender, you have to rely on the intelligence of your recipient to reorder the packets and reconstruct your message. If one packet is lost or damaged on the way, your recipient may have to deduce the missing information. This, of course, adds time to the ultimate delivery and communication of your message. In Internet lingo, this delay is called latency.
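Continuing the one-word-per-envelope analogy, here is a small, purely illustrative Python sketch of what the receiving end has to do: packets may arrive in any order, and sequence numbers let the recipient put the message back together.

```python
import random

# Toy demonstration of out-of-order delivery and reassembly. The "packets" here
# are just (sequence number, word) pairs; the shuffle stands in for the network.
def packetize(message):
    return [(seq, word) for seq, word in enumerate(message.split())]

def reassemble(packets):
    return " ".join(word for _, word in sorted(packets))  # reorder by sequence number

packets = packetize("the packets must get through")
random.shuffle(packets)       # the network may deliver packets in any order
print(reassemble(packets))    # prints: the packets must get through
```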

This is why Internet transmission is often referred to as “best effort.” It is practically the same principle as first-class mail. Computers send out data packets; the packets are transmitted across the network and arrive at their destination essentially when they get there.

Best effort is not as big a problem for email, documents and small files, which can be assembled quickly. These applications can better tolerate latency and errors.

Other applications are far more sensitive to errors and latency. Take video streaming, for example. Video not only consists of much more data than most files, it also has to be delivered in the right order and needs to be assembled quickly.
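One simple way to meet those requirements, sketched below purely for illustration, is a priority queue: latency-sensitive packets are sent ahead of bulk transfers waiting in the same buffer. The traffic classes and priority values are assumptions made up for the example; real routers use more elaborate schemes such as weighted fair queuing.

```python
import heapq

# Minimal strict-priority scheduler: lower priority number goes out first.
# Classes and values are invented for illustration, not a standard.
PRIORITY = {"video": 0, "voice": 0, "bulk": 2}

class Scheduler:
    def __init__(self):
        self._queue = []
        self._counter = 0          # tie-breaker keeps arrival order within a class

    def enqueue(self, traffic_class, packet):
        prio = PRIORITY.get(traffic_class, 1)
        heapq.heappush(self._queue, (prio, self._counter, packet))
        self._counter += 1

    def dequeue(self):
        return heapq.heappop(self._queue)[2] if self._queue else None

s = Scheduler()
s.enqueue("bulk", "download chunk 1")
s.enqueue("video", "video frame 1")
s.enqueue("bulk", "download chunk 2")
print(s.dequeue())   # the video frame jumps ahead of the waiting downloads
```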

Almost everyone has experienced freeze-ups while watching Internet video. These can be annoying with free services such as YouTube and Hulu. Imagine paying $10 to $20 for a streaming video only to have it fail partway through.

Latency is a major issue in gaming. If you’ve played Resident Evil online, you know how frustrating it is to be killed by an oncoming zombie while you’re firing away with your mouse yet seeing no result on screen.

Fortunately, the post office does not employ the non-discrimination principle. In mail or shipping, senders can pay more for one- or two-day delivery. They can request a return receipt. They can insure valuable items against loss. All of these come at an extra cost, but they are not seen as unfair to individuals who use regular mail, nor do “fast lane” services interfere with standard delivery.

Under the FCC’s non-discrimination rule, there would be no ability for providers of sophisticated applications to pay a premium to guarantee a higher level of performance. Nor could service providers charge the companies that use immense amounts of bandwidth (search engines, studios, media companies, peer-to-peer services) fees that would reflect the cost of the added management strain they place on the network. While the motivation is preservation of an open Internet, the outcome would be the opposite. The rules would demand that ISPs follow 40-year-old data communications architectures that have already been surpassed. The result would be an expensive, slow, poorly performing Internet unable to support bandwidth-rich applications.

On the other hand, there is every sign that forgoing Internet regulation would lead to the development of business models and market-based solutions that would create an environment where all types of applications could be supported and delivered, getting the network management support they need without interfering with applications that work just fine with best effort.

The FCC argues that the market alone cannot manage competing interests when it comes to applications management on the Internet. Yet there has been no pattern of abuse. The non-discrimination rule comes in response to a single incident where a service provider used network technology to manage the way a third-party application worked. In October 2007, Comcast, the nation’s largest cable company, confirmed reports that it was intentionally slowing the transfer of voluminous video files being exchanged via BitTorrent, one of many so-called peer-to-peer (P2P) applications that allow users to search for and exchange movies and TV shows between and among their own PCs. BitTorrent software is designed to set up as many simultaneous connections as possible between the user’s PC and the other computers sharing a file (the more connections, the faster the transmission). To keep BitTorrent users from flooding the network, especially at peak times, Comcast introduced software that limited the number of simultaneous connections the BitTorrent software could set up. BitTorrent users could still exchange files, but the rate of transfer was slowed. Comcast argued this network management decision was made to ensure service quality for the vast majority of Comcast Internet customers, whose high-speed connections would otherwise be slowed by the amount of bandwidth P2P applications were gobbling up. Even cable industry critics such as George Ou, writing on ZDNet, conceded Comcast was within its rights to do so:

We can think of it as a freeway onramp that has lights on it to rate limit the number of cars that may enter a freeway… If you didn’t have the lights and everyone tries to pile on to the freeway at the same time, everyone ends up with worse traffic.59
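The metering Ou describes can be pictured with a small, purely hypothetical Python sketch: each subscriber may hold only so many simultaneous connections open, and requests beyond the cap wait their turn, much like cars at a ramp light. The cap and the class below are invented for illustration; they are not a description of Comcast’s actual software.

```python
from collections import defaultdict, deque

MAX_SIMULTANEOUS = 8          # assumed per-subscriber cap, chosen for illustration

class ConnectionMeter:
    """Hypothetical 'onramp light' for peer connections."""
    def __init__(self, limit=MAX_SIMULTANEOUS):
        self.limit = limit
        self.active = defaultdict(set)     # subscriber -> connections currently open
        self.waiting = defaultdict(deque)  # subscriber -> requests deferred at the ramp

    def request(self, subscriber, peer):
        if len(self.active[subscriber]) < self.limit:
            self.active[subscriber].add(peer)
            return True                    # connection allowed immediately
        self.waiting[subscriber].append(peer)
        return False                       # deferred until an earlier connection closes

    def close(self, subscriber, peer):
        self.active[subscriber].discard(peer)
        if self.waiting[subscriber]:       # let the next waiting request onto the freeway
            self.active[subscriber].add(self.waiting[subscriber].popleft())
```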

What’s more, Comcast and BitTorrent negotiated an amicable solution that respected each other’s interests. Government hand-wringing over network neutrality has gone on for at least four years, yet the one instance of a dispute between a service provider and an applications provider over applications prioritization was resolved by market forces within weeks.60

The necessity of the FCC’s network neutrality rules is questionable in general, but its non-discrimination mandate is downright counterproductive. Consumers will be better off without it. In summary, here are some reasons why:

Regulation will increase consumer costs

The cost of the management required to support sophisticated applications should be borne by the companies that produce, market and profit from these applications. Network neutrality, especially the non-discrimination principle, will force service providers to shift those costs onto the public in the form of higher broadband fees. Even network neutrality proponents, such as Computerworld’s Mark Gibbs, admit this. “Now the downside: We’re going to have to pay more. There’s little doubt that regulated Internet service will probably be more expensive but that’s the consequence of doing what’s right for our society.”61

Gibbs worries that if phone and cable companies can charge applications providers for prioritization and management, it will stifle innovation. That is not true. Fee-based network management services would, however, force entrepreneurs to develop business plans that account for the full cost of delivering service, a disciplined approach that is much more likely to yield long-run success all around. The network neutrality alternative sets up a dubious scheme that permits a business to privatize its gains from Internet commerce, while socializing its costs. It’s hard to see what’s “right for our society” about this.

There will be no cost check on commercial bandwidth consumption

When commercial bandwidth costs are socialized, that is, transferred to consumers, businesses have no incentive to limit their exploitation of Internet capacity. The exaflood will only get worse as the largest users, immune from paying the cost of their consumption, grab as much bandwidth as they can. So in addition to paying more, as net neutrality enthusiast Gibbs states, consumers will find the Internet a slow, frustrating experience. Wealthier consumers may have the option of purchasing higher-bandwidth services, such as fiber to the home, but with no check on the supply side, even that capacity stands to be consumed.

Smaller players will be hurt, not helped

The final irony is that non-discrimination is supposed to protect the proverbial “little guy.” Yet, with no partitioning or prioritization available to deep-pocketed companies, the smaller operations will be run off the road first. It would make sense for Fox or Universal to purchase a “fast lane” for video feeds that are routinely downloaded by millions of users. The quality of this video might be better than what a local blogger can afford, but then again Fox and Universal can afford many things the lone blogger can’t. The point of the open Internet isn’t what the small Web site can afford; it’s whether the small Web site can be heard. When heavy traffic can be prioritized and partitioned, the small site gets through.

Many innovative applications will never be developed

Policymakers argue that non-discrimination in network management is needed to ensure the Internet remains an incubator for innovation. To counter this, let’s return to the post office analogy. Those who have visited Seattle may have come across the Pike Place Fish Market, where each morning you can buy Alaskan king salmon that had been swimming the icy Pacific waters just hours before. At one time, if you wanted the best fish in the Northwest, you had to live in the Emerald City. Today, because of overnight shipping, Pike Place Fish Market can deliver anywhere in the U.S.

For Pike Place Fish Market, normal shipping (i.e., “best effort”), which can take three to seven days, was never an option, for obvious reasons. Its access to the national market, and the chance for consumers in Texas to buy superior seafood fresh from the catch, would not have been possible without a premium delivery option.

The Internet works the same way. We can already name many existing services, like video and gaming, that would benefit from a fast lane. What we don’t yet know are the applications and services that will be created because there is a fast lane. Regulation closes these opportunities off.

For all the talk about preserving a free and open Internet, network neutrality’s non-discrimination rule would do neither. As bandwidth consumption increases almost geometrically, today’s Internet needs commercial options that include bandwidth optimization, applications partitioning and packet prioritization. If the Internet’s going to work, the packets must get through.

Notes:
56 Federal Communications Commission, “Notice of Proposed Rulemaking: In the Matter of Preserving the Open Internet; Broadband Industry Practices,” GN Docket No. 09-191, WC Docket No. 07-52, Oct. 22, 2009.
57 Julius Genachowski, “Preserving a Free and Open Internet: A Platform for Innovation, Opportunity, and Prosperity,” speech to Brookings Institution, Sept. 21, 2009.
58 George Gilder and Bret Swanson, Estimating the Exaflood, Discovery Institute, January 2008. Available at http://www.discovery.org/scripts/viewDB/filesDB-download.php?command=download&id=1475.
59 George Ou, “A Rational Debate on Comcast Network Management,” ZDNet, Nov. 6, 2007, available at http://blogs.zdnet.com/Ou/?p=852.
60 The Comcast-BitTorrent example, along with the two other cases on which the FCC is building its case for Internet regulation, is discussed in depth in my policy study “The Internet is Not Neutral (and No Law Can Make It So),” Reason Foundation, May 2009.
61 Mark Gibbs, “Network Neutrality: Doing the Right Things,” Computerworld, Oct. 1, 2009. Available at http://www.computerworld.com/s/article/9138792/Network_neutrality_Doing_the_right_things?taxonomyId=16&pageNumber=2.

Steven Titch is a policy analyst at Reason Foundation.