The 160-mile download diet: Local file-sharing drastically cuts network load

Ever since Bram Cohen invented BitTorrent, Internet traffic has never been the same. Whether that's a good thing or a bad thing, however, is a matter of debate.

Peer-to-peer networking, or P2P, has become the method of choice for sharing music and videos. While initially used to share pirated material, the system is now used by NBC, BBC and others to deliver legal video content and by Hollywood studios to distribute movies online. Experts estimate that peer-to-peer systems generate 50 to 80 percent of all Internet traffic. Most predict that number will keep going up.

Tensions remain, however, between bandwidth-hungry peer-to-peer users and the Internet service providers struggling to carry their traffic.

To ease this tension, researchers at the University of Washington and Yale University propose a neighborly approach to file swapping, sharing preferentially with nearby computers. This would allow peer-to-peer traffic to continue growing without clogging up the Internet's major arteries, and could provide a basis for the future of peer-to-peer systems. A paper on the new system, known as P4P, will be presented this week at the Association for Computing Machinery's Special Interest Group on Data Communication (SIGCOMM) conference in Seattle.

"Initial tests have shown that network load could be reduced by a factor of five or more without compromising network performance," said co-author Arvind Krishnamurthy, a UW research assistant professor of computer science and engineering. "At the same time, speeds are increased by about 20 percent."

"We think we have one of the most extensible, rigorous architectures for making these applications run more efficiently," said co-author Richard Yang, an associate professor of computer science at Yale.

The project has attracted interest from industry. A working group formed last year to explore P4P now includes more than 80 members, among them representatives from all the major U.S. Internet service providers and many companies that supply content.

"The project seems to have a momentum of its own," Krishnamurthy said. The name P4P was chosen, he said, to convey the idea that this is a next-generation P2P system.

In typical Web traffic, the end points are fixed. For example, information travels from a server at Amazon.com to a computer screen in a Seattle home and the Internet service provider chooses how to route traffic between those two fixed end points. But with peer-to-peer file-sharing, many choices exist for the data source because thousands of users are simultaneously swapping pieces of a larger file. Right now the choice of P2P source is random: A college student in a dorm room would be as likely to download a piece of a file from someone in Japan as from a classmate down the hall.

"We realized that P2P networks were not taking advantage of the flexibility that exists," Yang said.

For the networks considered in the field tests, researchers calculated that the average peer-to-peer data packet currently travels 1,000 miles and takes 5.5 metro-hops, which are connections through major metropolitan hubs. With the new system, data traveled 160 miles on average and, more importantly, made just 0.89 metro-hops, dramatically reducing traffic on the arteries between cities where bottlenecks are most likely to occur.

Tests also showed that right now only 6 percent of file-sharing is done locally. With the tweaking provided by P4P algorithms, local file sharing increased almost tenfold, to 58 percent.

The P4P system requires Internet service providers to supply a number that acts as a weighting factor for peer selection, so cooperation between the Internet service provider and the file-sharing host is necessary. But key to the system is that it does not force companies to disclose information about how they route Internet traffic.
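A minimal sketch of that idea follows, with made-up region names and cost values standing in for whatever a real provider would actually publish. The weights express preferences only; they reveal nothing about the provider's routing tables or internal topology.

    # Hypothetical per-region costs published by an ISP. Lower numbers mean
    # "cheaper" paths from the provider's point of view.
    ISP_COSTS = {
        ("seattle", "seattle"): 1,    # stays inside the metro area
        ("seattle", "chicago"): 10,   # crosses a long-haul link
        ("seattle", "tokyo"): 40,     # crosses a transoceanic link
    }

    def cost(a, b):
        """Look up the ISP-supplied weighting factor between two regions."""
        return ISP_COSTS.get((a, b)) or ISP_COSTS.get((b, a), 100)

    def rank_peers(candidates, my_region):
        """Order candidate peers so the file-sharing host tries the
        cheapest (usually nearest) ones first."""
        return sorted(candidates, key=lambda peer: cost(my_region, peer[1]))

    candidates = [("peer-C", "tokyo"), ("peer-F", "seattle"), ("peer-G", "chicago")]
    print(rank_peers(candidates, "seattle"))
    # [('peer-F', 'seattle'), ('peer-G', 'chicago'), ('peer-C', 'tokyo')]

The file-sharing host never learns why Seattle-to-Tokyo costs more than Seattle-to-Chicago; it only learns which candidates the provider would rather it use first.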

Source: University of Washington