P2P Content Distribution and Hosting

Description

"The traditional method of distributing large files is to put them on a central server. The server and the client can then share the content across the network using agreed upon protocols (from HTTP, FTP to an infinite number of variations) when using IP connections the data can be sent over TCP or UDP connection or a mix of the two, this all depends mostly on the requirements on the service, machines, network and many security considerations.

The advantages of optimizing speed, availability, and consistency of service by locating content close to users are nothing new. Akamai Technologies and Limelight Networks, among other commercial solutions, have attempted to address this issue, and even Google has distributed its data centers geographically to improve the responsiveness of its services. This addresses the need for large-scale content and service distribution, but it is not a full decentralization of the control structure.

P2P evolved to solve a distinct problem: central servers do not scale well. Bandwidth, storage, and CPU constitute a point of failure that can easily bring a system to a halt, as does any centralization of services." (https://en.wikibooks.org/wiki/The_World_of_Peer-to-Peer_%28P2P%29/Print_version)
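
To make the contrast concrete, below is a minimal sketch of the two models the excerpt describes. It is illustrative only: the peer addresses, chunk size, and helper functions are all hypothetical, and a real swarm protocol such as BitTorrent adds piece hashing, peer discovery, and incentive mechanisms on top of this basic idea.

CHUNK_SIZE = 4  # bytes; unrealistically small so the example stays readable


def serve_central(content: bytes, byte_range: tuple) -> bytes:
    """Central-server model: one host answers every request, so its
    bandwidth, storage, and CPU are shared by every client."""
    start, end = byte_range
    return content[start:end]


def split_into_chunks(content: bytes) -> list:
    """P2P model: the file is cut into chunks that different peers can host."""
    return [content[i:i + CHUNK_SIZE] for i in range(0, len(content), CHUNK_SIZE)]


def fetch_from_swarm(peers: dict, n_chunks: int) -> bytes:
    """Ask whichever peer holds each chunk, spreading the load across hosts."""
    assembled = []
    for index in range(n_chunks):
        holder = next(addr for addr, held in peers.items() if index in held)
        assembled.append(peers[holder][index])
    return b"".join(assembled)


if __name__ == "__main__":
    original = b"large file payload"

    # Central model: every byte comes from the same machine.
    first_piece = serve_central(original, (0, CHUNK_SIZE))

    # P2P model: chunks are scattered over hypothetical peers, each of
    # which stores and serves only part of the file.
    chunks = split_into_chunks(original)
    swarm = {
        "peer-a:6881": {0: chunks[0], 2: chunks[2]},
        "peer-b:6881": {1: chunks[1], 3: chunks[3]},
        "peer-c:6881": {4: chunks[4]},
    }
    rebuilt = fetch_from_swarm(swarm, len(chunks))
    assert rebuilt == original and rebuilt.startswith(first_piece)
    print("reassembled", len(rebuilt), "bytes from", len(swarm), "peers")

The point of the sketch is the load profile: in the central model every request lands on one host, while in the swarm each peer contributes a slice of bandwidth and storage, which is what lets P2P scale where a single server becomes a point of failure.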