P2P Infrastructure - Discussions

Curated selections from the discussions on P2P Infrastructure at the Google Group: Building a Distributed Decentralized Internet (http://groups.google.com/group/building-a-distributed-decentralized-internet)


This documentation project is related to the ContactCon conference to be held on October 20, 2011 in NYC.


1

Venessa Miemis:

- As we identify and bring in various projects, it is crucial to list them according to the layer they address (you can see the layers of the OSI model here: http://en.wikipedia.org/wiki/OSI_model).

- Most of the projects I've seen are simply software that has some p2p-ish aspect. Slapping that onto a network with ISPs, a backbone, and an oligopoly of router equipment isn't going to solve either the provisioning/bandwidth or the governance issues. We need to support interest in both hardware and software solutions in order to really build a new p2p net.

- I think when people feel that the pieces of the distributed internet puzzle are resolved, they are mostly referring to a software layer, and we have a lot of variations on the software layer. Generally these depend on a functioning 'hardware' stack, usually up to the transport layer (e.g. the role of TCP in the OSI model). I agree that the hardware/software division is a good one. The point about the distinction being fuzzy is valid, although we can probably expect anything up to TCP to be a hardware implementation (http://en.wikipedia.org/wiki/TCP_Offload_Engine), or 'firmware'.

- Distributed hardware is a big issue, and until it is solved no amount of cleverness in P2P software protects against the Egypt attack at the ISP (or cutting the big wire, or shooting down the satellite, etc.).

- A truly general-purpose P2P implementation - as opposed to the specific use cases currently addressed by a lot of existing P2P software (file sharing, social interaction, processor pooling and memory sharing) - is a very hard problem. This is where the redundancy and coherence algorithms really get tricky, especially if the network has high latency and low reliability, which would seem to be a necessary design constraint (a rough sketch of this trade-off appears after this list). We're really talking about a distributed filesystem and probably an OS. The current extent of our technology seems to suggest we can really only do that usefully in a data centre, and somewhat less effectively between data centres.

- Problem: A user enters a URL and brings up a web page that behaves in an application-like fashion, communicating frequently with an OLTP datastore to send and receive significant amounts of data. Potential first step to a solution: assume that the OLTP datastore and its associated web host are *not* distributed - i.e. they exist in a big data centre (as they do now). What *is* distributed is the hardware route to that data centre. Trying to implement the whole hog here is just not yet within our grasp from bandwidth, connectivity, latency and other perspectives. Implementing the simpler solution protects against the Egypt attack (i.e. an Egyptian can tweet). That said, the simpler solution is still staggeringly difficult, because our problem is really the hardware one, not the software one.
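
The redundancy and coherence tension mentioned above for a general-purpose P2P store can be made concrete with a toy quorum scheme. The sketch below is an illustrative assumption rather than anything proposed in the discussion: it simulates N unreliable replicas with a write quorum W and a read quorum R, where choosing R + W > N keeps reads coherent with the latest acknowledged write, at the price of needing more replicas to be reachable.

```python
import random

# Toy illustration: N replicas, write quorum W, read quorum R.
# With R + W > N, any read quorum overlaps any write quorum, so the
# latest acknowledged write is always visible. All figures below are
# assumed for the sketch, not taken from the discussion.

N, W, R = 5, 3, 3          # replica count and quorum sizes (assumed)
NODE_UP_PROBABILITY = 0.8  # models imperfect per-node reliability (assumed)

class Replica:
    def __init__(self):
        self.version = 0
        self.value = None

    def available(self):
        return random.random() < NODE_UP_PROBABILITY

replicas = [Replica() for _ in range(N)]

def write(value, version):
    """Succeeds only if at least W replicas acknowledge the write."""
    acks = 0
    for r in replicas:
        if r.available():
            r.version, r.value = version, value
            acks += 1
    return acks >= W

def read():
    """Succeeds only if at least R replicas respond; returns the newest copy."""
    responses = [r for r in replicas if r.available()]
    if len(responses) < R:
        return None                      # not enough replicas reachable
    newest = max(responses, key=lambda r: r.version)
    return newest.value

if write("hello from a flaky network", version=1):
    print(read())  # the written value, or None if the read quorum fails
```

Raising W and R tightens coherence but makes the system more sensitive to node failures and latency, which is exactly why the post above expects this to work usefully inside a data centre and much less effectively across a high-latency, low-reliability network.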


What is really possible, given a society dominated by hostile forces?

Problem: Network topologies robust to random failures are NOT robust to targeted attacks. Network topologies robust to targeted attacks are NOT robust to random failures.

Potential Solution: Move beyond a singular topology to a plural one.

In other words, imagine a dual network with TWO routing methods and TWO topologies, where requests are sent in parallel along both networks, so that if nodes are down in one network, data is routed via the other.

TWO is a minimum, though.
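
A minimal sketch of this plural-topology idea follows, under the assumption that two independent transports exist; both are simulated here with made-up latencies and failure rates, and none of the names correspond to a real library or protocol.

```python
import concurrent.futures
import random
import time

# Sketch of the dual-network scheme described above. The two "networks" are
# simulated with assumed latency and failure figures; nothing here is a real
# transport or API.

def fetch_via_network(name, resource, failure_rate, latency):
    """Simulates one routing method/topology answering a request."""
    time.sleep(latency)
    if random.random() < failure_rate:
        raise ConnectionError(f"{name} is unreachable")
    return f"{resource} delivered via {name}"

def plural_fetch(resource):
    """Issue the same request over both networks in parallel and return the
    first successful answer, so nodes being down on one topology are masked
    by the other. If both fail, the residual outage surfaces as an error."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(fetch_via_network, "network-A", resource, 0.3, 0.2),
            pool.submit(fetch_via_network, "network-B", resource, 0.3, 0.5),
        ]
        errors = []
        for future in concurrent.futures.as_completed(futures):
            try:
                return future.result()   # first network to answer wins
            except ConnectionError as exc:
                errors.append(exc)       # that path is down; wait for the other
    raise ConnectionError(f"both networks failed: {errors}")

print(plural_fetch("http://example.org/some-page"))
```

The design choice is simply 'first answer wins': the client pays for duplicate traffic, but an outage on one topology is invisible to the user as long as the other still routes. If both simulated paths happen to fail, the call raises, which is the residual risk behind the remark that TWO is only a minimum.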

The pluralization of networks goes back to the pluralization of politics taken up by Connolly in his work The Ethos of Pluralization ( http://www.upress.umn.edu/Books/C/connolly_ethos.html )

It is another example of why code people should know theory, and theory people should learn code. Cross-domain knowledges.

Scale-free networks are robust to random failures but vulnerable to targeted attacks, because you can simply take out the hubs first and isolate most parts of the network from the others.

Potential Solution: A mesh network using a single routing method/topology. This architecture does not preclude multiple routes (in fact, in a network with a lot of nodes there should be multiple routes). It would seem to be robust to random failures, and I'm not sure why it would be any less robust to targeted attack than our current internet - in fact it is probably more robust (from a failure/attack perspective).
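
The competing claims above (scale-free fragility to hub removal versus mesh indifference to it) can be probed with a small simulation. The sketch below assumes the networkx library and arbitrary parameters, neither of which comes from the original discussion: it compares a Barabási-Albert scale-free graph with a 2D grid mesh under random node failures versus a targeted attack on the highest-degree nodes, and reports the fraction of nodes left in the largest surviving component.

```python
import random
import networkx as nx   # assumed dependency, not mentioned in the discussion

# Compare a scale-free graph and a grid mesh under random failures versus a
# hub-targeted attack. All sizes and removal fractions are arbitrary choices.

def largest_component_fraction(graph):
    if graph.number_of_nodes() == 0:
        return 0.0
    largest = max(nx.connected_components(graph), key=len)
    return len(largest) / graph.number_of_nodes()

def remove_nodes(graph, fraction, targeted):
    g = graph.copy()
    k = int(fraction * g.number_of_nodes())
    if targeted:
        # attack: remove the highest-degree nodes (the hubs) first
        victims = sorted(g.degree, key=lambda item: item[1], reverse=True)[:k]
        victims = [node for node, _ in victims]
    else:
        # random failure: remove k nodes chosen uniformly at random
        victims = random.sample(list(g.nodes), k)
    g.remove_nodes_from(victims)
    return largest_component_fraction(g)

scale_free = nx.barabasi_albert_graph(n=2500, m=2)
mesh = nx.grid_2d_graph(50, 50)   # a 50x50 mesh, also 2500 nodes

for name, g in [("scale-free", scale_free), ("mesh", mesh)]:
    print(name,
          "random failures:", round(remove_nodes(g, 0.10, targeted=False), 2),
          "targeted attack:", round(remove_nodes(g, 0.10, targeted=True), 2))
```

The result is directional rather than exact: with these parameters the grid is largely indifferent to which nodes are removed, while the scale-free graph loses a much larger share of its largest component when the hubs go first, which is the asymmetry the Problem statement above describes.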

---

I think there is a tendency for stagnation on the 'next steps' angle because we aren't clear enough on the problem yet, and nailing a solution in this situation is more or less impossible.

- I'm not sure the goals/requirements are clearly defined. For example - 'an internet that can't be shut down' and 'distributed internet' aren't synonymous.

- 'An internet that can't be shut down' isn't actually possible in the strictest sense; we're talking about varying scales of catastrophe/attack. We need to define what we expect to survive and what we don't, i.e. what is the worst-case scenario we should be trying to design for. E.g. do we expect both an 'Egypt shutdown' and a 'Libya shutdown' to be impossible?

- What does 'shut down' mean? Do we mean everyone can always access the internet if they want to? E.g. in the Egypt scenario, the internet was still operational, but Egypt was unable to access it.

- I think we do want a distributed internet (hardware and software), as it will be more resilient, but that can take many forms - indeed what we have now is distributed to a large degree. What do we mean by distributed? An increase in resilience may deliver a reduction in performance and micro-level reliability.

- We don't really identify the characteristics of the new internet - it seems implicit that we expect it will operate much the same as the current internet in terms of what we can do with it, but if this is actually a design constraint we have, then our options are significantly reduced. e.g. performance and reliability in our new network will be exceedingly difficult to match to our current network.

My feeling is that when we define those things clearly, we will discover we don't have the technology to achieve them. As we scale back our expectations to something we can achieve, we may discover either that the wins we get aren't worth it yet - the internet we have is our best option for the moment - or that the direction of our technological advancement is naturally heading toward the architecture we describe, i.e. we are already on the best path.

Regardless of the result of the analysis (achievable or not), defining the problem and goals clearly will make it much easier to work out the next steps - even if that next step is to 'eagerly await the future, keep oiling its cogs, and make sure nefarious interests don't hinder it'.