Social Distribution Networks

From P2P Foundation


"Today there seems to be a new distribution model that is emerging — one that is based on people's ability to publicly syndicate and distribute messages — aka content — in an open manner. This has been a part of the internet since day one — yet now it's emerging in a different form — it's not pages, it's streams; it's social, and so it's syndication. The tools serve to produce, consume, amplify and filter the stream."


John Borthwick:

"First and foremost, what emerges out of this is a new metaphor — think streams vs. pages. This seems like an abstract difference, but I think it's very important. Metaphors help us shape and structure our perspective; they serve as a foundation for how we map and what patterns we observe in the world. In the initial design of the web, reading and writing (editing) were given equal consideration — yet for fifteen years the primary metaphor of the web has been pages and reading. The metaphors we used to circumscribe this possibility set were mostly drawn from books and architecture (pages, browser, sites etc.). Most of these metaphors were static and one way. The stream metaphor is fundamentally different. It's dynamic, it doesn't live very well within a page, and it's still very much evolving. Figuring out where the stream metaphor came from is hard — my sense is that it emerged out of RSS. RSS introduced us to the concept of web data as a stream — RSS itself became part of the delivery infrastructure, but the metaphor it introduced us to is becoming an important part of our everyday lives.

A stream. A real-time, flowing, dynamic stream of information — that we as users and participants can dip in and out of, and whether we participate in them or simply observe, we are a part of this flow. Stowe Boyd talks about this as the web as flow: "the first glimmers of a web that isn't about pages and browsers" (see this video interview, view section 6 –> 7.50 mins in). This world of flow, of streams, contains a very different possibility set to the world of pages. Among other things, it changes how we perceive needs. Overload isn't a problem anymore, since we have no choice but to acknowledge that we can't wade through all this information. This isn't an inbox we have to empty, or a page we have to get to the bottom of — it's a flow of data that we can dip into at will, but we can't attempt to gain an all-encompassing view of it. Dave Winer put it this way in a conversation over lunch about a year ago. He said: "Think about Twitter as a rope of information — at the outset you assume you can hold on to the rope. That you can read all the posts, handle all the replies and use Twitter as a communications tool, similar to IM — then at some point, as the number of people you follow and who follow you rises — your hands begin to burn. You realize you can't hold the rope; you need to just let go and observe the rope." Over at Facebook, Zuckerberg started by framing the flow of user data as a news feed — a direct reference to RSS — but more recently he shifted to talk about it as a stream: "… a continuous stream of information that delivers a deeper understanding for everyone participating in it. As this happens, people will no longer come to Facebook to consume a particular piece or type of content, but to consume and participate in the stream itself." I have to finish up this section on the stream metaphor with a quote from Steve Gillmor. He is talking about a new version of Friendfeed, but more generally he is talking about real-time streams.

The streams are open and distributed, and context is vital: The streams of data that constitute this now web are open, distributed, often appropriated, sometimes filtered, sometimes curated, but often raw. The streams make up a composite view of communications and media — one that is almost collage-like (see composite media and wholes vs. centers). To varying degrees the streams are open to search / navigation tools, and it's very often long, long tail stuff. Let me run out some data as an example. I pulled a day of data — all the links that were clicked on May 6th. The 50 most popular links generated only 4.4% (647,538) of the total number of clicks. The top 10 URLs were responsible for half (2%) of those 647,538 clicks. 50% of the total clicks (14m) went to links that received 48 clicks or less. A full 37% of the links that day received only 1 click. This is a very, very long and flat tail — it's more like a pancake. I see this as a very healthy data set that is emerging.
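The head-versus-tail split Borthwick describes can be sketched with a toy computation: given a list of per-link click counts, measure what share of clicks the most popular links capture, and what fraction of links sit in the one-click tail. The counts below are illustrative stand-ins, not his May 6th dataset.

```python
from collections import Counter

# Hypothetical per-link click counts (illustrative only, not Borthwick's data):
# a few popular links, then a long, flat tail of links with very few clicks each.
counts = [900, 400, 300] + [3] * 50 + [1] * 200
clicks = Counter({f"url{i}": c for i, c in enumerate(counts)})

total = sum(clicks.values())
top_10 = sum(c for _, c in clicks.most_common(10))       # the short head
single = sum(1 for c in clicks.values() if c == 1)       # the one-click tail

print(f"top-10 links' share of clicks: {top_10 / total:.1%}")
print(f"links with exactly 1 click:    {single / len(clicks):.1%}")
```

Even in this toy distribution the pancake shape shows up: a handful of links dominate the head, while most links receive a single click each.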

Weeding context out of this stream of data is vital. Today context is provided mostly via social interactions and gestures. People send out a message — with some context in the message itself — and then the network picks up from there. The message is often re-tweeted, favorited, liked or re-blogged; it's appropriated, usually with attribution to the creator or the source message — sometimes it's categorized with a tag of some form, and then curation occurs around that tag — and all this time it spins around, picking up velocity and more context as it swirls. Over time, tools will emerge to provide real context to these pile-ups. Semantic extraction services like Calais, Freebase, Zemanta, Glue, kynetx and Twine will offer windows of context into the stream — as will better trending and search tools."
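The tag-and-curate mechanism described above — a message carries a tag, and curation then accretes around that tag — can be illustrated with a minimal sketch. This is a toy hashtag grouper, not the API of any of the named services; the messages and tag syntax are assumptions for illustration.

```python
import re
from collections import defaultdict

def extract_tags(message: str) -> list:
    """Pull #hashtags out of a message; a stand-in for richer semantic extraction."""
    return [t.lower() for t in re.findall(r"#(\w+)", message)]

# Toy stream of messages (illustrative only).
stream = [
    "Real-time search is changing #streams",
    "RT: great post on the now web #streams #rss",
    "Pages vs. streams as web metaphors #metaphors",
]

# Curation accretes around each tag: group messages by the tags they carry.
by_tag = defaultdict(list)
for msg in stream:
    for tag in extract_tags(msg):
        by_tag[tag].append(msg)

print(by_tag["streams"])  # every message carrying the #streams tag
```

Each tag becomes a small pool of context: anything that mentions it is collected alongside everything else that does, which is the seed around which richer curation and trending tools can build.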

More Information

  1. The Stream, i.e. the real-time web