[p2p-research] Fwd: [Autonomo.us] building stuff
michelsub2004 at gmail.com
Thu Aug 13 12:23:27 CEST 2009
---------- Forwarded message ----------
From: Thomas Lord <lord at emf.net>
Date: Wed, Aug 12, 2009 at 2:55 AM
Subject: [Autonomo.us] building stuff
To: "Bradley M. Kuhn" <bkuhn at ebb.org>
Cc: "autonomo.us discussion mailing list" <discuss at lists.autonomo.us>
On Tue, 2009-08-11 at 08:29 -0400, Bradley M. Kuhn wrote:
> We need more federated, user-respecting, data-exporting free software
> code under AGPLv3 written and deployed.
Could we "talk shop" about that a bit?
Speaking very broadly, two possible strategies
come to mind. These could be dubbed:
The "Just Do It" Strategy
The "Build a Web Operating System" Strategy
Briefly, "Just Do It" means just hoping
lots of people go off and build these
applications directly.
"Build a Web Operating System" means to
build systems software *on top of which*
the desired applications can be more
easily developed - and then hope people
go off and build applications on that new
platform.
Examining those two strategies in more
detail:
* Option 1: The "Just Do It" Strategy
One strategy is to hope that lots of people go
off and start writing distributed, decentralized,
peering applications. Then hope that lots of
people run them, that users generally enjoy
control over their servers (or the option to
move to a server they control); that federated
networks are formed, and that all the pieces work together.
I am almost tempted to add to the list: "Also, ponies!"
because I don't think the "Just Do It" strategy
can work very well. It seems like wishful thinking
to me. "And candy!"
The problem is that AT BEST we wind up with something
like: federated wikis, federated micro-blogging,
federated blogging, federated photo-blogging,
federated web documents, federated social networking,
and so on.... all implemented separately.
Many such programs will have to solve similar
problems, such as managing user identity and
authentication, sharing files between peers (with
decent update semantics), messaging / IPC between
peers, and so forth.
If we just have everyone go off to implement those
applications then each project will separately have
to solve those same problems. The most likely
outcome is that each application will solve those
problems in its own ad hoc way.
Now consider the impact of that on a user who wants
to run their own server, setting up instances of
many of those applications. And consider the developer
who wants to build integration between them. It will
be a mess of sufficient proportions that few will
even want to try.
To administer a server, for example, a user would have
to run and tend to multiple messaging mechanisms,
multiple user identity systems, and so forth.
As an example of the integration problem, consider a
developer who wants to make a "back-up tool". The
back-up tool gathers a user's data from many applications,
can save it remotely, and can restore it if requested.
Yet if the technical interface for extracting and
restoring data is different in every application, writing
the back-up tool is a nightmare.
Finally, I think the "Just Do It" strategy will continue
to have huge problems even getting off the ground,
simply because the problems that reliably show up in
distributed, decentralized peering systems are HARD
to solve, even if you are only going for an ad hoc,
application-specific solution.
* Option 2: "Build a Web Operating System"
Consider the question: why do we use operating
systems rather than running programs directly on
bare iron? I know, it's a silly question but
bear with me.
An OS facilitates resource sharing. It
provides stable, higher-level interfaces to
low-level facilities. It creates standards by
which programs inter-operate.
For example, unix implements sharing of CPU,
memory, disk, and other devices.
Unix provides high level interfaces, such as files,
for low level facilities like disks. (If you regard
databases as systems software - as "part of the OS" -
they provide an alternative high level interface
to that same low-level storage.)
Unix creates standards for interoperability such
as pipes and "exec".
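To make the pipe point concrete, here is a minimal sketch (in Python, assuming a unix-like system with `printf` and `grep` on the PATH) of two unrelated programs cooperating purely through the pipe convention:

```python
import subprocess

# Two programs that know nothing about each other cooperate
# purely through the pipe convention: bytes out of one become
# bytes into the next. (Assumes a unix-like system with
# standard printf and grep binaries on PATH.)
producer = subprocess.Popen(
    ["printf", "wiki\nblog\nphotoblog\n"], stdout=subprocess.PIPE
)
consumer = subprocess.Popen(
    ["grep", "blog"], stdin=producer.stdout, stdout=subprocess.PIPE
)
producer.stdout.close()  # let the producer see SIGPIPE if grep exits early
output = consumer.communicate()[0].decode()
print(output)  # the lines containing "blog"
```

Neither program was written with the other in mind; the shared byte-stream interface is what makes the composition possible - which is exactly the kind of standard an OS contributes.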
In the "Just Do It" discussion two problems were
mentioned: administrative complexities for users
running their own servers and interoperability concerns
for developers. Consider the analogous questions for unix:
An administrator can generally manage access controls
for all applications at a single point by managing
unix user ids and groups.
A developer can write a back-up tool for (nearly) all
applications just by knowing about the file system.
A single back-up tool will save both my email and
my photos, for example.
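As a sketch of why the single abstraction pays off: a back-up tool that knows only about the file system can save any application's data. The application names and files below are hypothetical stand-ins, invented purely for illustration:

```python
import os
import tarfile
import tempfile

# A back-up tool that knows nothing about individual applications,
# only about the file system. The directories here are hypothetical
# stand-ins for "my email" and "my photos".
root = tempfile.mkdtemp()
for app, files in {"mail": ["inbox"], "photos": ["cat.jpg"]}.items():
    os.makedirs(os.path.join(root, app))
    for name in files:
        with open(os.path.join(root, app, name), "w") as f:
            f.write("data for " + name)

# One generic pass over the tree backs up every application at once;
# no per-application export interface is needed.
archive = os.path.join(root, "backup.tar")
with tarfile.open(archive, "w") as tar:
    for app in ("mail", "photos"):
        tar.add(os.path.join(root, app), arcname=app)

with tarfile.open(archive) as tar:
    saved = sorted(tar.getnames())
print(saved)
```

The tool never learns what a "mail" or "photo" application is; the shared file-system interface is the whole contract.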
I claim that the common needs of distributed, decentralized,
peering, data-liberating, user-controllable web services
closely resemble the facilities offered by a conventional
operating system.
For example, the storage needs of a wiki, a blog, a photoblog,
and a web document service can all be satisfied by a
single, shared, globally distributed, decentralized, peering
based file system with features for indexing, complex
queries on indexes, transactional updates, and good
synchronization semantics between peers.
Such a storage system only *needs* to be implemented
once, well. Conversely, given one such storage system,
the P2P wiki, blog, photoblog, and web document services
become vastly easier to write: a lot of the heavy lifting
has been done. The remaining problems look much more like
conventional web application development and much less
like systems programming challenges.
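A hypothetical sketch of what "implemented once, well" buys: if a shared storage layer offers a small keyed, queryable interface, a wiki and a blog reduce to thin wrappers over it. Every class and method name below is invented for illustration; nothing here is a real system:

```python
# Hypothetical sketch: a shared storage layer with put/get/query,
# and two "applications" that are just thin wrappers over it.
# All names are invented for illustration.
class SharedStore:
    def __init__(self):
        self._docs = {}

    def put(self, key, doc):
        self._docs[key] = doc

    def get(self, key):
        return self._docs[key]

    def query(self, **criteria):
        # stand-in for the "complex queries on indexes" feature
        return [d for d in self._docs.values()
                if all(d.get(k) == v for k, v in criteria.items())]


class Wiki:
    def __init__(self, store):
        self.store = store

    def save_page(self, title, text):
        self.store.put(("wiki", title),
                       {"app": "wiki", "title": title, "text": text})


class Blog:
    def __init__(self, store):
        self.store = store

    def post(self, title, body):
        self.store.put(("blog", title),
                       {"app": "blog", "title": title, "body": body})


store = SharedStore()
Wiki(store).save_page("Home", "welcome")
Blog(store).post("Hello", "first post")
# one query interface spans both applications
titles = sorted(d["title"] for d in store.query())
print(titles)
```

The applications carry no storage, indexing, or synchronization code of their own; all of that lives once, in the shared layer - which is the "heavy lifting has been done" claim in miniature.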
Similar observations can be made about user identities
and messaging/IPC in a P2P context.
So those are some reasons to think of the "first step"
as beginning to grow a Web Operating System but you
might wonder: is that really a "strategy"?
I think what makes it a strategy is a phenomenon
which I've given a made-up name "basiscraft" (inspired
by the concept of a "basis set" in linear algebra):
Basiscraft is the simple notion that building
an extensible framework based on a few, core, general
purpose facilities is the best way to build a
complex collection of related applications.
Historically, Apache is one example of great
basiscraft. CGI support, loadable modules,
and virtual hosts - combined with the other
layers of the LAMP stack and the facilities
they brought - led to the explosion of development
that gave us the modern web.
Another historic example is unix: Pipes, the
file system, the shell, and basic tools like
cat and grep led in the late '70s and early '80s
to an explosion of enthusiastic application development.
GNU Emacs is another fine example.
The GNU Emacs example is especially relevant
to the mission of autonomo.us because it
helps to demonstrate that a *techno-political*
aim can often be advanced by making a good extensible
framework in which all subsequent hacks are essentially
"direct action" towards the political cause.
If you want to inspire the creation of interoperable
applications in some domain, identifying a solid
"basis set" of core functionality and paying special
attention to how the pieces can be combined into
larger structures is an excellent strategy.
So I would like to inspire,
cajole, recruit, solicit support for, study,
design, and build a web operating system as
the most effective way to restore software
freedom on the web - as the program of direct action.
I've hinted above at what technology I think
comprises such a system: a file system with
special features, database support, messaging
support.... We can discuss the best mix of
features and the meta-problem of how to
identify and build-out that best set of features.
Meanwhile, as a strategic direction: "Build a
Web Operating System".
Work: http://en.wikipedia.org/wiki/Dhurakij_Pundit_University - Research:
http://www.dpu.ac.th/dpuic/info/Research.html - Think tank:
P2P Foundation: http://p2pfoundation.net - http://blog.p2pfoundation.net
Connect: http://p2pfoundation.ning.com; Discuss:
Updates: http://del.icio.us/mbauwens; http://friendfeed.com/mbauwens;