[p2p-research] Workshop on Media Ecologies: Q&A: Sam Rose

Michel Bauwens michelsub2004 at gmail.com
Tue Aug 25 05:13:13 CEST 2009

Thanks for the wagn explanation, Sam. And Nathan, unfortunately there's no
time to play with another wiki engine for the moment; I have to remain
focused on my p2p-f work, which is already a challenge given my full-time
job.

On Tue, Aug 25, 2009 at 9:21 AM, Nathan Cravens <knuggy at gmail.com> wrote:

>  Hi Michel,
> OpenKollab's content is presently managed with wagn.
>  http://wiki.openkollab.com/Home
> I hope you might set up an account there and play with it.
> We'd love to have you, particularly in terms of the Information Resources
> section.
> That is, if you have time...
> Hi Sam,
>> This is one example of what we are talking about.
> Thanks for that example.
>> >> We have code that works with Wagn now, that will allow for this
>> >> abstraction, and we'll have several demos for Nov. We have even
>> >> extended this to microcontrollers like Arduino, and will likely
>> >> provide a demo of this, too.
>> >
>> > FLOWS in action? Even with implementation of this protocol translator,
>> > there will remain a great deal of work to do, but this will save a great
>> > deal of work toward higher-level functions and greater self-sufficiency.
>> The other thing this can do is make a new type of web service that was
>> maybe built to solve one problem, but is then available to multiple
>> other users, communities, etc.
> Right, we might say that the paper folding is stored so other folk or bots
> might fold the paper in that same configuration later on.
> You may not have paper and want to make it yourself? Okay, there's your
> representation, here's where the materials are located. No bots in the area?
> No problem: all of this is within walking distance, and other folk may be
> interested in your project and help place those needed items closer to your
> location.
>> > We're at a conference taking place inside an auditorium.
>> > Everyone can see projected how you are accessing a graphical interface.
>> > You go to a search page and type the name of the event you are attending.
>> > A page for the event is accessed.
>> > You find a 'dynamic' blueprint or map of the very room you are
>> > presenting from.
>> > From this map you are able to see what objects you can access to
>> > manipulate in some way.
>> > A mobile robotic arm is set upon a desk. A sheet of paper is nearby on
>> > the desk.
>> > From the map you are able to locate the sheet of paper from your screen,
>> > select it, then use the cursor to fold it into a particular shape. This
>> > is a new design, never before virtually shaped in this way.
>> > After confirming your entry, the robot arm, with that information, folds
>> > the paper to your specifications.
>> > Demonstration complete.
>> I don't know if we'll be able to demo something like that in time for
>> Nov, but we can definitely show how some of it is currently possible
>> (assuming you already have the folding robot, the data model of the
>> room, and the software that enables toolpaths for folding :-) )
> That would be fantastic if you could... Then all we need to tell folk to
> promote this thing is something like: "Watch a robot fold some paper! How is
> this p2p, you ask?! Okay! Present how to fold this paper to your peer group
> in the best way you know of; then the other folk change the folding in a way
> everyone can agree on, if everyone agreeing on something is necessary. Then
> this paper-fold approach is left in a repository, tapped by FLOWS whenever
> another paper folder starts tinkering with paper in a similar fashion." In
> this story, what has been done before will be presented, unless the user
> wants to work in ignorance so as not to stifle that creative drive.
> The folding robot in my fantasy world is a general bot, a bipedal bot with
> two arms and legs---the kind most folk think of when thinking: robot. We
> just need to bring in some robotics folks who have already spent the dough
> on the equipment to apply our integrated platform. These folks speak our
> language as well, with interest in agreeing on standards and in talk of
> 'robots collaborating with other robots' at one conference, per an article
> Ryan sent to this list.
> http://www.newscientist.com/article/mg20327206.300-robots-to-get-their-own-operating-system.html?DCMP=OTC-rss&nsref=online-news
> I've collected that and two other sources here:
> 'Search and Retrieval: 'Hard''
> http://wiki.openkollab.com/wagn/Platform
> I assume the bot can then scan the room and identify the paper in it,
> using something like this: LabelMe <http://labelme.csail.mit.edu/>
> I learned of this image recognition app, which shows other uses, from:
> Precision Agriculture: Sustainable Farming In The Age Of Robotics
> http://www.csail.mit.edu/csailspotlights/feature2
>> >
>> > When things of this sort become common and seamless, it is imperative
>> > that we make theft obsolete! That cannot be stressed enough! It is for
>> > reasons such as this that I have described open manufacturing in a
>> > positive rather than a neutral manner: positive in the sense that it
>> > must also render everything 'gratis', free to have at no monetary cost,
>> > but without sacrificing lives, our own and the ecologies that sustain us.
>> Well, it all boils down to who you allow to run your virtual folding
>> tool, of course.
>> > After Smári's jaw-dropping presentation of Industry 2.0, I hope
>> > achieving such an aim does not seem so daunting.
> If Smári has presented this--OX?--that is not the one I'm referring to; I
> mean the revised presentation of the work he and the fab lab peeps are
> hacking on, which I suspect will cover all the essential bases for
> comfortable life support with little or no money involved.
>> The only daunting thing about it is the software that creates the 3D
>> representation of the folded object. I did not see Smári's Industry
>> 2.0 presentation, but maybe he presented some existing technology
>> (software and robotics) that does this already?
> If the only difficult task is to render a 3D object digitally, we know that
> is done already. I hear CAD standards are not yet compatible with CAM and so
> forth. (FLOWS solves this, right? ;p) So perhaps you mean rendering a 3D
> object such that people or robots can follow instructions to create the
> rendered object successfully, after easily retrieving the materials to make
> it via the web?
>> Also, dynamic modeling of a room would presumably be mocked up, since
>> no technology that I know of exists that dynamically models rooms on
>> the fly,
> See the work of Hans Moravec. There's video on his page of a bot
> successfully zipping through a room and rendering the entire space. Moravec
> has worked in machine vision for years... It's much like Photosynth (closed
> source and therefore evil), but better, because you can send a bot to
> photograph whatever images are missing from the model. Revisions of the
> visual model are only needed for objects known to move around, change hands,
> or diminish with use. . . .
>> and because the robotics would only need a connection to the
>> internet, and a piece of paper placed in the robot's working space to
>> accomplish the remote folding, assuming the software that lets you
>> build the "folded paper" model can send data across the wire to the
>> robot.
> Right. Dat bot can afford wifi cuz it's free.
>> This demo would be an awesome display for sure. If all of the above
>> software exists, we could use FLOWS to make each component part of the
>> software talk to the others and pass data to where it needs to go in a
>> *standard* way.
> Let's do it Sam!! :)
>> I would think about doing what you describe above with COLLADA
>> https://collada.org/mediawiki/index.php/COLLADA_-_Digital_Asset_and_FX_Exchange_Schema
>> and maybe something like
>> http://www.sirikata.com/wiki/index.php?title=Main_Page  plus maybe
>> heeksCAD/CAM and xyz robot that could accomplish folding
> How 'bout EMC? Fenn forwarded this info to the OM list.
> http://wiki.openkollab.com/wagn/Enhanced_Machine_Controller_EMC
>> Although, it seems like it would be easier to just use telesurgery
>> robots and cameras if you want to do remote folding! I guess it
>> depends on what the real goal is here.
> The goal is to do anything remotely!
>> A more practical, immediately implementable example, IMO, of what
>> FLOWS and open standards can do with regard to flexible fabrication
>> would be to allow people to store and serve multiple parts of a
>> "package" of CAD files, bill of materials, parametrics data, and any
>> other related data about a technology, or the technologies needed to
>> make that technology, in a distributed way (like on multiple servers).
> I hope Zach (cc'd) might have something to say on this, given his work in
> CAD repositories (Thingiverse) and distributed manufacturing. You're coming
> to the workshop, right? ;)
>> These packages could still be maintained by a specific project or
>> person. That project or person would really do the job of "vetting"
>> the contents of that package so that other people reasonably know they
>> can trust it.
> It would be best if this aspect were automated, via rates of selection, 'do
> you like?' input requests, or some other automated selection criteria.
>> But, the same files could live in many, many packages,
>> each maintained by a specific maintainer. FLOWS gives a standard way
>> of letting a system know that your files or data are part of a package
>> (or to submit for inclusion in a package). Now, you can park your
>> design files *anywhere*, yet they can still be part of a package.
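[Editorial note: to make Sam's "package" idea concrete, here is a minimal sketch in Python. Every field name, URL, and function here is hypothetical; nothing in this thread defines an actual FLOWS schema, so this only illustrates the shape of the idea: one manifest, parts served from many servers, each package owned by a maintainer who vets it.]

```python
# Hypothetical sketch of a distributed fabrication "package": CAD files,
# bill of materials, and parametrics data can each live on different
# servers, while one manifest ties them together. All names made up.
from urllib.parse import urlparse

package = {
    "name": "paper-folding-demo",
    "maintainer": "example-project",  # the person/project vetting the package
    "parts": [
        {"kind": "cad",         "url": "http://serverA.example/fold.dae"},
        {"kind": "bom",         "url": "http://serverB.example/materials.csv"},
        {"kind": "parametrics", "url": "http://serverC.example/params.json"},
    ],
}

def hosts(pkg):
    """List the distinct servers a package's parts are spread across."""
    return sorted({urlparse(p["url"]).netloc for p in pkg["parts"]})

print(hosts(package))
# ['serverA.example', 'serverB.example', 'serverC.example']
```

The same file URL could appear in many such manifests at once, each with its own maintainer, without the file ever moving: that is the "park your design files anywhere" property.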
> Yes. Redundancy where it counts, just in case a server or two crashes--the
> mesh gots you covered.
>> Another practical immediate example is that you could export certain
>> contents of those files to be repackaged as a PDF, and even create a
>> print on demand book from that collection of files. You could actually
>> export a collection of files in any way that is possible through
>> existing open source libraries. A FLOWS-based component could also
>> send out all kinds of metadata about the packages: who is accessing
>> them, multiple materials sources for what the package is made of, etc.
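[Editorial note: the export-and-metadata idea above can be sketched in the same hypothetical terms. The package structure, field names, and access log below are all made up for illustration; a real component would define its own schema and could target PDF or print-on-demand via existing open source libraries.]

```python
# Illustrative sketch: re-exporting a package's contents in another format
# and emitting simple usage metadata. All structures are hypothetical.
import json
from collections import Counter

package = {
    "name": "paper-folding-demo",
    "parts": [
        {"kind": "cad", "url": "http://serverA.example/fold.dae"},
        {"kind": "bom", "url": "http://serverB.example/materials.csv"},
    ],
}

# Who fetched the package (invented data, standing in for real access logs).
access_log = ["alice", "bob", "alice"]

def export_listing(pkg):
    """Render the package as a plain-text listing, e.g. as input to a
    print-on-demand appendix; any exportable format would work the same way."""
    lines = [f"Package: {pkg['name']}"]
    lines += [f"  {p['kind']:12s} {p['url']}" for p in pkg["parts"]]
    return "\n".join(lines)

def access_metadata(log):
    """Summarize who is accessing the package and how often."""
    return dict(Counter(log))

print(export_listing(package))
print(json.dumps(access_metadata(access_log)))  # {"alice": 2, "bob": 1}
```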
> Sounds as if that would be a practical implement you could charge these
> proprietarians for royally, so as to put the reserve notes to better use:
> meaning, less. ;)
> Nathan
> _______________________________________________
> p2presearch mailing list
> p2presearch at listcultures.org
> http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org

Work: http://en.wikipedia.org/wiki/Dhurakij_Pundit_University - Research:
http://www.dpu.ac.th/dpuic/info/Research.html - Think tank:
P2P Foundation: http://p2pfoundation.net  - http://blog.p2pfoundation.net

Connect: http://p2pfoundation.ning.com; Discuss:

Updates: http://del.icio.us/mbauwens; http://friendfeed.com/mbauwens;
http://twitter.com/mbauwens; http://www.facebook.com/mbauwens
