Open Source Science
Revision as of 03:59, 18 November 2009
=Concept=

==Definition==
Beth Ritter-Guth:
"“Open Source Science” refers to the sharing of all data, including failed experiments, and is likened to “open source” code in computing." (http://bethritterguth.wikispaces.com/rpp)
==Typology==
Jamais Cascio:
"Ball covers three broad categories of mass-collaborative science. The first I would characterize as mass analysis, in which large numbers of people take a look at a set of data to try to find mistakes or hidden details. His best example of this is the NASA Clickworkers project, which used a large group of volunteers to look at maps of Mars in order to identify craters. It turned out that the collective crater identification ability of volunteers given a small amount of training was as good as the best experts in the field. Ball links this directly to the James Surowiecki book, The Wisdom of Crowds, which argues that the collective decision-making power of large groups can be surprisingly good. WorldChanging's Nicole Boyer has mentioned The Wisdom of Crowds in a couple of her essays, most notably this week's The Wisdom of Google's Experiment. The ability of groups to act collectively to analyze and generate information is one of the drivers of collaborative efforts such as Wikipedia -- any individual contributor won't be an expert on everything, but the collected knowledge of the mass of authors is unbeatable.
The second model of collaborative science he discusses is that of mass evaluation, in which large numbers of people have the opportunity to vet articles and arguments by researchers. This is a less quantitative and more subjective approach than collaborative analysis, but can still produce high-quality results. Ball cites Slashdot and Kuro5hin as examples of this approach, with the mass of participants on the sites evaluating the posts and/or comments, eventually pushing the best stuff up to the top. In the world of science, articles submitted to journals are regularly checked out by groups of reviewers, but the set of evaluators for any given article is usually fairly small. Ball cites the physics pre-print journal arXiv as an exemplar of a countervailing trend -- that of open evaluation. ArXiv allows anyone to contribute articles, and lets participants evaluate them -- a true "peer review."
The third model Ball discusses is perhaps the most controversial -- that of collaborative research, where research already in progress is opened up to allow labs anywhere in the world to contribute experiments. The deeply networked nature of modern laboratories, and the brief down-time that all labs have between projects, make this concept quite feasible. Moreover, such distributed-collaborative research spreads new ideas and discoveries even faster, ultimately accelerating the scientific process. Yale's Yochai Benkler, author of the well-known Coase's Penguin, or Linux and the Nature of the Firm, argues in a recent article in Science (pay access only) that such a method would be potentially revolutionary. He calls it "peer production;" we've called it "open source" science, and have been talking about the idea since we started WorldChanging." (http://www.worldchanging.com/archives/001090.html)
Source: "The [http://www.nature.com/news/2004/040816/full/040816-14.html Common Good]," an essay in Nature by consulting editor Philip Ball
=Discussion=

==Related terms==
Beth Ritter-Guth:
"Open Source Science, Open Data, Open Standards, and Open Access Science generally refer to the same principle; it indicates the publication of data for free use and distribution via the web using wikis, blogs, chemical docking programs, or other RSS technology.
Historically, this same data has only been available, in parts, through traditional peer review journals. ODOSOS is one acronym used to define "Open Data, Open Source, Open Standards" (Murray-Rust). However, there is legitimate discussion about what constitutes “Open Source” as compared to “Open Standards” and “Open Data.”
Open Access, for example, refers to the publication of "final" data or articles, and is not, inherently, about the sharing of collaborative data although there is a place for that to exist within OA (BOAI).
“Open Source Science” refers to the sharing of all data, including failed experiments, and is likened to “open source” code in computing. It includes both the process and the resulting data. As such, it communicates the "thinking behind the chemistry" - a practice not embraced by traditional methods (Bradley).
“Open Data” is similar to Open Source Science in the philosophy of sharing, but differs because it does not include the publication of failed data or experiments, and shares, instead, successful processes and data. In short, "open data" refers to data "which we can attach a CC [Creative Commons] or similar license" (Murray-Rust).
Finally, “Open Standards” refers to the sharing of the processes by which data is shared." (http://bethritterguth.wikispaces.com/rpp)
=More Information=

==Project(s)==
OpenSourceScience
URL = http://www.opensourcescience.net/index.php?title=Main_Page
"OpenSourceScience is a public space for managing controversial scientific experiments in a way that provides open access to all phases of the research. We provide a centralized resource for scientific collaboration, and help underwrite scientifically rigorous experiments that may contribute to an improved understanding of human consciousness.
The essence of the open source model is the rapid creation of innovative results within an inclusive and collaborative environment. At OpenSourceScience, we bring together the skeptical community, controversial science researchers, and interested laypeople to help design and facilitate high-quality scientific experiments. Our community encompasses multiple points of view joined together by a commitment to "follow the data". This spirit of cooperation promises to improve the long-term viability of our results."