Category:Protocols and Algorithms

From P2P Foundation

New section, created July 2017: how protocols and algorithms increasingly govern our world, for good or ill, and how we can change this, for example through Design Justice.


Introduction

Anouk Ruhaak:

"Many of the new data governance models being pioneered today rely on some notion of collective governance and consent.


These include:

  1. Data Trusts (where trustees govern data rights on behalf of a group of beneficiaries),
  2. Data Commons (where data is governed as a commons),
  3. Data Cooperatives (where data is governed by the members of the coop), and
  4. Consent Champions (where individuals defer some of their data sharing decisions to a trusted institution)."

(https://foundation.mozilla.org/en/blog/when-one-affects-many-case-collective-consent/)
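The four models above share a mechanism: a data-sharing decision is made collectively, sometimes through delegation to a trustee. A minimal sketch of that mechanism, in Python, assuming an illustrative voting threshold and hypothetical member and trustee names (none of this comes from any real governance system):

```python
# Toy sketch of collective consent: a data-sharing request is approved only
# when enough affected parties (or their delegated trustees) agree.
# Names, the 75% threshold, and the delegation scheme are all illustrative.

from dataclasses import dataclass, field

@dataclass
class ConsentGroup:
    """A group whose members vote on a data-sharing request.

    A member may delegate their vote to a trustee, as in a data trust
    or a 'consent champion' arrangement."""
    members: set[str]
    delegations: dict[str, str] = field(default_factory=dict)  # member -> trustee
    threshold: float = 0.75  # fraction of approvals required

    def decide(self, votes: dict[str, bool]) -> bool:
        """Approve iff the fraction of effective approvals meets the threshold."""
        approvals = 0
        for member in self.members:
            voter = self.delegations.get(member, member)  # trustee votes if delegated
            if votes.get(voter, False):
                approvals += 1
        return approvals / len(self.members) >= self.threshold

group = ConsentGroup(
    members={"ada", "ben", "cam", "dee"},
    delegations={"cam": "trustee", "dee": "trustee"},  # two members defer to a trustee
)
# ada and ben vote directly; the trustee votes on behalf of cam and dee:
print(group.decide({"ada": True, "ben": False, "trustee": True}))  # 3/4 approve -> True
```

The point of the sketch is only that consent here is a property of the group's decision procedure, not of any single individual's choice.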


Quotes

"We need to ask then not only how algorithmic automation works today (mainly in terms of control and monetization, feeding the debt economy) but also what kind of time and energy it subsumes and how it might be made to work once taken up by different social and political assemblages—autonomous ones not subsumed by or subjected to the capitalist drive to accumulation and exploitation."

- Tiziana Terranova [1]


The Consilience Project on Axiological Design

"We propose that there are inevitable and unexpected impacts of technologies on both the human mind and society as a whole. For most of history, the process of tech design has either assumed that such second- and third-order effects do not occur or that tech innovation is net positive. This approach is called "technological orthodoxy", and it views technology as neutral with regard to human values. This must change if humanity is to survive in a world of ever-increasing technological presence and complexity. At this moment in history, it is essential that we adopt an approach to design that accounts for how tech affects the way people think and behave. This is axiological design. Axiology is the philosophical study of value, including both ethics and philosophy of mind. Axiological design is the application of principled judgment about value to the design of technology. "

- Consilience Project [2]


Technology is about designing subjects

"Today, technology allows us a new form of design: one that designs subjects, not objects; people, not things. By designing the information someone consumes, we can frame their opinions. By designing the interactions they have with digital devices, we can frame their thinking. This is known by not only tech giants but by military intelligence. And now, it is time that it becomes known by designers - especially those at the vanguard of dying paradigms. Our environments, our tools and even our ideas are extensions of ourselves. Our clothes extend our skin’s ability to keep our body warm, and our glasses improve our eye’s ability to see. This is simple enough. But what about language, or the internet? What does it do to us? How do they extend our humanity? More importantly: can we design that extension? In this century, algorithmically powered ontological design will radically reinvent what “human” means. It will not only be used to create “better” humans, but to redesign the very concepts of “better” itself, disrupting the values of the old world order and kickstarting a struggle for the new. Creatively terrifying designs are becoming possible."

- Daniel Fraga [3]


There is no such thing as objective data science

"The key thing as you say is separating the objective science of data collection from the subjective philosophy of data interpretation. We know pathos is needed here, because it is precisely what separates us from the machine. We can interpret data where the machine can only collect it. The question is if data collection can ever be purely objective. Unless we record absolutely everything, making a 1 to 1 reproduction of being, we are subjectively choosing aspects of being to collect, which must rely on something other than the science of the data collection itself. A machine cannot choose what data to collect. It must collect indiscriminately from its parameters. We choose what to collect from subjective notions of what we find worthy of study, for instance. Pathos again. Whenever we choose to collect data, there is also data we are choosing not to collect, thus mixing science with subjectivity from the get go. Is this a problem? No, not really. But we should be aware of it."

- Paradox Eleung [4]


Anjana Susarla on the New Algorithmic Divide

"Many people now trust platforms and algorithms more than their own governments and civic society. An October 2018 study suggested that people demonstrate “algorithm appreciation,” to the extent that they would rely on advice more when they think it is from an algorithm than from a human. In the past, technology experts have worried about a “digital divide” between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills. But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives? The savvier users are navigating away from devices and becoming aware about how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions."

(https://www.fastcompany.com/90336381/the-new-digital-divide-is-between-people-who-opt-out-of-algorithms-and-people-who-dont)


Privacy is a Public Good

"How do we manage consent when data shared by one affects many? Take the case of DNA data. Should the decision to share data that reveals sensitive information about your family members be solely up to you? Shouldn’t they get a say as well? If so, how do you ask for consent from unborn future family members? How do we decide on data sharing and collection when the externalities of those decisions extend beyond the individual? What if data about me, a thirty-something year old hipster, could be used to reveal patterns about other thirty-something year old hipsters? Patterns that could result in them being profiled by insurers or landlords in ways they never consented to. How do we account for their privacy? The fact that one person’s decision about data sharing can affect the privacy of many motivates Fairfield and Engel to argue that privacy is a public good: “Individuals are vulnerable merely because others have been careless with their data. As a result, privacy protection requires group coordination. Failure of coordination means a failure of privacy. In short, privacy is a public good.” As with any other public good, privacy suffers from a free rider problem. As observed by the authors, when the benefits of disclosing data outweigh the risks for you personally, you are likely to share that data - even when doing so presents a much larger risk to society as a whole."

- Anouk Ruhaak [5]
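The free-rider argument above can be made concrete with a toy payoff calculation: each individual weighs their private benefit of sharing against their private cost, while the externality their disclosure imposes on everyone else is invisible to them. The payoff numbers below are illustrative assumptions, not empirical estimates from Fairfield and Engel:

```python
# Minimal free-rider sketch of 'privacy as a public good'.
# All payoff values are made-up illustrations of the structure of the argument.

PRIVATE_BENEFIT = 5.0   # what I gain by sharing my data
PRIVATE_COST = 2.0      # my own exposure from sharing
EXTERNALITY = 1.0       # cost my sharing imposes on EACH other person

def individually_rational_to_share() -> bool:
    # The individual ignores the externality entirely.
    return PRIVATE_BENEFIT - PRIVATE_COST > 0

def socially_rational_to_share(population: int) -> bool:
    # Society counts the externality imposed on everyone else.
    total_cost = PRIVATE_COST + EXTERNALITY * (population - 1)
    return PRIVATE_BENEFIT - total_cost > 0

print(individually_rational_to_share())   # True: each person shares (5 - 2 > 0)
print(socially_rational_to_share(10))     # False: net harm of 5 - (2 + 9) = -6
```

Because the individually rational choice diverges from the socially rational one as the population grows, no amount of individual consent fixes the problem, which is exactly why the quote calls for group coordination.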


If There’s No AI, What is Being Promoted?

"What follows is a sketch, the foundation of a propaganda model, focused on what I’ll call the ‘AI Industrial Complex’. By the term AI Industrial Complex, (AIIC) I mean the combination of technological capacity (or the lack thereof) with marketing promotion, media hype and capitalist activity that seeks to diminish the value of human labor and talent. I use this definition to make a distinction between the work of researchers and practical technologists and the efforts of the ownership class to promote an idea: that machine cognition is now, or soon will be, superior to human capabilities. The relentless promotion of this idea should be considered a propaganda campaign. It’s my position there is no existing technology that can be called ‘artificial intelligence’ (how can we engineer a thing we haven’t yet decisively defined?) and that, at the most sophisticated levels of government and industry, the actually existing limitations of what is essentially pattern matching, empowered by (for now) abundant storage and computational power, are very well understood. The existence of university departments and corporate divisions dedicated to ‘AI’ does not mean AI exists; it’s evidence there’s powerful memetic value attached to using the term, which has been aspirational since it was coined by computer scientist John McCarthy in 1956. Once we filter for hype inspired by Silicon Valley hustling (the endless quest to attract investment capital and gullible customers) we are left with promotion intended to shape common perception about what’s possible with computer power."

- Dwayne Monroe [6]

Key Resources

Key Articles

  • Frank Pasquale, The Second Wave of Algorithmic Accountability, Law & Political Economy (Nov. 25, 2019), https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/ [9] (“While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.”).

Key Books

  • The Black Box Society: The Secret Algorithms That Control Money and Information. By Frank Pasquale. Harvard University Press (2015). Foundational work on the power of algorithms.

  • Recursivity and Contingency. By Yuk Hui. Rowman & Littlefield International (2019) [11]. Recommended by Bernard Stiegler: "Through a historical analysis of philosophy, computation and media, this book proposes a renewed relation between nature and technics." For details see: Towards a Renewed Relation Between Nature and Technics.

Pages in category "Protocols and Algorithms"

The following 181 pages are in this category, out of 181 total.