Category:Protocols and Algorithms


New section, created July 2017: how protocols and algorithms increasingly govern our world, for good or ill, and how we can change this, for example through Design Justice.


Contextual Quote

"AI and Blockchain-based agent-centric coordination are two fundamentally opposed/complementary processes:

"AI is currently about regurgitating synthesis. Only people and accident are able to undermine axioms which can change the trajectory of broken systems of thought. So, evolutionarily speaking, AI is backward-looking, and human social coordination is a living evolutionary process. Agent-centric coordination enables further evolution through giant networks of human agents acting as sensors and actuators in a living decision-making network. AI is like the brown mud plagiarized synthesis of anything interesting."


Introduction

Anouk Ruhaak:

"Many of the new data governance models being pioneered today rely on some notion of collective governance and consent.


These include

  1. Data Trust (where trustees govern data rights on behalf of a group of beneficiaries),
  2. Data Commons (where data is governed as a commons),
  3. Data Cooperatives (where data is governed by the members of the coop), and
  4. Consent Champions (where individuals defer some of their data sharing decisions to a trusted institution)."

(https://foundation.mozilla.org/en/blog/when-one-affects-many-case-collective-consent/)
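
As a minimal illustration of the last two patterns above, the sketch below models a "consent champion" that applies a data-sharing policy on a member's behalf, and a collective-consent check requiring sign-off from everyone a disclosure affects. All names, the unanimity threshold, and the profiling scenario are hypothetical, invented for this example rather than taken from Ruhaak's text.

  # Hypothetical sketch of a "consent champion" and collective consent.
  from dataclasses import dataclass
  from typing import Callable

  @dataclass
  class DataRequest:
      requester: str
      purpose: str
      affected: list[str]   # everyone whose privacy the disclosure touches

  @dataclass
  class ConsentChampion:
      """A trusted institution deciding data-sharing on a member's behalf."""
      name: str
      policy: Callable[[DataRequest], bool]

      def decide(self, request: DataRequest) -> bool:
          return self.policy(request)

  def collective_consent(request, votes, threshold=1.0):
      """Approve only if enough affected parties agree; threshold=1.0
      means unanimity (e.g. DNA data implicating a whole family)."""
      approvals = sum(votes.get(person, False) for person in request.affected)
      return approvals / len(request.affected) >= threshold

  request = DataRequest(requester="insurer-x", purpose="risk profiling",
                        affected=["alice", "bob", "carol"])

  # A champion whose policy refuses any profiling purpose.
  champion = ConsentChampion("privacy-coop",
                             policy=lambda r: "profiling" not in r.purpose)
  print(champion.decide(request))                      # False

  votes = {"alice": True, "bob": True, "carol": False}
  print(collective_consent(request, votes))            # False: not unanimous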


Quotes

"We need to ask then not only how algorithmic automation works today (mainly in terms of control and monetization, feeding the debt economy) but also what kind of time and energy it subsumes and how it might be made to work once taken up by different social and political assemblages—autonomous ones not subsumed by or subjected to the capitalist drive to accumulation and exploitation."

- Tiziana Terranova [1]


"In stark contrast to the early days of internet development, when many stakeholders had a say, discussions about AI and our future are being shaped by leaders who seem to be striving for absolute ideological power. The result is “Authoritarian Intelligence.” The hubris and determination of tech leaders to control society is threatening our individual, societal, and business autonomy. What is happening is not just a battle for market control. A small number of tech titans are busy designing our collective future, presenting their societal vision, and specific beliefs about our humanity, as theonly possible path. Hiding behind an illusion of natural market forces, they are harnessing their wealth and influence to shape not just productization and implementation of AI technology, but also the research."

- Judy Estrin [2]


John Robb on the War over the Means of Reality Production

"Over the last seven years, with the advent of social networking, there’s been an online civil war over who controls our information flow and how they get to do it. It’s been a messy, confusing fight that has touched on the following:

  • What type of information is allowed amplification, and what should be de-amplified?
  • What is fact or fiction? Can truth be hate speech? Is fiction harmful (conspiracy theories or unapproved theories)? Should false information and ideas be censored?
  • What is disinformation (harmful fiction or spun facts designed to mislead), and how can it be suppressed (de-amplification, soft bans, hard bans, blacklists)?

Until late last year, it looked like the conflict was over, and we were on a worrisome trajectory toward disaster:

  • An open-source alliance of global corporations, online political networks (networked tribes held together by their opposition to some great evil), and struggling institutions (from academia to government) had won that fight.

This alliance had established a censorship and control system growing ever more constrictive by the day (that could, given time, rival the networked authoritarianism we have seen in China). It also used the system to control political outcomes in the US and beyond.

Worse, the system showed signs of non-linear behavior — we saw this when the networked monoculture created by this system rapidly escalated Russia’s invasion of Ukraine into a sprawling global war between the West and Russia (China, etc.).

Elon’s acquisition of Twitter and use of information warfare (the Twitter files) paused this trajectory. However, it won’t last long. One reason is that nothing was done to fundamentally change the nature of our information system (digital rights and ownership); another is that a new and much more disruptive wave of technological change is on the way."

- John Robb [3]


Steven D. Hales on AI creating Philosophical Zombies

"The deep transformation that artificial intelligence will bring to the human spirit: In zombie movies, the zombies are themselves brainless and to survive must feast on the brains of others. Philosophical zombies are creatures that can do the same things we can, but lack the spark of consciousness. They may write books, compose music, play games, prove novel theorems, and paint canvases, but inside they are empty and dark. From the outside they seem to live, laugh, and love, yet they wholly lack subjective experience of the world or of themselves. Philosophers have wondered whether their zombies are even possible, or if gaining the rudimentary tools of cognition must eventually build a tower topped in consciousness. If zombies are possible, then why are we conscious and not the zombies, given that they can do everything we can? Why do we have something extra? The risk now is that we are tessellating the world with zombies of both kinds: AIs that are philosophical zombies, and human beings who have wholly outsourced original thinking and creativity to those AIs, and must feast on their brainchildren to supplant what we have given up. Why bother to go through the effort of writing, painting, composing, learning languages, or really much of anything when an AI can just do it for us faster and better? We will just eat their brains."

- Steven D. Hales [4]


Towards Limbic Anti-Capitalism

"Elsewhere I looked at the uncounted costs of that increasingly frank application of limbic capitalism to the intimate domain of sex relations, whether in porn evangelism, transactional sex understood as ‘empowering’, or the monetisation of male desire and loneliness via OnlyFans. And I delved into the increasingly surreal and unsettling effects of the internet’s increasingly evident function as accelerant of all limbic, desire-based forms of commerce, and hazarded a few guesses as to where we’re heading politically if we don’t change course. Lola Bunny has paywalled herself; kids’ YouTube is the stuff of nightmares; if we’re not careful our future looks like a kind of fully automated luxury gnosticism where we’ll end up ruled by robots, because we no longer believe ourselves capable of self-government.

I’ve written much else besides over the last year or so, but this is the terrain I’ll be seeking to weave into a single argument in the book. I can assure you it’s not all doom: I have plenty to say not just on how all this is terrible, but in the book I’ll also share my thoughts on how we might try to reconstruct livable sex relations in the rubble of absolute freedom - and how this more broadly serves an urgently needed practice of limbic anti-capitalism."

- Mary Harrington [5]


The Consilience Project on Axiological Design

"We propose that there are inevitable and unexpected impacts of technologies on both the human mind and society as a whole. For most of history, the process of tech design has either assumed that such second- and third-order effects do not occur or that tech innovation is net positive. This approach is called "technological orthodoxy", and it views technology as neutral with regard to human values. This must change if humanity is to survive in a world of ever-increasing technological presence and complexity. At this moment in history, it is essential that we adopt an approach to design that accounts for how tech affects the way people think and behave. This is axiological design. Axiology is the philosophical study of value, including both ethics and philosophy of mind. Axiological design is the application of principled judgment about value to the design of technology. "

- Consilience Project [6]


Technology is about designing subjects

"Today, technology allows us a new form of design: one that designs subjects, not objects; people, not things. By designing the information someone consumes, we can frame their opinions. By designing the interactions they have with digital devices, we can frame their thinking. This is known by not only tech giants but by military intelligence. And now, it is time that it becomes known by designers - especially those at the vanguard of dying paradigms. Our environments, our tools and even our ideas are extensions of ourselves. Our clothes extend our skin’s ability to keep our body warm, and our glasses improve our eye’s ability to see. This is simple enough. But what about language, or the internet? What does it do to us? How do they extend our humanity? More importantly: can we design that extension? In this century, algorithmically powered ontological design will radically reinvent what “human” means. It will not only be used to create “better” humans, but to redesign the very concepts of “better” itself, disrupting the values of the old world order and kickstarting a struggle for the new. Creatively terrifying designs are becoming possible."

- Daniel Fraga [7]


There is no such thing as objective data science

"The key thing as you say is separating the objective science of data collection, from the subjective philosophy of data interpretation. We know pathos is needed here, because it is precisely what separates us from the machine. We can interpret data where the machine can only collect it. The question is if data collection can ever be purely objective. Unless we record absolutely everything, making a 1 to 1 reproduction of being, we are subjectively choosing aspects of being to collects, which must rely on something other than the science of the data collection itself A machine can not choose what data to collect. It must collect indiscriminately from its parameters. We choose what to collect from subjective notions of what we find worthy of study for instance. Pathos again. Whenever we choose to collect data, there is also data we are choosing not to collect, thus mixing science with subjectivity from the get go. Is this a problem? No not really. But we should be aware of it."

- Paradox Eleung [8]
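
A toy example of the point above (the temperatures and the daytime-sampling scenario are invented for illustration): two equally "objective" collectors measure the same day, yet the subjective choice of which hours are worth sampling already decides what the data will say.

  # Made-up hourly temperatures: a sinusoidal day peaking at noon.
  import math
  import statistics

  temps = [15 + 8 * math.sin((h - 6) / 24 * 2 * math.pi) for h in range(24)]

  # Collector A decides only working hours (9:00-17:00) are worth recording.
  daytime_only = [temps[h] for h in range(9, 18)]

  print(round(statistics.mean(daytime_only), 1))   # ~21.1: "a warm day"
  print(round(statistics.mean(temps), 1))          # 15.0: the fuller picture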


Anjana Susarla on the New Algorithmic Divide

"Many people now trust platforms and algorithms more than their own governments and civic society. An October 2018 study suggested that people demonstrate “algorithm appreciation,” to the extent that they would rely on advice more when they think it is from an algorithm than from a human. In the past, technology experts have worried about a “digital divide” between those who could access computers and the internet and those who could not. Households with less access to digital technologies are at a disadvantage in their ability to earn money and accumulate skills. But, as digital devices proliferate, the divide is no longer just about access. How do people deal with information overload and the plethora of algorithmic decisions that permeate every aspect of their lives? The savvier users are navigating away from devices and becoming aware about how algorithms affect their lives. Meanwhile, consumers who have less information are relying even more on algorithms to guide their decisions." {https://www.fastcompany.com/90336381/the-new-digital-divide-is-between-people-who-opt-out-of-algorithms-and-people-who-dont?)


Privacy is a Public Good

"How do we manage consent when data shared by one affects many? Take the case of DNA data. Should the decision to share data that reveals sensitive information about your family members be solely up to you? Shouldn’t they get a say as well? If so, how do you ask for consent from unborn future family members? How do we decide on data sharing and collection when the externalities of those decisions extend beyond the individual? What if data about me, a thirty-something year old hipster, could be used to reveal patterns about other thirty-something year old hipsters? Patterns that could result in them being profiled by insurers or landlords in ways they never consented to. How do we account for their privacy? The fact that one person’s decision about data sharing can affect the privacy of many motivates Fairfield and Engel to argue that privacy is a public good: “Individuals are vulnerable merely because others have been careless with their data. As a result, privacy protection requires group coordination. Failure of coordination means a failure of privacy. In short, privacy is a public good.” As with any other public good, privacy suffers from a free rider problem. As observed by the authors, when the benefits of disclosing data outweigh the risks for you personally, you are likely to share that data - even when doing so presents a much larger risk to society as a whole."

- Anouk Ruhaak [9]
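
The free-rider logic can be made concrete with toy numbers (invented for illustration, not from Fairfield and Engel): each person who shares keeps the full private benefit and bears only a small private risk, while most of the privacy cost lands as a thin externality on everyone else, so sharing stays individually rational even when it is a collective loss.

  # Hypothetical payoffs for one person's decision to share data.
  N = 1000          # size of the group that can be profiled together
  benefit = 10.0    # private gain from disclosing (discounts, convenience)
  own_risk = 2.0    # privacy cost the discloser bears personally
  spillover = 0.05  # privacy cost imposed on each other group member

  individual_payoff = benefit - own_risk            # 8.0 > 0: rational to share
  external_cost = spillover * (N - 1)               # 49.95 borne by others
  social_payoff = individual_payoff - external_cost
  print(individual_payoff, external_cost, social_payoff)   # 8.0 49.95 -41.95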


If There’s No AI, What is Being Promoted?

"What follows is a sketch, the foundation of a propaganda model, focused on what I’ll call the ‘AI Industrial Complex‘. By the term AI Industrial Complex, (AIIC) I mean the combination of technological capacity (or the lack thereof) with marketing promotion, media hype and capitalist activity that seeks to diminish the value of human labor and talent. I use this definition to make a distinction between the work of researchers and practical technologists and the efforts of the ownership class to promote an idea: that machine cognition is now, or soon will be, superior to human capabilities. The relentless promotion of this idea should be considered a propaganda campaign. It’s my position there is no existing technology that can be called ‘artificial intelligence’ (how can we engineer a thing we haven’t yet decisively defined?) and that, at the most sophisticated levels of government and industry, the actually existing limitations of what is essentially pattern matching, empowered by (for now) abundant storage and computational power, are very well understood. The existence of university departments and corporate divisions dedicated to ‘AI’ does not mean AI exists; it’s evidence there’s powerful memetic value attached to using the term, which has been aspirational since it was coined by computer scientist John McCarthy in 1956. Once we filter for hype inspired by Silicon Valley hustling (the endless quest to attract investment capital and gullible customers) we are left with promotion intended to shape common perception about what’s possible with computer power."

- Dwayne Monroe [10]


Massively parallel experiments and an ecology of minds, NOT 'One Worlding'

“What is going to come out of the collapse of technocratic globalism is not going to be a single dominant philosophy or worlding, but an ecology of minds. Whether or not you consider artificial intelligence a mind, it’s safe to say that what we’re dealing with here now is… the population and proliferation of new domains and new information structures. And how we proceed in this matters enormously and depends on our ability to run experiments in massive parallel… Life didn’t start as a single good idea but as a web of richly interconnected fluid identities."

- Michael Garfield [11]

Status

Steven Hales:

(links for each item in the original article)

"At this point in the development of artificial intelligence, software is better than nearly every human being at a huge range of mental tasks.

  • No one can beat DeepMind’s AlphaGo at Go, AlphaZero at chess, or IBM’s Watson at Jeopardy.
  • Almost no one can go into a microbiology final cold and ace it, or pass an MBA final exam at the Wharton School without having attended a single class, but GPT-3 can.
  • Only a classical music expert can tell genuine Bach, Rachmaninov, or Mahler from original compositions by Experiments in Musical Intelligence (EMI).
  • AlphaCode is now as good at writing original software to solve difficult coding problems as the median professional in a competition of 5,000 coders.
  • There are numerous examples of AIs that can produce spectacular visual art from written prompts alone."

(https://quillette.com/2023/02/13/ai-and-the-transformation-of-the-human-spirit/)


Key Resources

Key Articles


  • Frank Pasquale, The Second Wave of Algorithmic Accountability, Law and Political Economy (Nov. 25, 2019), https://lpeblog.org/2019/11/25/the-second-wave-ofalgorithmic-accountability/ [14] (“While the first wave of algorithmic accountability focuses on improving existing systems, a second wave of research has asked whether they should be used at all—and, if so, who gets to govern them.”).


Policy and Regulation

  • Article / Discussion Paper: The Political Economy of AI: Towards Democratic Control of the Means of Prediction. By Maximilian Kasy. IZA, April 2024 [16]

Key Books

  • To read foundational work on the power of algorithms, see generally Frank Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2015).

  • The Eye of the Master: A Social History of Artificial Intelligence. by Matteo Pasquinelli. Verso, 2023. [17]: "A “social” history of AI that finally reveals its roots in the spatial computation of industrial factories and the surveillance of collective behaviour."
  • Recursivity and Contingency. By Yuk Hui. Rowman & Littlefield International, 2019 [19]. Recommended by Bernard Stiegler: "Through a historical analysis of philosophy, computation and media, this book proposes a renewed relation between nature and technics." For details see: Towards a Renewed Relation Between Nature and Technics.

  • Algorithms of Resistance: The Everyday Fight against Platform Power. By Tiziano Bonini and Emiliano Treré. The MIT Press, 2024. doi [21]
