Benjamin Bratton on the History of Philosophy of AI


Video via https://antikythera.org/after-alignment

Description

Excerpted from transcript:

"The history of AI and the history of the Philosophy of AI are deeply intertwined, from Leibniz to Turing to Hubert Dreyfus to today. Thought experiments drive technologies, which in turn drive a shift in the understanding of what intelligence itself is and might become, and back and forth.

But for that philosophy to find its way today, for this phase of AI, it needs to expand beyond the European philosophical tradition's sense of what AI even is and what it connotes. The connotation of artificial intelligence drawn from the Deng era in China was that of a relation to industrial mass mobilization. The Eastern European tradition includes what Stanislaw Lem called existential technologies, just as in the Soviet era it meant something more like governance rationalization. All of these contrast with the Western individualized and singular anthropomorphic models that still dominate contemporary debates today.

To ponder seriously the planetary pasts and futures of AI, we must extend and alter our notions of artificiality as such and of intelligence as such, and we must not only draw from this range of traditions but also, to a certain extent, almost inevitably, leave them behind.

What Turing proposed in his famous test as a sufficient condition for intelligence, for example, has instead become a set of solipsistic demands and misrecognitions. To idealize what appears and performs as most “human” in AI, either as praise or as criticism, is to willfully constrain our understanding of what machine intelligence is as it is.

And this includes language itself. Large Language Models and their eerily convincing text prediction capabilities have been used to write novels and screenplays, to make images and movies, songs, voices, symphonies, and are even being used by biotech researchers to predict gene sequences for drug discovery. Here, at least, the language of genetics really is a language. LLMs also form the basis of generalist models capable of mixing inputs and outputs from one modality to another, interpreting an image they see so that they can instruct the movement of a robot arm, and so forth. Such foundational models may become a new kind of public utility around which industrial sectors organize: what we call cognitive infrastructures.

So whither speculative philosophy then? Well, I honestly don’t think that society at present has the critical and conceptual terms to properly approach this reality head on. As a coauthor and I wrote recently, “reality overstepping the boundaries of comfortable vocabulary is the start, not the end, of the conversation. Instead of groundhog-day debates about whether machines have souls, or can think like people imagine themselves to think, the ongoing double-helix relationship between AI and the philosophy of AI needs to do less projection of its own maxims and instead construct newer, more nuanced vocabularies for analysis, critique, and composition based on the Weirdness right in front of us.”

And that is really the topic of my talk, the weirdness right in front of us and the clumsiness of our language to engage with it."