Comparison of Organism and Algorithmic Capabilities

From P2P Foundation


Johannes Jäger:

"Here are a few basic things a human (or even a bacterium) can do, which AI algorithms cannot (and probably never will):

Organisms are embodied, while algorithms are not. The difference is not just being located in a mobile (e.g., robot) body, but a fundamental blurring of hardware and software in the living world. Organisms literally are what they do. There is no hardware-software distinction. Computers, in contrast, are designed for maximal independence of software and hardware.

Organisms make themselves, their software (symbols) directly producing their hardware (physics), and vice versa. Algorithms (no matter how "smart") are defined purely at the symbolic level, and can only produce more symbols, e.g., language models always stay in the domain of language. Their output may be instructions for an effector, but they have no external referents. Their interactions with the outside world are always indirect, mediated by hardware that is, itself, not a direct product of the software.

Organisms have agency, while algorithms do not. This means organisms have their own goals, which are determined by the organism itself, while algorithms will only ever have the goals we give them, no matter how indirectly. Basically, no machine can truly want or need anything. Our telling them what to want or what to optimize for is not true wanting or goal-oriented behavior.

Organisms live in the real world, where most problems are ill-defined, and information is scarce, ambiguous, and often misleading. We can call this a large world. In contrast, algorithms exist (by definition) in a small world, where every problem is well defined. They cannot (even in principle) escape that world. Even if their small world seems enormous to us, it remains small. And even if they move around the large world in robot hardware, they remain stuck in their small world. This is exactly why self-driving cars are such a tricky business.

Organisms have predictive internal models of their world, based on what is relevant to them for their surviving and thriving. Algorithms are not alive and don't flourish or suffer. For them, everything and nothing is relevant in their small worlds. They do not need models and cannot have them. Their world is their model. There is no need for abstraction or idealization.

Organisms can identify what is relevant to them, and translate ill-defined into well-defined problems, even in situations they have never encountered before. Algorithms will never be able to do that. In fact, they have no need to since all problems are well-defined to begin with, and nothing and everything is relevant at the same time in their small world. All an algorithm can do is find correlations and features in its preordered data set. Such data are the world of the algorithm, a world which is purely symbolic.

Organisms learn through direct encounters, through active engagement, with the physical world. In contrast, algorithms only ever learn from preformatted, preclassified, and preordered data (see the last point). They cannot frame their problems themselves. They cannot turn ill-defined problems into well-defined ones. Living beings will always have to frame their problems for them.

I could go on and on. The bottom line is: thinking is not just "optimizing hard" and producing "complicated outputs." It is a qualitatively different process than algorithmic computation. To know is to live."


The unfounded metaphysical assumptions of techno-transcendentalism

Johannes Jäger:

"The metaphysical assumptions that techno-transcendentalism is based on are extremely dubious. We've already encountered this issue above, but to understand it in a bit more depth, we need to look at these metaphysical assumptions more closely.

Metaphysics does not feature heavily in any of the recent discussions about AGI. In general, it is not a topic that a lot of people are familiar with these days. It sounds a little detached and old-fashioned — you know, untethered in the Platonic realm. We imagine ancient Greek philosophers leisurely strolling around cloistered halls. Indeed, the word comes from the fact that Aristotle's "first philosophy" (as he called it) was placed right after his "Physics" in later collections of his works. In this way, it is literally after or beyond ("meta") physics.

In recent times, metaphysics has fallen into disrepute as mere speculation. Something that people with facts don't have any need for. Take the hard-nosed logical positivists of the Vienna Circle in the early 20th century. They defined metaphysics as "everything that cannot be derived through logical reasoning from empirical observation," and declared it utterly meaningless. We still feel the legacy of that sentiment today. Many of my scientist colleagues still think metaphysics does not concern them. Yet, as philosopher Daniel Dennett rightly points out: "there is no such thing as philosophy-free science; there is only science whose philosophical baggage is taken on board without examination."

And, my oh my, there is a lot of unexamined baggage in techno-transcendentalism. In fact, the sheer number of foundational assumptions that nobody is allowed to openly scrutinize or criticize is ample testament to the deeply cultish nature of the ideology.

Here, I'll focus on the most fundamental assumption on which the whole techno-transcendentalist creed rests: every physical process in the universe must be computable.

In more precise and technical terms, this means we should be able to exactly reproduce any physical process by simulating it on a universal Turing machine (an abstract model of a digital computer with unbounded memory and running time, invented in 1936 by Alan Turing, the man who gave the Turing Award its name).
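To make the abstraction concrete, here is a minimal sketch of a Turing machine simulator in Python. The machine, its transition-table format, and the bit-flipping task are purely illustrative conventions of this sketch, not anything from the text:

```python
# A minimal Turing machine simulator, for illustration only. The
# machine below flips every bit of a binary string and halts when it
# reads a blank; the transition-table format is our own convention.

def run_turing_machine(transitions, tape, state="start", blank="_"):
    """Run until the machine enters the 'halt' state; return the tape."""
    tape = list(tape)
    head = 0
    while state != "halt":
        if head == len(tape):       # extend the tape on demand
            tape.append(blank)
        state, write, move = transitions[(state, tape[head])]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# (state, symbol read) -> (next state, symbol written, head move)
flip_bits = {
    ("start", "0"): ("start", "1", "R"),
    ("start", "1"): ("start", "0", "R"),
    ("start", "_"): ("halt", "_", "R"),
}

print(run_turing_machine(flip_bits, "1011"))  # -> 0100
```

The "universal" machine of the text is one whose transition table can simulate any other machine's table; the simulator above plays that role in miniature, reading a table as data and executing it.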

To clarify, the emphasis is on "exactly" here: techno-transcendentalists do not merely believe that we can usefully approximate physical processes by simulating them in a digital computer (which is a perfectly defensible position) but, in a much stronger sense, that the universe and everything in it — from molecules to rocks to bacteria to human brains — literally is one enormous digital computer. This is techno-transcendentalist metaphysics.

This universal computationalism includes, but is not restricted to, the simulation hypothesis. Remember: if the whole world is a simulation, then there is a simulator outside it. In contrast, the mere fact that everything is computation does not imply a supernatural simulator.

Turing machines are not the only way to conceptualize computing and simulation. There are other abstract models of computation, such as lambda calculus or recursive function theory, but they are all equivalent in the sense that they all yield the exact same set of computable functions. What can be computed in one paradigm can be computed in all the others.

This fundamental insight is mathematically codified by something called the Church-Turing thesis. (Alonzo Church was the inventor of lambda calculus and Turing's PhD supervisor.) It unifies the general theory of computation by saying that every effective computation (roughly, anything that can be computed by following a finite, step-by-step mechanical procedure) can be carried out by an algorithm running on a universal Turing machine. This thesis cannot be proven in a rigorous mathematical sense (basically because we do not have a precise, formal, and general definition of "effective computation"), but it is also not controversial. In practice, the Church-Turing thesis is a very solid foundation for a general theory of computation.
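The equivalence of computational paradigms can be made tangible in a short sketch (using Python as a neutral host language; the example is ours, not from the text). It defines the factorial function twice: once in a lambda-calculus style, using the Z fixed-point combinator so the function never refers to itself by name, and once as an imperative loop in the state-and-iteration spirit of a Turing machine. Both yield the same computable function, as the Church-Turing thesis leads us to expect:

```python
# The same computable function expressed in two paradigms:
# (1) a lambda-calculus style definition using the Z fixed-point
#     combinator (no named recursion anywhere in the body), and
# (2) an imperative loop. Illustrative sketch only.

# Z combinator: a fixed-point combinator that works under Python's
# strict (eager) evaluation.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Factorial, defined without the function naming itself.
fact_lambda = Z(lambda self: lambda n: 1 if n == 0 else n * self(n - 1))

def fact_loop(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

print(fact_lambda(6), fact_loop(6))  # -> 720 720
```

That the two definitions agree on every input is one small instance of the general fact that anything computable in one paradigm is computable in the others.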

The situation is very different when it comes to applying the theory of computation to physics. Assuming that every physical process in the universe is computable is a much stronger form of the Church-Turing thesis, called the Church-Turing-Deutsch conjecture. It was proposed by physicist David Deutsch in 1985, and later popularized in his book "The Fabric of Reality." It is important to note that this physical version of the Church-Turing thesis does not logically follow from the original. Instead, it is intended to be an empirical hypothesis, testable by scientific experimentation.

And here comes the surprising twist: there is no evidence at all that the Church-Turing-Deutsch conjecture applies. Not one jot. It is mere speculation on the part of Deutsch, who surmised that the laws of quantum mechanics are computable, and that they describe every physical process in the universe. Both assumptions are highly doubtful.

In fact, there are solid arguments that quite convincingly refute them. These arguments indicate that not every physical process is computable; indeed, they suggest that no physical process can be precisely captured by simulation on a Turing machine. For instance, neither the laws of classical physics nor those of general relativity are entirely computable (since they contain noncomputable real numbers and infinities). Quantum mechanics introduces its own difficulties in the form of the uncertainty principle and its resulting quantum indeterminacy. The theory of measurement imposes its own (very different) limitations.
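The point about real numbers can be illustrated at the most elementary level (the example is ours, not the text's): a digital computer represents the continuum by a finite set of binary approximations, so even a decimal as simple as 0.1 is not stored exactly, let alone the noncomputable reals that make up almost all of the continuum:

```python
# Digital computers represent the continuum of real numbers by finite
# binary approximations. The decimal 0.1 has no exact binary
# representation, so even elementary arithmetic on it is approximate.
# Illustrative sketch only.

from fractions import Fraction

# The 64-bit float literal 0.1 actually stores a nearby dyadic rational
# (a ratio with a power-of-two denominator), not one tenth:
print(Fraction(0.1))
print(Fraction(0.1) == Fraction(1, 10))  # -> False

# The rounding shows up in ordinary arithmetic:
print(0.1 + 0.2 == 0.3)                  # -> False
print(abs((0.1 + 0.2) - 0.3))            # a small but nonzero error
```

This does not by itself settle the metaphysical question, but it is a concrete reminder that exact simulation of real-valued laws is a far stronger demand than the useful approximation computers actually deliver.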

Beyond these quite general doubts, a concrete argument that some physical processes are noncomputable is provided by Robert Rosen's conjecture that living systems (and all those systems that contain them, such as ecologies and societies) cannot be captured completely by algorithmic simulation. This theoretical insight, based on the branch of mathematics called category theory, was first formulated in the late 1950s, presented in detail in Rosen's book "Life Itself" (1991), and later derived in a mathematically airtight manner by his student Aloysius Louie in "More Than Life Itself." This work is widely ignored, even though its claims remain firmly standing, despite numerous attempts at refutation. This, arguably, renders Rosen's claims more plausible than those derived from the Church-Turing-Deutsch conjecture."