From Scientism and the Mechanistic Worldview To Expressive Organicism

From P2P Foundation

* Article: Cold Reason, Creative Subjectivity: From Scientism and the Mechanistic Worldview To Expressive Organicism. Otto Paans. Borderless Philosophy 5 (2022): 161-212


See also: Expressive Organicism.


Otto Paans:

"In this essay, the issues to which organicist thought is a response will therefore be discussed in detail.

Section II starts with a synoptic survey of a complex of eight commitments that jointly characterize mainstream contemporary Western metaphysics, namely:

(i) physicalism about matter,

(ii) the thesis of universal natural mechanism,

(iii) scientific naturalism,

(iv) covert dualism about the structure of reality,

(v) the mind-body problem,

(vi) epiphenomenalism about consciousness,

(vii) adherence to conceptualism, and

(viii) a disregard for anything that does not fit this structure, in turn betraying some deep-seated assumptions about the nature of thought itself.

This set of eight commitments exacts a heavy cost on thinking:

  • it enforces the repetition of existing theories in a new form;

  • it epicyclically reiterates the chain of problems that emerges when two or three commitments lead down the same conceptual alley again and again;

  • it nurtures a blatant disregard for an entire range of human experiences that does not sit easily with the assumptions underlying the set of basic commitments;

  • and it is unable to overcome its own presuppositions.

Section III, a tripartite section, deals with the philosophical consequences of mainstream contemporary Western metaphysics. Notably, three assumptions that the eight commitments jointly entail cause philosophical problems that lead to gaps in our understanding of the world. Working around these gaps engenders restrictive habits of thought or thought-shapers (Hanna and Paans, 2021), such as

(i) thinking of matter as inherently inert,

(ii) physical reductionism and

(iii) rigid part-whole thinking.

These three habits appear as the only viable way of thinking, but in fact, they can be straightforwardly questioned and undermined once their shortcomings are brought to light.

Section IV provides a description of a future philosophy and more specifically contains the contours of a metaphysical outlook called expressive organicism.

Therefore, its scope will be broad, and there will be ample opportunity for filling out details or deepening themes that surface in the discussion. Just as a charcoal sketch expresses the essence of an artistic idea without illuminating its details as a fine line drawing would, so too this section provides a synoptic overview of the areas that organicist metaphysics may touch and indeed revolutionize. As such, it is more a “fragment,” or a “sketch-of-system,” than a fully matured philosophical theory.

It is my aim to demonstrate how an organicist metaphysics can escape the conceptual prison into which scientism and the mechanistic worldview have driven us."



The Open-Endedness of Matter According To Organicism

Otto Paans:

"Recent frameworks of philosophical thought like Object-Oriented Ontology, Actor-Network Theory, or recent forms of panpsychism do take seriously the thought that all matter has inherent metaphysical continuity with organismic life and/or the mental. This orientation, then, allows us to rethink all matter as inherently dynamic, by virtue of its being processual, purposive, and spontaneously creative; and when such processes reach a degree of complexity at which they achieve an operational distinction between their inner/endogenous states and the outer/exogenous states of affairs surrounding them, thereby becoming irritable and responsive to their environment, they are organisms (Bennett 2010). This position, which harks back to Schopenhauer, Bergson, Lloyd Morgan, and Whitehead (Schopenhauer, 1844/1969: vol. 1, §28, vol. 2, ch. XXVI; Bergson, 1907/1944; Lloyd Morgan, 1923; Whitehead, 1929/1978), is a radical and serious alternative to the scientistic, mechanistic, and materialist/physicalist assumptions that plague contemporary science and philosophy.


The sea does not exhaust the hidden depths of the hailstones that fall in it; and neither do the ants exhaust all the possibilities of the fruit they eat. They touch only partially, and never totally.

If we project this profound thought on the notion of matter as such, and not just on some discrete objects like tables, post-boxes, automobiles, or anthills, we can then envision matter as something that possesses a limitless depth. The very notion of an Object-Oriented Ontology must be driven to its most radical conclusion by taking this step. Instead of asserting that the universe consists of objects on the same ontological footing, we can say with equal truth that the basic building blocks of the universe possess all those properties that make the limitless depth of larger, compound objects possible.

The question then remains: how and why is this so?

How and why is it that matter has this depth, or this open-endedness?

For a possible answer, we must of necessity turn to the notion of organicism.

This doctrine holds that there is a basic metaphysical continuity between the fundamental properties of matter and mind, and also between non-organismic and organismic dynamic processes.

Consequently, what we call “matter” is the result of non-equilibrium energy flows, culminating in both non-organismic and organismic dynamic processes in different complex configurations, including minded animals. Organismic processes, including minded animals, are not reducible to non-organismic processes, although the latter necessarily play a partially constitutive role in the emergence and unfolding of the former. But to assume or assert that biological properties or mental properties are either reducible to or naturally/nomologically supervenient on purely mathematical and/or inherently mechanical physical properties, is an Ur-error, a philosophical Original Sin, according to the organicist thinker.


In fact, all of the eight commitments can be traced back directly or indirectly to this Ur-mistake, and every paradox in the eight commitments is a direct or indirect consequence of this mechanistic and reductive or non-reductive materialist/physicalist line of thought, whereby the entities, properties, relations, and laws of every “higher” level are held to be fully explicable in terms of the entities, properties, relations, and laws of a “lower” level, and ultimately the “lowest” or fundamental level. But with every upward or downward translation from one level to the other, crucial contents are lost and/or mysteriously transformed.


If we combine the assumption that matter is open-ended with the core thesis of organicism, that postulates a direct continuity between energy flows going back to the Big Bang singularity, and biological and/or minded animal processes, we end up with a picture of matter that is decidedly radical: all of a sudden, matter is literally filled with limitless potentiality, in a way that is roughly equivalent to Aristotle’s notion of matter or hyle as dunamis. The postulate that matter is open-ended entails that it can become and do anything that is natural—and for all we know, we have seen only the tip of the iceberg here. There can be powers in matter that we as yet cannot fathom or unlock, just as no one realized how explosive and deadly nuclear fission could be. It took until the early 20th century before we possessed the theory and the instruments to unlock a hidden set of potentials that lay dormant within physical particles.

The direct and necessary metaphysical continuity and connection between non-equilibrium energy flows going back to the Big Bang singularity and minded animal life entails that the varieties of kinds of energy flows could lead to many forms of minded animal life. Ours is just one evolutionary tree that developed up to Homo sapiens, but there is no inherent necessity in this developmental direction. Viewed from inside the standpoint of our evolution on Earth, obviously, no natural laws were violated. But seen from outside that standpoint, the spontaneous creativity of nature could have gone another way at any given moment, and we might just as well have ended up with the mushroom people from Ambergris, instead of minded human animals, Homo sapiens. Again, all this—from the destructive power of nuclear fission to the emergence of minded primates—is traceable to the inherent spontaneous creativity built into matter.


A radical change in paradigm and “root metaphor” is now in order and long overdue, because the very cognitive and conceptual representational schemes that we have developed in the naïve or sophisticated mechanistic way of thinking have conspired to structure our mainstream, orthodox, standard modes of thought precisely around the disastrously narrowly restricted range of possibilities they offer. If it were not already difficult enough to step outside the perceptual-conceptual prison that Schopenhauer calls the principium individuationis, the cognitive and conceptual representational schemes we have developed within the “high modernist” (Scott, 1998: p. 4) mechanistic worldview that has ideologically dominated since the turn of the 20th century, only reinforce our unargued and uncritical belief in their efficacy and their irreplaceability, as if they really and truly were the “only game in town” (Wilson, 1999). But the high modernist mechanistic worldview has not been distilled from naively observing and then reflecting on the world and then justifiedly believing its self-evident empirical material properties and non-empirical formal properties to be the only ones that actually do or ever possibly could exist; no: the very cognitive and conceptual representational schemes developed on this basis since 1900 have instead taken the mechanistic worldview as a brute and indubitable given, thereby implicitly or explicitly excluding or rejecting all other significantly alternative models of our thinking and the world."


Technology as Artificial Representation

Otto Paans:

"The very idea of “natural representation,” when combined with the 17th-century Cartesian idea of an objective space in which we can represent by means of coordinates, contributed significantly to the emergence of the mechanistic worldview: not only is the natural world nothing but a large-scale complex machine, but also the human perceptual mind is nothing but a small-scale simple machine like a pinhole camera, i.e., a camera obscura. This thought-shaping mental model—the human perceptual mind as a camera obscura—which more or less covertly lies behind the shaped thought that the technology associated with the leading formal and natural sciences is the final answer to the problem of mental representation—whether it is a pinhole camera, a Brownie camera, a movie camera, or a digital camera application in a smartphone—has proven to be a remarkably influential and persistent myth. The increasing mathematization of the sciences, the models for problem-solving derived from engineering, the reduction of biology to statistical mathematics, evolutionary genetics, chemistry, and physics, and the reduction of animal behavior to Turing-computable algorithms, as well as the reduction of consciousness to physico-neural processes, all point in the same conceptual direction: the variety of life itself must be brought under one idealizing system of representation. And, not surprisingly, that very idiom is conceptual and limited to the operations of mathematizability and/or formal logic. The fact that science itself speaks in abstractions and idealizations does not in the slightest stop the advance of mechanistic thinking, because it justifies its existence by appeals to its objectivity and practical efficacy. Thereby, it reduces life (and in its wake, Being) to phenomena that are understood once they can be replicated or described in mathematical (and increasingly digital) terms, potentially making them available for artificial reproduction."


Against Atomism and Decompositionism-Recompositionism

Otto Paans:

"Relatively new sciences like ecology and environmental toxicology are already departing from an exclusive reliance on mathematical modelling, although this still plays a central role. Be that as it may, the underlying issue is now how to replace atomistic thinking and invent a new form of cognitive representation that adequately addresses the problems that we face at the beginning of the 21st century. In recent years, there have been interesting developments in thinking about objects from a mereological rather than decompositional standpoint.

Examples in this philosophical orientation are Object-Oriented Ontology (OOO), Actor-Network Theory (ANT), Speculative Realism, Onticology, and new ecological thought. All these new developments have one thing in common: they reject the easy, hard-and-fast distinction between part and whole in favor of relationality. Once one lets go of the idea that every compound entity is constituted by easily definable parts that can be removed and re-assembled at will, one has taken the first step in leaving the mechanistic worldview behind. This does not mean that its utility as such is questioned, but simply that its area of application is constrained to the domain where it is actually useful, appropriate, and informative. All areas in which the scientistic, mechanistic, and materialist/physicalist commitment to atomism led to oversimplification can be revisited with a new and more appropriate set of concepts in mind. An outstanding example is the field of design science, which, as we saw, had been forcibly shoehorned into the natural mechanist worldview, only to discover its own nature rather recently."



Otto Paans:

"Surveyed from a bird’s-eye point of view, mainstream contemporary Western metaphysics has rejected any form of organicism as foundational principle of the cosmos, and consequently it has rejected this doctrine as a “root metaphor” (Pepper, 1942). A root metaphor is a fundamental explanatory model that is captured in a single complex image. Organicism takes the living organism (as processual, purposive, and self-organizing, in a homeostatic balance and symbiosis with its natural environment) as its root metaphor, as opposed to the mechanistic worldview, which takes the machine (for example, in different eras, the clock, the steam engine, or the digital computer) as its root metaphor.

The contemporary rejection of organicism is ironic, as this was precisely one of the working principles of many German idealists, a philosophical school belonging to what may well be regarded as one of the most inventive periods of modern Western philosophy, and moreover a movement that still makes its influence felt, even in those areas where contemporary metaphysics reigns supreme. We could easily pursue the pedigree of philosophical organicism backwards in time, encountering earlier formulations of its core concepts in the thought of Spinoza and Duns Scotus; but equally, we might survey its pervasive influence in 18th, 19th, and early 20th century thought, notably in the philosophies of the later Kant, Goethe, Fichte, Hegel, Schelling, Schopenhauer, Bergson, and A. N. Whitehead, although this is not the aim of this essay. Suffice it to say that the organicist worldview has a long history in Western philosophy, and under vastly different forms, also in various Eastern philosophies. However, with the rise of Anglo-American classical or post-classical Analytic philosophy, and its scientistic alliance with the formal and natural sciences, philosophical organicism has been explicitly or implicitly dismissed as anti-scientific."

(Source: Otto Paans, Cold Reason, Creative Subjectivity: From Scientism and the Mechanistic Worldview To Expressive Organicism. Borderless Philosophy 5 (2022): 161-212)

Eight Core Commitments of Mainstream Contemporary Western Metaphysics

Otto Paans:

"Mainstream contemporary Western metaphysics can be summarized in a list of eight core commitments. Before detailing each of them individually, it is important to stress that they are linked together as a chain in which one commitment gives rise to another, or in which two commitments reinforce one another.

(i) Physicalist Metaphysics

Physicalist metaphysics maintains that all facts in the world, including all mental facts and social facts, are either reducible to (whether identical to or “logically supervenient” on) or else strictly dependent on, according to natural laws (“naturally supervenient” or “nomologically supervenient” on), fundamental physical facts, which in turn are naturally mechanistic. Consequently, the nature of matter is held to be material or physical. Matter, as the fundamental ground of the universe, consists of material particles and processes that can be reductively analysed into their constituent parts and that interact following natural laws.

(ii) Universal Natural Mechanism and Causal Determinism/Indeterminism

The thesis of universal natural mechanism says that all the causal powers of everything in the natural world are fixed by what can be digitally computed on a universal deterministic or indeterministic real-world Turing machine, provided that the following three plausible “causal orderliness” and “decompositionality” assumptions are all satisfied:

(i) its causal powers are necessarily determined by the general deterministic or indeterministic causal natural laws, especially including the Conservation Laws, together with all the settled quantity-of-matter-and/or-energy facts about the past, especially including The Big Bang,

(ii) the causal powers of the real-world Turing machine are held fixed under our general causal laws of nature, and

(iii) the “digits” over which the real-world Turing machine computes constitute a complete denumerable set of spatiotemporally discrete physical objects.

(iii) Scientific Naturalism

Scientific naturalism includes four basic theses:

(i) anti-mentalism and anti-supernaturalism, which reject any explanatory appeal to non-physical or non-spatiotemporal entities or causal powers,

(ii) scientism, which says that the exact sciences are the paradigms of reasoning and rationality, as regards their content and their methodology alike,

(iii) physicalist metaphysics, as described two paragraphs above, and

(iv) empiricist epistemology, which says that all knowledge and truths are a posteriori.

The direct implication of the conjunction of these four theses is that everything which does not fit the scientific image can be safely regarded as epiphenomenal, folkloristic, quaint, superstitious, a matter of taste, or else downright naïve. So, scientific naturalism holds that the nature of knowledge and reality are ultimately disclosed by pure mathematics, fundamental physics, and whatever other reducible natural sciences there actually are or may turn out to be; that this is the only way of disclosing the ultimate nature of knowledge and reality; and that even if everything in the world, including ourselves and all things human (including language, mind, and action), cannot be strictly eliminated in favour of or reduced to fundamental physical facts, nevertheless everything in the world, including ourselves and all things human, is metaphysically grounded on and causally determined by fundamental physical facts. Hence scientific naturalism is committed to providing a “value-neutral set of formulae that express the underlying structure of the natural universe”, just as the Vienna Circle envisioned.

The combination of metaphysical physicalism, natural mechanism/causal determinism/indeterminism and scientific naturalism can be regarded as the core commitments of mainstream contemporary Western metaphysics. As a theoretical package, they naturally lead to the following five peripheral commitments.

(iv) Covert Dualism about the Structure of Reality

While physicalist metaphysics attempts to classify all scientific findings into one overarching explanatory, scientific-naturalist framework—a Vienna-Circle-like “unified science” (Wilson, 1999)—adherence to the ideal of mathematizability leads to The Two Images Problem (Sellars, 1963b; Hanna and Paans, 2020). On the one hand, there is the objective, non-phenomenal, perspectiveless, mechanistic, value-neutral, impersonal, and amoral metaphysical picture of the world delivered by logic, pure mathematics, and the fundamental or “hard” natural sciences. And on the other hand, there is the subjective, phenomenal, perspectival, teleological, value-laden, person-oriented, and moral metaphysical picture of the world yielded by the conscious experience of human beings. In 1963, Wilfrid Sellars aptly and evocatively dubbed these two sharply opposed world-conceptions “the scientific image” and “the manifest image” (Sellars, 1963b).


By “science” Sellars means not only the formal sciences of logic, mathematics, and computer science, but also and above all the “hard” natural sciences of physics—including cosmology and particle physics—astronomy, and chemistry. Above all, it is clear that Sellars’s appeal to “science” fully includes natural mechanism, and also that the predictive precision of the formal and hard natural sciences is taken as incontrovertible evidence of their reliability and truth, thereby reinforcing the scientistic mindset. Correspondingly, according to the standard construal of scientific theory-reduction, both astronomy and chemistry have a fully mathematically describable and microphysical basis in fundamental physical entities, properties, facts, and processes, and therefore they are both fully grounded in a fundamental, naturally mechanistic physics. Many or even most scientistic philosophers are reductive physicalists, who hold that all worldly facts are nothing over and above the fundamental physical facts. Sellars himself, however, by virtue of his appeal to what he calls “the logical space of reasons” that characterizes concept-driven human thinking in the manifest image (Sellars, 1963c: p. 169), is a scientistic and sophisticated non-reductive physicalist.

However, both reductive and non-reductive physicalism alike introduce a covert dualism that creeps in via the scientific and philosophical backdoor. If the world exists fundamentally as the aggregate of manifestly real objects and relations and as the ideal, mathematized descriptions of these objects and relations, then even if one rejects ontological dualism, nevertheless explanatory dualism re-emerges. This is not a dualism of mind versus matter as equally two basic but essentially distinct cosmic substances, but instead a dualism between object and conceptual description.

Explanatory dualism plagues mainstream contemporary Western metaphysics in various guises. In its platonistic version, it reiterates a two-world version of platonism, in which the Ideas are the metaphysical correlates of ideal mathematical descriptions (Dunham et al., 2014: pp. 19-25). Such a platonizing tendency can be found, for example, in the work of Alain Badiou, when he maintains that “mathematics is ontology,” re-iterating the dualism that has come to characterize metaphysics (Badiou, 2013, 2014: section 1). We can find the same tendency in the work of Quentin Meillassoux when he insists on the ideal character of mathematization as a way to break out of the “Correlationist Circle,” i.e. the thesis that thinking and being can’t be thought apart (Meillassoux, 2008; and for a concise introduction and response to Meillassoux’s thought, see Harman, 2018: pp. 123–155). And on the other side of the Channel, leaving aside longstanding rhetorical and stylistic differences between Anglo-American and French philosophy, Roger Penrose holds very similar platonistic views (Steiner, 2000). In all three cases, mathematization is seen as a fool-proof way to gain insight into the true, underlying structure of the universe, unencumbered by the deceptive senses, in the best Cartesian sense.

The Meillassoux-style rejection of Correlationism points towards the other mode in which dualism plagues contemporary metaphysics: any “two-world” version of Kant’s or Kantian transcendental idealism. According to the “two-world” reading, Kant re-iterated the platonistic distinction between Idea and reality in a Leibnizian-Wolffian mode by emphasizing in the Critique of Pure Reason how the way we perceive reality is inevitably pre-structured by the structure of our minds, thereby internalizing a distinction that was formerly thought to be a feature of the physical world itself.

Whether we agree or disagree with Kant’s categories and Anschauungsformen does not matter here; what counts is the core “two-worlder” thesis that there is a fundamental gap between phenomena as registered by the human senses and noumena that make up reality as it is, but that can only be inferred rather than proven.

The same worry can be raised in another register: a variety of mathematizing instruments allow us to predict and model physical phenomena that our cognitive apparatus cannot register in an unaided manner (Galison, 1997; Knorr-Cetina, 1999; Hossenfelder, 2018). Electron microscopes, sensors, simulations, and scientific models alike all depart from idealized mathematical descriptions that have a certain predictive power, but that structure our access to the world in much the same way that our senses determine our mode of access to the world. In other words: an argument from technocratic efficacy cannot take any skeptical worries away. It merely re-iterates the initial worry that the “really real world” is out there, while we can only obtain incomplete descriptions of it. The core lesson of the “two-world” version of Kant’s or Kantian transcendental idealism is transposed in yet another register: it resides now in our tools, making idealized mathematical descriptions appear as incomplete copies of the real world.

In all these examples, the traumatic distinction between the noumenon or thing-in-itself and phenomenon or mere appearance operates as an insurmountable chasm.

In its platonistic version, the Ideal world is out of reach, while the real world itself is nothing but an imperfect and flawed blueprint of the unreachable Ideal world. This ideal world, it seems, can be approached by the absolute reliability and infallibility of mathematics. According to the “two-world” reading of Kant’s and Kantian transcendental idealism, the world we experience consists exclusively of phenomena that mysteriously causally emerge from a noumenal shadow-world about which we can say nothing that is empirically meaningful or “objectively valid,” for better or worse. In its scientific version, the best tools we have yield only idealized fragments of a reality that is unimaginably deeper than even our best descriptions can reach, thus always eluding us.

(v) The Mind-Body Problem

If the previous commitment of mainstream contemporary metaphysics led to the question of our access to reality, its direct correlate, familiarly called “the mind-body problem,” highlights another consequence of the core commitments. If the universe is entirely conceivable in physicalist terms, and if natural mechanism and causal determinism/indeterminism hold true, then somehow or another matter must give rise to mind if any sort of Cartesian dualism (whether in its substance dualist or property dualist versions) is to be avoided.

The problem here is twofold.

On the one hand, no matter what advances have been achieved in neuroscience, there remains an ontological and/or explanatory “gap” between

(i) physical events in the brain, or what can be inferred from the experimental measurements of these brain-events, and

(ii) a subject’s conscious experience.

An electrical current that runs from point A to B can cause (or at least be regularly correlated with) a ticklish sensation experienced by a given conscious subject; but to claim that the entire question is settled by this relationship is preposterous. We might as well induce an entire range of mental states and beliefs in a patient, simply by intervening in the electrical currents in the brain. And indeed, that has been the core proposal underlying mind-brain identity theory: for each mental state one can identify a corresponding brain state to which that mental state is identical. But how brains generate conscious and self-conscious agents is a mystery. The physicalist must restrict his range of explanations, because the metaphysical picture that underlies his reasoning does not allow for explanations outside the narrow frame of physicalism and natural mechanism.

On the other hand, in an attempt to bridge the “gap,” there is an entire literature that seeks to deal with body and mind in logically independent terms. The most poignant examples in this category may be the thought experiments in the Analytic tradition, in which brains are freely moved around between persons, cut up, or connected to computers. The entire idea that a brain can be detached and can be considered apart from the body of which it is—after all is said and done—an integral part has led to an impressive range of theories that are (from a certain point of view) absolutely logically sound and at the same time completely misleading. Thought experiments in which brains are freely moved from one skull to another, bodies are replaced and dismantled, persons are resurrected in someone else’s body, etc., belong to Gothic novels and horror movies and are worlds away from the “cold reason” that the Vienna Circle championed. Yet, such thought-experiments also betray a deep-seated mechanistic view of organisms and the nature of organismic life. The Cartesian view that animals are mere machines, and that their screams of pain are just sounds of components being put under strain, underlies this reductive materialist/physicalist line of reasoning (Descartes, 1985: part V). Attempts to avoid dualism and reductive materialism/physicalism have resulted in a third way of trying to overcome the gap: non-reductive materialism/physicalism. But this has led to yet another pernicious commitment.

(vi) Epiphenomenalism about Consciousness

Non-reductive materialism/physicalism says that consciousness necessarily depends upon the fundamentally physical world, but does not reduce to it. But this amounts to claiming that consciousness is an epiphenomenon, since all causally efficacious facts are fundamentally physical. Consciousness emerges or naturally/nomologically supervenes on the fundamentally physical world, when enough neurons act together, adding a new level of complexity that is somehow reflexive: i.e. the higher levels can refer back to the underlying levels. In theory, this layer-like structure could be modelled using very powerful computers. A variation on this idea – derived from distributed computing – is connectionism, which claims that an interconnected network stores data everywhere, and the brain functions as a terminal to retrieve it or to combine key terms, a workplace of the mind (Stich, 1988). Variations on this way of approaching the mind have been developed by Stanislas Dehaene and Bernard Baars (Baars, 1996, 1997; Dehaene, 2014). Notwithstanding their fascinating findings, the existence of the “gap” is merely explained away in the connectionist approach: relegated to the sideline because it simply does not fit the explanatory, natural-mechanistic framework that one uses as the basis of the argument. We find the same reductive tendency in the work of Daniel Dennett, whose book, Consciousness Explained, was humorously nicknamed Consciousness Ignored (Dennett, 1991).

His slogan “competence without comprehension” nicely captures the core thought that underlies the epiphenomenalist approach: deep down, there is no such thing as consciousness, and therefore we can dispense with all talk about a first-person, embodied, and lived perspective as merely subjective myth-making or pretence. Again, the “gap” looms. Let’s suppose that we possess all correct scientific information about the brain, yet first-person perspectives, or collective and individual lived experiences, can still be missing: this is the so-called “zombie argument.” Now, suppose that, in addition to the brain about which we have all correct information, we add consciousness as an extra fact, somehow causally related to the brain. This can be nothing but a causally inert shadow of the brain: an epiphenomenon, just as the screams of the tortured animal were nothing but the sounds caused by mechanical operations.

Epiphenomenalism turns on one simple thought: just as enough interacting water molecules mysteriously cause the macroscopic quality “wetness,” so can many neurons acting together mysteriously cause the macroscopic quality “consciousness” (or, if you are an eliminativist like Dennett, the representational illusion of it). Advancing one step further, the presence of a sufficient level of reflexive neuronal loops within one system creates continuous feedback that mysteriously causes self-consciousness, or a kind of first-person view of the world. In this manner, the mind is made a result of acting matter. However, none of this mysterious causation can be an adequate solution to the “gap.”

First, an essential shortcoming of all such non-reductive materialist/physicalist theories is that, starting out from neurobiological processes, they generate swarms of explanatory theories postulating some or another causally mysterious emergence or natural/nomological supervenience of the mental on the fundamentally physical, yet cannot even in principle account for our very real first-person and existential experiences of beauty, sadness, hope, melancholy, despair, etc. In other words, non-reductive materialism/physicalism as applied to biological life cannot even in principle yield what Michel Henry called “Life”: the fully embodied, self-determining, forward-directed, existential awareness of subjectively experienced being in a given real-world predicament.

Second, the mechanistic premises on which epiphenomenalism is based are embedded or encoded in the model used for thinking about biological life itself. Yet a simulation or simulacrum of biological life, replicating a few of its functional properties, is not an instance of biological life itself. This is why the reduction of the operations of organisms to computable functions and algorithms falls essentially short as an explanation, and no amount of computing will result in a real-world biological brain, or even a close correlate of it. Moreover, this approach to mimicking life reiterates a fundamental distinction intended to demarcate distinct regions of reality in the universe: namely, the mechanistic assumption that matter, as such, is inherently inert. This assumption is not undermined by the nowadays popular doctrine of panpsychism, which merely ontologically or explanatorily injects epiphenomenal mental facts into fundamental physical facts (Goff, Seager, and Allen-Hermanson, 2021). But injecting shadows-of-machines into machines does not make those shadows causally efficacious—any more than injecting Cartesian ghost-souls into machines would make those ghost-souls causally efficacious.

The idea of consciousness as an epiphenomenon reiterates the fundamental distinction that also underlies the mind-body problem. If the mind is a giant computer, there is no reason why it could not be uploaded, or indeed why a body would be needed at all. But this response creates a special kind of absurdity that is best visible in the work of Paul Churchland, who pushed this thought as far as it would go with an admirable radicality (Churchland, 2013).

According to Churchland, we ordinarily think of mental states as epiphenomenal manifestations. We even use a language that is naively “folky” to speak about them, but with no good reason at all. So, we might just as well eliminate the entire vocabulary that mentions emotions, beliefs, hopes, etc., in favour of a more precise description in terms of neuronal or biochemical interactions. Just as the Wallace-Darwin theory of evolution is a crude tool compared to contemporary microbiological and genetic approaches for identifying organisms and their evolutionary genealogy, or Newton’s theories are but approximate descriptions of facts or phenomena that are predicted far more precisely by general relativity and/or quantum physics, so too are our mental states best described through a precise coding of the neuronal interactions that give rise to them. We should therefore simply identify the emotion “anxiety” with an electrical pattern of firing synapses and/or chemical interactions in the brain, and eliminate any reference to the epiphenomenon.

In the version of epiphenomenalism espoused by Dennett, consciousness is an eliminable high-level epiphenomenon that arises out of the lower-level interactions of a biological—or indeed digital—system. So, as the number of (reflexive) interactions in a system increases, we end up with an eliminable epiphenomenon called consciousness, or, in still further developed systems, self-consciousness. But at the end of the day, this is just a way of talking, and ultimately a pragmatic “stance.”

Nevertheless, Dennett’s eliminativism hides another unargued presupposition: that human cognition is reducible to what Kant called “determining judgments,” i.e., logically-guided conceptual-discursive operations, closely associated with the faculty of the understanding or Verstand. In a myriad of different guises, this presupposition is the same as conceptualism—the doctrine that all cognitive content is strictly determined by our conceptual-discursive capacities, and that all cognitive operations are essentially conceptual-discursive operations. This doctrine carries with it a biased commitment to rule-based reasoning and propositional activity over the categorically distinct and essentially non-conceptual representational contents and operations of sensibility, including perception, empirical or non-empirical spatiotemporal representation, episodic and skill memory, affect or emotion, and imagination. Descartes’s profoundly skeptical distrust of sense perceptions, memory, and imagination still hovers like a ghost in the Turing machine, a ghost that cannot ever be exorcised as long as the mechanistic worldview grips us, with a profoundly impoverished metaphysical picture of the world as an inevitable and lamentable consequence.

But even if we suppose for the purposes of argument, per impossibile, that one day we might succeed in explaining consciousness via some or another version of non-reductive materialism/physicalism and epiphenomenalism, we would still not be out of the woods. To explain (away) consciousness will itself be impossibly hard; but even if we supposedly succeeded at that, we would then find ourselves confronted with the categorically harder task of explaining (away) human rationality.

Or, as Thomas Nagel concisely puts it:

- "[Human rationality] cannot be conceived of, even speculatively, as composed of countless atoms of miniature rationality. The metaphor of the mind as a computer built out of a huge number of transistor-like homunculi will not serve the purpose, because it omits the understanding of the content and grounds of thought and action essential to reason." (Nagel, 2012: p. 87)

Indeed, the mechanistic root metaphor of “the mind as a computer” is not only deeply flawed but strictly impossible, in view of the formal facts, proved in the 1930s by Alonzo Church and Kurt Gödel, that not even all proofs in classical first-order predicate logic, far less the proofs of all mathematical truths in uniquely formalized first-order Peano arithmetic, can be carried out by computers (Boolos and Jeffrey, 1989). How then could we ever seriously think that computers could exactly replicate and also improve upon all (or even any) of the cognitive, affective/emotional, or practical activities of rational human agents, as the thesis of strong artificial intelligence asserts? This ultra-mechanistic thesis is nothing but a fantasy, and indeed a reprehensible fantasy. What is required, then, is a new organicist approach to formal science, natural science, and philosophy alike, that not only non-mechanistically and irreducibly fully incorporates human consciousness and self-consciousness but also non-mechanistically and irreducibly fully incorporates human rationality.

(vii) Adherence to Conceptualism

The thesis of conceptualism holds that all mental content is necessarily and sufficiently determined by our conceptual or discursive capacities, which include our judgment-making or propositional capacities, our inferential capacities, and our logical capacities more generally. By sharp contrast, the thesis of non-conceptualism holds that at least some mental content is necessarily and sufficiently determined by our non-conceptual or sensible capacities, which include our perceptual capacities, our capacities for representing empirical or non-empirical spatiotemporal content, our capacities for episodic and skill memory, and our capacities for affect or emotion, including feelings, desires, and passions.

The upshot of the conceptualism vs. non-conceptualism contrast might seem to be that everything not determined by concepts cannot be theorized at all, but this is a mistake that overlooks the essential role of spatiotemporal representation in theorizing of all kinds. Yet, since conceptualists consistently overlook this crucial point, it is as if this core commitment of contemporary metaphysics reduces Hegel’s famous assertion, “the real is rational, the rational is real,” to the application of concepts. According to this view, all and only what can be captured in rational, discursive, and conceptual terms is real; what remains is either nothing but a brute, non-normative “given” or else something that is merely subjective and epiphenomenal.

Conceptualism remains a default position that tacitly underpins the conception of the mind and personhood that characterizes contemporary metaphysics in general and mechanistic metaphysics in particular. In its neo-Hegelian version, it is promoted nowadays most forcefully by the Analytically-trained philosophers of the so-called “Pittsburgh School,” especially Wilfrid Sellars, John McDowell, and Robert Brandom (Maher, 2012).

The wider effects of this conceptualist commitment can be discerned in two views that have had a tremendous impact throughout 20th-century Anglo-American philosophy. First, in philosophy of mind, conceptualism underwrites the idea that other minds are represented by an innate “theory-theory” possessed by every human individual; and second, in political philosophy, conceptualism underwrites the idea of persons as purely egoistic, instrumental reasoners, i.e., “rational optimizers.” The conceptualist idea of a theory-theory of the representation of other minds entails that all human beings possess an innate theory or proto-theory of how the mind works, or else that they make judgements about other minds by applying rules that structure the theory. And the application of such theoretical rules is deeply conceptual: one must already possess a conceptual structure, even if its application is currently non-manifest. The upshot of this commitment is that the human mind is conceptual all the way down. Often, this assertion is paired with its dialectical corollary: that we can safely extrude from philosophy of mind or cognitive science anything that is non-conceptual.

Notably, in political philosophy of a classical liberal or neoliberal orientation, the idea of a person as a rational optimizer presupposes a version of conceptualism which holds that

(i) persons are by nature egoistic, instrumental reasoners and

(ii) all instrumental reasoning is strictly determined by our conceptual capacities.

Decision theory is the paradigm of such a view. What is at work here is, at bottom, a politicized version of the thesis of natural mechanism, as specifically applied to human animals. Both Dawkins’s idea that we are “survival machines” and Dennett’s ideas of us as “moist robots” and of our “competence without comprehension” play crucial supporting roles. If we analyze the idea of homo economicus, we see that it reduces the human person to evolutionary natural-mechanistic terms – and that it leads towards a natural-mechanistic decision theory. In turn, these assumptions lead to a mistaken picture of thinking as such, as will be discussed in section III.

(viii) A Disregard for Anything that does not fit this Template

As I mentioned earlier, the three core commitments, combined with an adherence to conceptualism in one of its myriad forms, easily lead to a dismissal of philosophical questions that are hard to address within the mainstream contemporary Western metaphysical framework, simply by formulating and promoting certain questions that automatically push other questions to the periphery.

A striking illustration is discussed in a fine essay by Doug Mann, who notes that in The Oxford Companion to Philosophy, Ted Honderich visualizes philosophy as a core consisting of (a naturally mechanistic, physicalist, scientific) metaphysics, logic, and epistemology. Around this core, other philosophical specializations are listed like planets concentrically orbiting a central star.

The questionable and overweening two-part assumption here is

(i) that a few areas of specialization rightly dominate all the others, and

(ii) that many areas of philosophy are rightly taken from the outset to be derivative or peripheral (Mann, 2018).

An added complication is that this image of philosophy as a neatly and hierarchically ordered system of inquiry gives rise to a highly insulated view of the discipline, as if philosophy should, at its most fundamental level, concern itself with the “deep questions” located in the core. However, this intellectual commitment easily gives rise to an “Ivory Tower” or “Glass Bead Game” model of inquiry, in which increasingly abstruse and Scholastic (in the pejorative sense of the term) arguments and debates are carried out, “full of sound and fury, [but] signifying nothing.” Correspondingly, it gives rise to philosophy’s disengagement from the world and its irrelevance to the central concerns of humanity, a view that is most closely associated with the Vienna Circle’s theoretical obsession with “the icy slopes of logic,” but generalizes over all of mainstream contemporary professional academic philosophy – not always as an obsession with logic, but often enough nowadays as an obsession with social justice theory and identitarian multiculturalism, as if that somehow captured the core of all human morality and politics, and were not nothing but a moralistic mirror of the collapse of the civil rights movement in the USA, the fragmentation of the American Left, and its retreat into the ivory bunker of the professional academy, during the roughly twenty-five years following the assassination of Martin Luther King Jr. in 1968 (Rorty, 1994; Kazin, 2011: chs. 6-7).

Not coincidentally, philosophy that has attempted to address questions like mortality, the human condition, the natural world, and the meaning and/or purpose of life (if any) has also had close connections to art and literature, and very often a turbulent relationship with the professional academy. Here I am thinking of philosophers like (to a certain degree) Schelling, Schopenhauer, Kierkegaard, and Nietzsche, as well as Sartre and Wittgenstein, and more recently figures like Rorty. Philosophy cannot be confined to questions of logic, or to debates about epistemology or metaphysics, or to a certain moral and political “party line.” Indeed, as Mann argues, the metaphysical picture that underlies this conception of philosophy is itself deeply problematic, and it paves the way for the commitments outlined above.

Every question that is pushed to the forefront of individual or social cognition pushes other questions into the cognitive periphery. Taken together, the eight commitments function like a veritable philosophical-institutional phalanx, suppressing those views that cut at the emptiness underlying the thorny wreath of philosophical presuppositions that has grown around it."

(Source: Otto Paans, Cold Reason, Creative Subjectivity: From Scientism and the Mechanistic Worldview To Expressive Organicism. Borderless Philosophy 5 (2022): 161-212)