Eight Core Commitments of Mainstream Contemporary Western Metaphysics

From P2P Foundation

Otto Paans:

"Mainstream contemporary Western metaphysics can be summarized in a list of eight core commitments. Before detailing each of them individually, it is important to stress that they are linked together as a chain in which one commitment gives rise to another; or in which two commitments are reinforcing one another.

(i) Physicalist Metaphysics

Physicalist metaphysics maintains that all facts in the world, including all mental facts and social facts, are either reducible to (whether identical to or “logically supervenient” on) or else strictly dependent on, according to natural laws (“naturally supervenient” or “nomologically supervenient” on), fundamental physical facts, which in turn are naturally mechanistic. Consequently, the nature of matter is held to be material or physical. Matter, as the fundamental ground of the universe, consists of material particles and processes that can be reductively analysed into their constituent parts and that interact according to natural laws.

(ii) Universal Natural Mechanism and Causal Determinism/Indeterminism

The thesis of universal natural mechanism says that all the causal powers of everything in the natural world are fixed by what can be digitally computed on a universal deterministic or indeterministic real-world Turing machine, provided that the following three plausible “causal orderliness” and “decompositionality” assumptions are all satisfied:

(i) its causal powers are necessarily determined by the general deterministic or indeterministic causal natural laws, especially including the Conservation Laws, together with all the settled quantity-of-matter-and/or-energy facts about the past, especially including The Big Bang,

(ii) the causal powers of the real-world Turing machine are held fixed under our general causal laws of nature, and

(iii) the “digits” over which the real-world Turing machine computes constitute a complete denumerable set of spatiotemporally discrete physical objects.
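A minimal sketch may help make concrete the notion of digital computation that this thesis invokes. The following simulator of a deterministic Turing machine is purely illustrative; all names in it (`run`, `rules`, the unary-increment example) are my own assumptions for exposition, not anything drawn from Paans's text:

```python
# Illustrative sketch: a deterministic Turing machine simulator.
# The thesis of universal natural mechanism claims that all causal
# powers in nature are fixed by what a machine of this general kind
# can compute over a denumerable set of discrete "digits".

def run(rules, tape, state="q0", head=0, halt="halt", max_steps=10_000):
    """Run a deterministic TM. `rules` maps (state, symbol) to
    (new_symbol, move, new_state); `tape` maps integer positions
    to symbols, with '_' as the blank. Returns the final tape
    contents as a string with surrounding blanks stripped."""
    tape = dict(tape)  # copy, so the caller's tape is untouched
    for _ in range(max_steps):
        if state == halt:
            break
        sym = tape.get(head, "_")
        new_sym, move, state = rules[(state, sym)]
        tape[head] = new_sym
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip("_")

# Example machine: unary increment -- append one '1' to a block of '1's.
rules = {
    ("q0", "1"): ("1", "R", "q0"),    # scan right over the 1s
    ("q0", "_"): ("1", "R", "halt"),  # write one more 1, then halt
}
tape = {i: "1" for i in range(3)}     # input: "111"
print(run(rules, tape))               # -> "1111"
```

On the thesis above, every causal process in nature would in principle be expressible as some such rule table operating over discrete, denumerable "digits"; the philosophical question is whether that claim is true, not whether such machines exist.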

(iii) Scientific Naturalism

Scientific naturalism includes four basic theses:

(i) anti-mentalism and anti-supernaturalism, which reject any explanatory appeal to non-physical or non-spatiotemporal entities or causal powers,

(ii) scientism, which says that the exact sciences are the paradigms of reasoning and rationality, as regards their content and their methodology alike,

(iii) physicalist metaphysics, as described two paragraphs above, and

(iv) empiricist epistemology, which says that all knowledge and truths are a posteriori.

The direct implication of the conjunction of these four theses is that everything which does not fit the scientific image can be safely regarded as epiphenomenal, folkloristic, quaint, superstitious, a matter of taste, or else downright naïve. So, scientific naturalism holds that the nature of knowledge and reality are ultimately disclosed by pure mathematics, fundamental physics, and whatever other reducible natural sciences there actually are or may turn out to be; that this is the only way of disclosing the ultimate nature of knowledge and reality; and that even if everything in the world, including ourselves and all things human (including language, mind, and action), cannot be strictly eliminated in favour of or reduced to fundamental physical facts, nevertheless everything in the world, including ourselves and all things human, is metaphysically grounded on and causally determined by fundamental physical facts. Hence scientific naturalism is committed to providing a “value-neutral set of formulae that express the underlying structure of the natural universe”, just as the Vienna Circle envisioned.

The combination of metaphysical physicalism, natural mechanism/causal determinism/indeterminism and scientific naturalism can be regarded as the core commitments of mainstream contemporary Western metaphysics. As a theoretical package, they naturally lead to the following five peripheral commitments.

(iv) Covert Dualism about the Structure of Reality

While physicalist metaphysics attempts to classify all scientific findings into one overarching explanatory, scientific-naturalist framework—a Vienna-Circle-like “unified science” (Wilson, 1999)—adherence to the ideal of mathematizability leads to The Two Images Problem (Sellars, 1963b; Hanna and Paans, 2020). On the one hand, there is the objective, non-phenomenal, perspectiveless, mechanistic, value-neutral, impersonal, and amoral metaphysical picture of the world delivered by logic, pure mathematics, and the fundamental or “hard” natural sciences. And on the other hand, there is the subjective, phenomenal, perspectival, teleological, value-laden, person-oriented, and moral metaphysical picture of the world yielded by the conscious experience of human beings. In 1963, Wilfrid Sellars aptly and evocatively dubbed these two sharply opposed world-conceptions “the scientific image” and “the manifest image” (Sellars, 1963b).


By “science” Sellars means not only the formal sciences of logic, mathematics, and computer science, but also and above all the “hard” natural sciences of physics—including cosmology and particle physics—astronomy, and chemistry. Moreover, it is clear that Sellars’s appeal to “science” fully includes natural mechanism, and also that the predictive precision of the formal and hard natural sciences is taken as incontrovertible evidence of their reliability and truth, thereby reinforcing the scientistic mindset. Correspondingly, according to the standard construal of scientific theory-reduction, both astronomy and chemistry have a fully mathematically describable and microphysical basis in fundamental physical entities, properties, facts, and processes, and therefore they are both fully grounded in a fundamental, naturally mechanistic physics. Many or even most scientistic philosophers are reductive physicalists, who hold that all worldly facts are nothing over and above the fundamental physical facts. Sellars himself, however, by virtue of his appeal to what he calls “the logical space of reasons” that characterizes concept-driven human thinking in the manifest image (Sellars, 1963c: p. 169), is a scientistic and sophisticated non-reductive physicalist.

However, both reductive and non-reductive physicalism alike introduce a covert dualism that creeps in via the scientific and philosophical backdoor. If the world exists fundamentally as the aggregate of manifestly real objects and relations and as the ideal, mathematized descriptions of these objects and relations, then even if one rejects ontological dualism, nevertheless explanatory dualism re-emerges. This is not a dualism of mind versus matter as equally two basic but essentially distinct cosmic substances, but instead a dualism between object and conceptual description.

Explanatory dualism plagues mainstream contemporary Western metaphysics in various guises. In its platonistic version, it reiterates a two-world version of platonism, in which the Ideas are the metaphysical correlates of ideal mathematical descriptions (Dunham et al., 2014: pp. 19-25). Such a platonizing tendency can be found, for example, in the work of Alain Badiou, when he maintains that “mathematics is ontology,” re-iterating the dualism that has come to characterize metaphysics (Badiou, 2013, 2014: section 1). We can find the same tendency in the work of Quentin Meillassoux when he insists on the ideal character of mathematization as a way to break out of the “Correlationist Circle,” i.e. the thesis that thinking and being can’t be thought apart (Meillassoux, 2008; and for a concise introduction and response to Meillassoux’s thought, see Harman, 2018: pp. 123–155). And on the other side of the Channel, leaving aside longstanding rhetorical and stylistic differences between Anglo-American and French philosophy, Roger Penrose holds very similar platonistic views (Steiner, 2000). In all three cases, mathematization is seen as a fool-proof way to gain insight into the true, underlying structure of the universe, unencumbered by the deceptive senses, in the best Cartesian sense.

The Meillassoux-style rejection of Correlationism points towards the other mode in which dualism plagues contemporary metaphysics: any “two-world” version of Kant’s or Kantian transcendental idealism. According to the “two-world” reading, Kant re-iterated the platonistic distinction between Idea and reality in a Leibnizian-Wolffian mode by emphasizing in the Critique of Pure Reason how the way we perceive reality is inevitably pre-structured by the structure of our minds, thereby internalizing a distinction that was formerly thought to be a feature of the physical world itself.

Whether we agree or disagree with Kant’s categories and Anschauungsformen does not matter here; what counts is the core “two-worlder” thesis that there is a fundamental gap between phenomena as registered by the human senses and noumena that make up reality as it is, but that can only be inferred rather than proven.

The same worry can be raised in another register: a variety of mathematizing instruments allow us to predict and model physical phenomena that our cognitive apparatus cannot register in an unaided manner (Galison, 1997; Knorr-Cetina, 1999; Hossenfelder, 2018). Electron microscopes, sensors, simulations, and scientific models alike all proceed from idealized mathematical descriptions that have a certain predictive power, but that structure our access to the world in much the same way that our senses determine our mode of access to the world. In other words: an argument from technocratic efficacy cannot take any skeptical worries away. It merely re-iterates the initial worry that the “really real world” is out there, while we can only obtain incomplete descriptions of it. The core lesson of the “two-world” version of Kant’s or Kantian transcendental idealism is transposed into yet another register: it resides now in our tools, making idealized mathematical descriptions appear as incomplete copies of the real world.

In all these examples, the traumatic distinction between the noumenon or thing-in-itself and phenomenon or mere appearance operates as an insurmountable chasm.

In its platonistic version, the Ideal world is out of reach, while the real world itself is nothing but an imperfect and flawed blueprint of the unreachable Ideal world. This ideal world, it seems, can be approached through the absolute reliability and infallibility of mathematics. According to the “two-world” reading of Kant’s and Kantian transcendental idealism, the world we experience consists exclusively of phenomena that mysteriously causally emerge from a noumenal shadow-world about which we can say nothing that is empirically meaningful or “objectively valid,” for better or worse. In its scientific version, the best tools we have yield only idealized fragments of a reality that is unimaginably deeper than even our best descriptions can reach, thus always eluding us.

(v) The Mind-Body Problem

If the previous commitment of mainstream contemporary metaphysics led to the question of our access to reality, its direct correlate, familiarly called “the mind-body problem,” highlights another consequence of the core commitments. If the universe is entirely conceivable in physicalist terms, and if natural mechanism and causal determinism/indeterminism hold true, then somehow or another matter must give rise to mind if any sort of Cartesian dualism (whether in its substance dualist or property dualist versions) is to be avoided.

The problem here is twofold.

On the one hand, no matter what advances have been achieved in neuroscience, there remains an ontological and/or explanatory “gap” between

(i) physical events in the brain, or what can be inferred from the experimental measurements of these brain-events, and

(ii) a subject’s conscious experience.

An electrical current that runs from point A to B can cause (or at least be regularly correlated with) a ticklish sensation experienced by a given conscious subject; but to claim that the entire question is settled by this relationship is preposterous. We might as well induce an entire range of mental states and beliefs in a patient, simply by intervening in the electrical currents in the brain. And indeed, that has been the core proposal underlying mind-brain identity theory: for each mental state one can identify a corresponding brain state to which that mental state is identical. But how brains generate conscious and self-conscious agents is a mystery. The physicalist must restrict his range of explanations, because the metaphysical picture that underlies his reasoning does not allow for explanations outside the narrow frame of physicalism and natural mechanism.

On the other hand, in an attempt to bridge the “gap,” there is an entire literature that seeks to deal with body and mind in logically independent terms. The most poignant examples in this category may be the thought experiments in the Analytic tradition, in which brains are freely moved around between persons, cut up, or connected to computers. The entire idea that a brain can be detached and can be considered apart from the body of which it is—after all is said and done—an integral part has led to an impressive range of theories that are (from a certain point of view) absolutely logically sound and at the same time completely misleading. Thought experiments in which brains are freely moved from one skull to another, bodies are replaced and dismantled, persons are resurrected in someone else’s body, etc., belong to Gothic novels and horror movies and are worlds away from the “cold reason” that the Vienna Circle championed. Yet, such thought-experiments also betray a deep-seated mechanistic view of organisms and the nature of organismic life. The Cartesian view that animals are mere machines, and that their screams of pain are just sounds of components being put under strain, underlies this reductive materialist/physicalist line of reasoning (Descartes, 1985: part V). Attempts to avoid dualism and reductive materialism/physicalism have resulted in a third way of trying to overcome the gap: non-reductive materialism/physicalism. But this has led to yet another pernicious commitment.

(vi) Epiphenomenalism about Consciousness

Non-reductive materialism/physicalism says that consciousness necessarily depends upon the fundamentally physical world, but does not reduce to it. But this amounts to claiming that consciousness is an epiphenomenon, since all causally efficacious facts are fundamentally physical. Consciousness emerges or naturally/nomologically supervenes on the fundamentally physical world when enough neurons act together, adding a new level of complexity that is somehow reflexive: i.e., the higher levels can refer back to the underlying levels. In theory, this layer-like structure could be modelled using very powerful computers. A variation on this idea – derived from distributed computing – is connectionism, which claims that an interconnected network stores data everywhere, and the brain functions as a terminal to retrieve it or to combine key terms, a workplace of the mind (Stich, 1988). Variations on this way of approaching the mind have been developed by Stanislas Dehaene and Bernard Baars (Baars, 1996, 1997; Dehaene, 2014). Notwithstanding their fascinating findings, the existence of the “gap” is merely explained away in the connectionist approach: relegated to the sideline because it simply does not fit the explanatory, natural-mechanistic framework that one uses as the basis of the argument. We find the same reductive tendency in the work of Daniel Dennett, whose book, Consciousness Explained, was humorously nicknamed Consciousness Ignored (Dennett, 1991).

His slogan “competence without comprehension” nicely captures the core thought that underlies the epiphenomenalist approach: deep down, there is no such thing as consciousness, and therefore we can dispense with all talk about a first-person, embodied, and lived perspective as merely subjective myth-making or pretence. Again, the “gap” looms. Let’s suppose that we possess all correct scientific information about the brain, yet first-person perspectives, or collective and individual lived experiences, can still be missing: this is the so-called “zombie argument.” Now, suppose that, in addition to the brain about which we have all correct information, we add consciousness as an extra fact, somehow causally related to the brain. This can be nothing but a causally inert shadow of the brain: an epiphenomenon, just as the screams of the tortured animal were nothing but the sounds caused by mechanical operations.

Epiphenomenalism turns on one simple thought: just as enough interacting water molecules mysteriously cause the macroscopic quality “wetness,” so can many neurons acting together mysteriously cause the (if you are an eliminativist like Dennett, the representational illusion of the) macroscopic quality “consciousness.” Advancing one step further, the presence of a sufficient level of reflexive neuronal loops within one system creates continuous feedback that mysteriously causes self-consciousness, or a kind of first-person view of the world. In this manner, the mind is made a result of acting matter. However, none of this mysterious causation can be an adequate solution to the “gap.”

First, an essential shortcoming of all such non-reductive materialist/physicalist theories is that, starting out from neurobiological processes, we have swarms of explanatory theories postulating some or another causally mysterious emergence or natural/nomological supervenience of the mental on the fundamentally physical, none of which can even in principle account for our very real first-person and existential experiences of beauty, sadness, hope, melancholy, despair, etc. In other words, non-reductive materialism/physicalism as applied to biological life cannot even in principle yield what Michel Henry called “Life”: the fully embodied, self-determining, forward-directed, existential awareness of subjectively experienced being in a given real-world predicament.

Second, the mechanistic premises on which epiphenomenalism is based are embedded or encoded in the model used for thinking about biological life itself. Nevertheless, a simulation or simulacrum of biological life, replicating a few of its functional properties, is not an instance of biological life itself. This is why the reduction of the operations of organisms to computable functions and algorithms falls essentially short as an explanation, and no amount of computing will result in a real-world biological brain, or even a close correlate of it. Moreover, this approach to mimicking life reiterates a fundamental distinction intended to demarcate distinct regions of reality in the universe: namely, the mechanistic assumption that matter, as such, is inherently inert. This assumption is not undermined by the nowadays popular doctrine of panpsychism, which merely ontologically or explanatorily injects epiphenomenal mental facts into fundamental physical facts (Goff, Seager, and Allen Hermanson, 2021). But injecting shadows-of-machines into machines does not make those shadows causally efficacious—any more than injecting Cartesian ghost-souls into machines would make those ghost-souls causally efficacious.

The idea of consciousness as an epiphenomenon re-iterates the fundamental distinction that also underlies the mind-body problem. If the mind is a giant computer, there is no reason why it cannot be uploaded, or indeed why a body should be needed at all. But this response creates a special kind of absurdity that is best visible in the work of Paul Churchland, who pushed this thought as far as it would go with an admirable radicality (Churchland, 2013).

According to Churchland, we ordinarily think of mental states as epiphenomenal manifestations. We even use a language that is naively “folky” to speak about them, but with no good reason at all. So, we might just as well eliminate the entire vocabulary that mentions emotions, beliefs, hopes, etc., in favour of a more precise description in terms of neuronal or biochemical interactions. Just as the Wallace-Darwin theory of evolution is a crude tool compared to contemporary microbiological and genetic approaches for identifying organisms and their evolutionary genealogy, or Newton’s theories are but approximate descriptions of facts or phenomena that have been predicted far more precisely by general relativity and/or quantum physics, so too are our mental states best described through a precise coding of the neuronal interactions that give rise to them. We should therefore simply identify the emotion “anxiety” with an electrical pattern of firing synapses and/or chemical interactions in the brain, and eliminate any reference to the epiphenomenon.

In the version of epiphenomenalism espoused by Dennett, consciousness is an eliminable high-level epiphenomenon that arises as part of lower-level interactions of a biological—or indeed digital—system. So, once the number of (reflexive) interactions in a system increases, we will end up with an eliminable epiphenomenon called consciousness or self-consciousness in even further developed systems. But at the end of the day, this is just a way of talking, and ultimately a pragmatic “stance.”

Nevertheless, Dennett’s eliminativism hides another unargued presupposition: that human cognition is reducible to what Kant called “determining judgments,” i.e., logically-guided conceptual-discursive operations, closely associated with the faculty of the understanding or Verstand. In a myriad of different guises, this presupposition is the same as conceptualism—the doctrine that all cognitive content is strictly determined by our conceptual-discursive capacities, and that all cognitive operations are essentially conceptual-discursive operations, which carries with it a biased commitment to rule-based reasoning and propositional activity over the categorically distinct and essentially non-conceptual representational contents and operations of sensibility, where this includes perception, empirical or non-empirical spatiotemporal representation, episodic and skill memory, affect or emotion, and imagination. Descartes’s profoundly skeptical distrust of sense perceptions, memory, and imagination still hovers like a ghost in the Turing machine, a ghost that cannot ever be exorcised as long as the mechanistic worldview grips us, with a profoundly impoverished metaphysical picture of the world as an inevitable and lamentable consequence.

But even if we suppose for the purposes of argument, per impossibile, that one day we might succeed in explaining consciousness via some or another version of non-reductive materialism/physicalism and epiphenomenalism, then we are still not out of the forest. To explain (away) consciousness will itself be impossibly hard, but even supposing we succeeded at that, we would then find ourselves confronted with the categorically harder task of explaining (away) human rationality.

Or, as Thomas Nagel concisely puts it:

[Human rationality] cannot be conceived of, even speculatively, as composed of countless atoms of miniature rationality. The metaphor of the mind as a computer built out of a huge number of transistor-like homunculi will not serve the purpose, because it omits the understanding of the content and grounds of thought and action essential to reason. (Nagel, 2012: p. 87)

Indeed, the mechanistic root metaphor of “the mind as a computer” is not only deeply flawed but strictly impossible, in view of the formal facts, proved in the 1930s by Alonzo Church and Kurt Gödel, that not even all proofs in classical first-order predicate logic, far less the proofs of all mathematical truths in uniquely formalized first-order Peano arithmetic, can be carried out by computers (Boolos and Jeffrey, 1989). How then could we ever seriously think that computers could exactly replicate and also improve upon all (or even any) of the cognitive, affective/emotional, or practical activities of rational human agents, as the thesis of strong artificial intelligence asserts? This ultra-mechanistic thesis is nothing but a fantasy, and indeed a reprehensible fantasy. What is required, then, is a new organicist approach to formal science, natural science, and philosophy alike, that not only non-mechanistically and irreducibly fully incorporates human consciousness and self-consciousness but also non-mechanistically and irreducibly fully incorporates human rationality.
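For reference, the two formal results appealed to here can be stated in their standard textbook forms (following, e.g., Boolos and Jeffrey, 1989); the notation below is a conventional formulation, not the author's own:

```latex
% Church's theorem (1936): the set of valid sentences of classical
% first-order predicate logic is not decidable by any Turing machine.
\textbf{Church's Theorem.}\quad
\{\varphi : \;\models \varphi\}\ \text{is not recursive (decidable).}

% Goedel's first incompleteness theorem (1931): no consistent,
% recursively axiomatized extension of Peano arithmetic proves
% every arithmetical truth.
\textbf{G\"odel's First Incompleteness Theorem.}\quad
\text{If } T \supseteq \mathsf{PA}\ \text{is consistent and recursively
axiomatized, then there is a sentence } G_T\ \text{such that }
T \nvdash G_T\ \text{and}\ T \nvdash \neg G_T.
```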

(vii) Adherence to Conceptualism

The thesis of conceptualism holds that all mental content is necessarily and sufficiently determined by our conceptual or discursive capacities, which include our judgment-making or propositional capacities, our inferential capacities, and our logical capacities more generally. By sharp contrast, the thesis of non-conceptualism holds that at least some mental content is necessarily and sufficiently determined by our non-conceptual or sensible capacities, which include our perceptual capacities, our capacities for representing empirical or non-empirical spatiotemporal content, our capacities for episodic and skill memory, and our capacities for affect or emotion, including feelings, desires, and passions.

The upshot of the conceptualism vs. non-conceptualism contrast might seem to be that everything not determined by concepts cannot be theorized at all, but this is a mistake that overlooks the essential role of spatiotemporal representation in theorizing of all kinds. But since conceptualists consistently overlook this crucial point, it is as if this core commitment of contemporary metaphysics reduces Hegel’s famous assertion, “the real is rational, the rational is real,” to the application of concepts. According to this view, all and only what can be captured in rational, discursive, and conceptual terms is real; what remains is either nothing but a brute, non-normative “given” or else something that’s merely subjective and epiphenomenal.

Conceptualism remains a default position that tacitly underpins the conception of the mind and personhood that characterizes contemporary metaphysics in general and mechanistic metaphysics in particular. In its neo-Hegelian version, it is promoted nowadays most forcefully by the Analytically-trained philosophers of the so-called “Pittsburgh School,” especially Wilfrid Sellars, John McDowell, and Robert Brandom (Maher, 2012).

The wider effects of this conceptualist commitment can be discerned in two views that have had a tremendous impact throughout 20th-century Anglo-American philosophy. First, in philosophy of mind, conceptualism underwrites the idea that other minds are represented by an innate “theory-theory” possessed by every human individual; and second, in political philosophy, conceptualism underwrites the idea of persons as purely egoistic, instrumental reasoners, i.e., “rational optimizers.” The conceptualist idea of a theory-theory of the representation of other minds entails that all human beings possess an innate theory or proto-theory of how the mind works, or else that they make judgements about other minds by applying rules that structure the theory. And the application of such theoretical rules is deeply conceptual. One must already possess a conceptual structure, even if its application is currently non-manifest. The upshot of this commitment is that the human mind is conceptual all the way down. Often, this assertion is paired with its dialectical corollary: that we can safely extrude from philosophy of mind or cognitive science anything that is non-conceptual.

Notably, in political philosophy of a classical liberal or neoliberal orientation, the idea of a person as a rational optimizer presupposes a version of conceptualism which holds that

(i) persons are by nature egoistic, instrumental reasoners and

(ii) all instrumental reasoning is strictly determined by our conceptual capacities.

Decision theory is the paradigm of such a view. What is at work here is at bottom a politicized version of the thesis of natural mechanism, as specifically applied to human animals. Both Dawkins’s idea that we are “survival machines” and Dennett’s ideas of us as “moist robots” and of our “competence without comprehension” play crucial supporting roles. If we analyze the idea of the homo economicus, we see that it is a reduction of a human person in evolutionary natural-mechanistic terms – and that it leads towards a natural mechanist decision theory. In turn, these assumptions lead to a mistaken picture of thinking as such, as will be discussed in section III.
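The decision-theoretic picture of the person as a “rational optimizer” can be sketched in a few lines. This is a generic expected-utility toy of my own construction; all names in it (`expected_utility`, `choose`, the umbrella example) are illustrative assumptions, not anything from the text:

```python
# Illustrative sketch: the homo economicus as an expected-utility
# maximizer. Each act has a list of (probability, utility) outcomes;
# the "rational optimizer" simply picks the act with the highest
# expected utility.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one act."""
    return sum(p * u for p, u in outcomes)

def choose(acts):
    """acts: dict mapping act names to their (p, u) outcome lists.
    Returns the name of the act with maximal expected utility."""
    return max(acts, key=lambda a: expected_utility(acts[a]))

acts = {
    "take_umbrella":  [(0.3, 5),   (0.7, 4)],  # rain / no rain
    "leave_umbrella": [(0.3, -10), (0.7, 6)],
}
print(choose(acts))  # -> take_umbrella
```

On the conceptualist picture criticized here, all practical reasoning is taken to have essentially this form: rule-governed maximization over conceptually specified outcomes, with nothing left over for non-conceptual, embodied, or affective determinants of action.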

(viii) A Disregard for Anything that does not fit this Template

As I mentioned earlier, the combination of the three core commitments with an adherence to conceptualism in one of its myriad forms easily leads to a dismissal of philosophical questions that are hard to address within the mainstream contemporary Western metaphysical framework, simply by formulating and promoting certain questions that automatically push other questions to the periphery.

A striking illustration is discussed in a fine essay by Doug Mann, who notes that in The Oxford Companion to Philosophy, Ted Honderich visualizes philosophy as a core consisting of (a naturally mechanistic, physicalist, scientific) metaphysics, logic, and epistemology. Around this core, other philosophical specializations are listed like planets concentrically orbiting a central star.

The questionable and overweening two-part assumption here is

(i) that a few areas of specialization rightly dominate all the others, and

(ii) that many areas of philosophy are rightly taken from the outset to be derivative or peripheral (Mann, 2018).

An added complication is that this image of philosophy as a neatly and hierarchically ordered system of inquiry gives rise to a highly insulated view of the discipline, as if philosophy should, at its most fundamental level, concern itself with the “deep questions” located in the core. However, this intellectual commitment easily gives rise to an “Ivory Tower” or “Glass Bead Game” model of inquiry, in which increasingly abstruse and Scholastic (in the pejorative sense of the term) arguments and debates are carried out, “full of sound and fury, [but] signifying nothing.” Correspondingly, it gives rise to philosophy’s disengagement from the world and its irrelevance to the central concerns of humanity, a view that is most closely associated with the Vienna Circle’s theoretical obsession with “the icy slopes of logic,” but generalizes over all of mainstream contemporary professional academic philosophy, not always as an obsession with logic, but often enough nowadays as an obsession with social justice theory and identitarian multiculturalism, as if that somehow captured the core of all human morality and politics, and were not merely a moralistic mirror of the collapse of the civil rights movement in the USA, the fragmentation of the American Left, and its retreat into the ivory bunker of the professional academy, during the roughly twenty-five years following the assassination of Martin Luther King Jr. in 1968 (Rorty, 1994; Kazin, 2011: chs. 6-7).

Not coincidentally, philosophy that has attempted to address questions like mortality, the human condition, the natural world, and the meaning and/or purpose of life (if any) has also had close connections to art and literature, and very often a turbulent relationship with the professional academy. Here I am thinking of philosophers like (to a certain degree) Schelling, Schopenhauer, Kierkegaard, and Nietzsche, as well as Sartre and Wittgenstein, and more recently figures like Rorty. Philosophy cannot be confined to questions of logic, or to debates about epistemology or metaphysics, or to a certain moral and political “party line.” Indeed, as Mann argues, the metaphysical picture that underlies this conception of philosophy is itself deeply problematic, and it paves the way for the commitments outlined above.

Every question that is pushed to the forefront of individual or social cognition pushes other questions into the cognitive periphery. Taken together, the eight commitments function like a veritable philosophical-institutional phalanx, suppressing those views that cut at the emptiness underlying the thorny wreath of philosophical presuppositions that has grown around it."

(Source: Otto Paans, Reason, Subjectivity, Organicism. Borderless Philosophy 5 (2022): 161-212)