Towards a History and Pre-History of Knowledge

From P2P Foundation

* Article: Andreas Keller, Towards a History and Pre-History of Knowledge. Borderless Philosophy 4 (2021): 124-168


Contextual Quote

"The system observing the world is no longer the single human brain that is a result of biological evolution but instead, it is our culture as a whole, with subsystems like science and scholarship as embedded parts of it. This larger entity is a dissipative structure in its own right and it is the structure in which our knowledge arises. The entity that observes the world is not a brain with ears and eyes, it is no longer the proverbial individual philosopher sitting in his proverbial armchair, but a system consisting of people (including philosophers in their armchairs), libraries, computers, classrooms, conferences, the internet, telescopes, microscopes, particle accelerators, journals, publishers etc. This entity does not have a fixed structure and has been restructured and changed permanently in the course of history. It is producing knowledge and it is constantly restructuring itself as needed. The development of “higher” systems of observation and knowledge generation has already been taken out of the hands of biological evolution. The interface of the observing system with reality is no longer limited by our biology and has been widened by all kinds of instruments."

- Andreas Keller [1]


Andreas Keller:

"Active information or “knowledge” in the wider sense always arises inside dissipative structures, be it organisms, ecosystems, people, groups and organizations or institutions of people, civilization as a whole or parts of it, or the combined ecosphere/civilization.

This thermodynamic aspect of epistemology is missing from classical epistemology.

All processes in which active information/knowledge arises are processes within dissipative structures. The knowledge in our libraries and in our electronic storage devices, on the internet and inside our “clouds” develops in dissipative structures of our human bodies and brains, our cultures, our science, our economy, society, and civilization. Processes of knowledge-generation require physical systems. The formalisms (formal theories, algorithms) of logic, mathematics, and classical epistemology are implemented in physical systems and can arise only in physical systems, and these physical systems are always dissipative structures or, at some level, part of dissipative structures."



Andreas Keller:

"This essay could be placed in the spectrum of what Robert Hanna and Otto Paans in (Hanna and Paans 2020) call new wave organicism.

In the graphics shown on page 35 of that article, my essay covers the innermost two layers,

(i) “asymmetric non-equilibrium matter-energy flows” (I would put everything up to the level of Kauffman’s autocatalytic webs of reactions here),

(ii) “organismic life” (from the RNA-world to complex organisms like ourselves), and some parts of the third layer

(iii) “minded animality” (including the rational human mind).

I have made no attempt here to advance to the outer layer that Hanna and Paans call “free agency.” The concept of “active information” I have discussed here in connection with knowledge and belief can definitely be linked to the realm of action and will. However, attempting to investigate how that higher level may be realized (one could also use the terms “implemented,” “realized,” or “emulated” here, all deriving from computer science) in terms of the lower ones—or whether this higher level exists or is even possible—is outside the scope of the present essay."


Informational Complexity of the Innate Knowledge of Human Beings

Andreas Keller:

"The following is a very crude estimate of the informational complexity of the innate knowledge of human beings. To be more exact, it is a crude estimate of an upper limit to the information content of this knowledge. It might be off by an order of magnitude or so. So, this is a “back of an envelope” or “back of a napkin” kind of calculation. It just gives a direction into which one would have to go to try to get a more accurate calculation. To get a more exact number, the parameters put in here as mere estimates must be determined with more precision. This can be done by doing some research of the relevant literature and, where the required information has not yet been determined by science, by doing additional genetic and neurological research.

According to the Human Proteome Project, roughly 67% of human genes are expressed in the brain. Most of these genes are also expressed in other parts of the body, so they probably form part of the general biochemical machinery of all cells. However, 1,223 genes have been found that have an elevated level of expression in the brain. In one way or the other, the brain-specific structures must be encoded in these genes, or mainly in these genes, or a subset of them. Some of these genes are probably not directly involved in determining the distribution and connectivity of neurons and might form part of the brain-specific cellular infrastructure underlying those networks, but in one way or the other, the innate knowledge must be encoded in these genes, whose combined activity somehow leads to the development of the brain. One possible source of error is that the study might not have looked at genes active in the fetus or in the small child’s developing brain (I do not know), so it is possible there are some more genes involved here; but for the sake of this estimate, I assume that the innate information is somehow represented in these genes.

There are roughly 20,000 genes in the human genome, so the 1,223 brain-specific genes make up about 6.115% of our genes (by number). Probably, we share many of these with primates and other animals, like rodents, so the really human-specific part of the brain-specific genes, or the human-specific part of their sequences, might be much smaller. However, I am interested here only in an order-of-magnitude result for an upper limit.

I have no information about the total length of these brain-specific genes, hence I will assume that they have average length. According to some sources, the human genome has 3,095,693,981 base pairs.

According to the same source, only roughly 2% of this is coding DNA. There is also some non-coding DNA that has a regulating function, or is involved in the production of some types of RNA, but let us assume that the functional part of the genome is perhaps 3%. That makes something in the order of 92 to 93 million base pairs with a function (probably less), or 30 to 31 million triplets (remember that base pairs work in groups of three, each group coding for an amino acid or acting as a start- or stop-signal for translation). If the brain genes have average length, 6.115% of this would be brain-specific. This makes for something like 1.89 million triplets.

The different triplets code for 20 different amino acids. There are also start- and stop-signals. The exact information content of a triplet would depend on how often it appears, and the triplets are definitely not equally distributed, but let us assume that each of them codes for one out of 20 equally likely possibilities. Calculating the exact information content of a triplet would require much more sophisticated reasoning and specific information about the frequency distribution of triplets, and hence of amino acids, but for our purposes this is enough. The information content of a triplet can then be estimated as the binary logarithm of 20. You need 4 bits to encode 16 possibilities and 5 bits to encode 32 possibilities, so this should be between 4 and 5 bits; a more exact value for the binary logarithm of 20 is 4.322. Multiplying this by the number of triplets gives 8,200,549 bits, or 1,025,069 bytes: roughly a megabyte, comparable to the information content of a typical book. These genes might contain a lot of redundancy, in the sense that it might be possible to compress a complete description of these sequences (i.e., to “zip” them) to a smaller amount of information. Hence, the information content of the brain-coding genes that determine the structure of the brain is on the order of a megabyte, and possibly much smaller.
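Keller's back-of-the-envelope estimate can be reproduced step by step. The sketch below uses only the figures quoted in the text (genome size, gene counts, the 3% functional fraction, 20 equiprobable amino acids per triplet); these are the author's rough assumptions, not measured values, and the result is an order-of-magnitude upper bound.

```python
# Reproduction of the back-of-the-envelope estimate in the text.
# All constants are the figures quoted there, not precise measurements.
import math

GENOME_BASE_PAIRS = 3_095_693_981  # total human genome size (quoted source)
FUNCTIONAL_FRACTION = 0.03         # ~2% coding DNA plus regulatory DNA, rounded up
TOTAL_GENES = 20_000               # approximate human gene count
BRAIN_ELEVATED_GENES = 1_223       # genes with elevated expression in the brain

functional_bp = GENOME_BASE_PAIRS * FUNCTIONAL_FRACTION   # ~92-93 million base pairs
triplets = functional_bp / 3                              # codons: 3 base pairs each
brain_fraction = BRAIN_ELEVATED_GENES / TOTAL_GENES       # = 0.06115
brain_triplets = triplets * brain_fraction                # ~1.89 million triplets

# Assume each triplet picks one of 20 equally likely amino acids.
bits_per_triplet = math.log2(20)        # ~4.322 bits
total_bits = brain_triplets * bits_per_triplet
total_bytes = total_bits / 8            # roughly one megabyte

print(f"brain-specific fraction: {brain_fraction:.5f}")
print(f"brain-specific triplets: {brain_triplets:,.0f}")
print(f"information content: {total_bits:,.0f} bits = {total_bytes:,.0f} bytes")
```

Since compression (redundancy in the sequences) and the shared primate/rodent portion are ignored, the true human-specific figure would be smaller still.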

The structure of the brain is somehow generated out of the information contained in these genes. This is probably an overestimate, because many of these genes might not be involved in encoding the connective pattern of the neurons, but instead, for example, in the glial immune system of the brain or other brain-specific, “non-neuronal” structures, and many of them might be the same or nearly the same in apes, monkeys, and rodents, so the human-specific part could be much smaller still.

Therefore, in comparison to the complexity of our civilization, the innate part of our knowledge is tiny. It is probably much smaller than that of some specialized animals. What probably happened on the way from specialized animals to human beings was that, on the one hand, specialized innate knowledge disappeared while, on the other, non-specialized but “plastic” or “reworkable” parts of the brain were expanded.

The cognitive development of humans, therefore, starts with the innate core, but is perhaps mostly guided by the cultural environment."