[p2p-research] is the mind a computer

Paul D. Fernhout pdfernhout at kurtz-fernhout.com
Mon Nov 9 01:59:02 CET 2009

J. Andrew Rogers wrote:
> On Sat, Nov 7, 2009 at 10:37 PM, Michel Bauwens <michelsub2004 at gmail.com> wrote:
>> Amongst the few extraordinary claims that require extraordinary evidence:
>> [snip]
>> 3) that there is nothing scientific about Howard Gardner's hypothesis, it is
>> just new age bunk
> This was battled out in the cognitive sciences a long time ago, and
> multiple intelligence theory (MIT) lost because there are a number of
> pretty obvious weaknesses to it.  Popular flame war material back in
> my CogSci days. It ultimately was discarded because it failed any kind
> of rigorous scrutiny to support it and the evidence for it could be
> explained much more simply in single intelligence terms.  It was very
> much an "epicycle" model of intelligence from back in the days when we
> knew less about it.
> When mathematics got around to formalizing the concept of
> intelligence, MIT was already considered a dead hypothesis. The
> mathematics just put the final nail in the coffin.

Please cite your evidence.

Are you referring to criticisms like those also on the page I cited?

As I understand it, the brain is easily thought of physically as a layered 
collection of one hundred or more specialized subsystems for various types 
of information processing (although they interact in various ways, whether 
by direct neural connections or by more diffuse hormonal signals, with 
bodily states also shaping the character of emotional thought).

There may be some functions that are more generalized than others, but it is 
very straightforward to talk in terms of different brains performing better 
or worse in particular subsystems.

There is also the extent to which good nutrition (and enough sunlight for 
Vitamin D) and a healthy and interesting environment (especially early 
social and natural interactions) may in general lead to brains that work 
better (like turning a volume control up or down on a stereo that also has 
a mixer with lots of separate channel controls).
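The volume-plus-mixer analogy can be sketched as a toy calculation (the 
subsystem names and all numbers here are invented purely for illustration, 
not drawn from any actual cognitive model):

```python
# Toy illustration of the "volume control plus mixer" analogy: a single
# general factor scales every channel, while per-subsystem strengths
# vary independently.  All values are made up for the sketch.

SUBSYSTEM_STRENGTH = {          # hypothetical per-module "mixer" settings
    "linguistic": 0.9,
    "spatial": 0.6,
    "musical": 0.8,
}

def task_performance(general_factor, subsystem):
    """Observed performance = general 'volume' * specific 'channel'."""
    return general_factor * SUBSYSTEM_STRENGTH[subsystem]

# A higher general factor (better nutrition, richer environment, ...)
# lifts every task, but the relative profile across subsystems remains:
well_nourished = {s: task_performance(1.0, s) for s in SUBSYSTEM_STRENGTH}
deprived = {s: task_performance(0.7, s) for s in SUBSYSTEM_STRENGTH}
```

The point of the sketch is that a single "general" dial and many "specific" 
channels are not mutually exclusive explanations; both shape the observed 
scores.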

And it is not always clear there is one best form of intelligence, since 
there are almost always tradeoffs. For example, in relation to memory:
"Those with the genetic quirk performed 20% worse on all of the tests. ... 
On the bright side, the gene appears to limit damage to memory when people 
contract certain memory related diseases. About 30-35% of the population may 
have this quirk."

But the fact that advocates of a single general intelligence factor make 
some good points does not mean these other issues go away, issues that are 
very visible in the evolution of the brain as a collection of somewhat 
independent systems.

The other issue is that, in practice, the theory of Multiple Intelligences 
has proved very useful in improving education. So when you say something is 
"dead", you make a sweeping claim but leave out the nuances. As from 
that Wikipedia page: "Gardner argues that by calling linguistic and 
logical-mathematical abilities intelligences, but not artistic, musical, 
athletic, etc. abilities, the former are needlessly aggrandized. Many 
critics balk at this widening of the definition, saying that it ignores "the 
connotation of intelligence...[which] has always connoted the kind of 
thinking skills that makes one successful in school."[10]"

So, there are deep social and political issues to some of this as well.

Your point, intended or not, seems to come from a mainstream view in that 
sense. Oftentimes the mainstream is right; sometimes it is wrong.

Another writer on a related topic is Jerry Fodor (with commentary from others):
   "Modularity of mind"
According to Fodor, a module falls somewhere between the behaviorist and 
cognitivist views of lower-level processes.
   Behaviorists tried to replace the mind with reflexes which Fodor 
describes as encapsulated (cognitively impenetrable or unaffected by other 
cognitive domains) and non-inferential (straight pathways with no 
information added). Low level processes are unlike reflexes in that they are 
inferential. This can be demonstrated by poverty of the stimulus arguments 
in which the proximate stimulus, that which is initially received by the 
brain (such as the 2D image received by the retina), cannot account for the 
resulting output (for example, our 3D perception of the world), thus 
necessitating some form of computation.
   In contrast, cognitivists saw lower level processes as continuous with 
higher level processes, being inferential and cognitively penetrable 
(influenced by other cognitive domains, such as beliefs). The latter has 
been shown to be untrue in some cases, such as with many visual illusions 
(ex. Müller-Lyer illusion), which can persist despite a person’s awareness 
of their existence. This is taken to indicate that other domains, including 
one’s beliefs, cannot influence such processes.
   Fodor arrives at the conclusion that such processes are inferential like 
higher order processes and encapsulated in the same sense as reflexes.
   Although he argued for the modularity of 'lower level' cognitive 
processes in Modularity of Mind he also argued that higher level cognitive 
processes are not modular since they have dissimilar properties. The Mind 
Doesn't Work That Way, a reaction to Steven Pinker's How the Mind Works, is 
devoted to this subject.
   Fodor (1983) states that modular systems must - at least to "some 
interesting extent" - fulfill certain properties:
    1. Domain specificity: modules only operate on certain kinds of inputs – 
they are specialised
    2. Informational encapsulation: modules need not refer to other 
psychological systems in order to operate
    3. Obligatory firing: modules process in a mandatory manner
    4. Fast speed: probably due to the fact that they are encapsulated 
(thereby needing only to consult a restricted database) and mandatory (time 
need not be wasted in determining whether or not to process incoming input)
    5. Shallow outputs: the output of modules is very simple
    6. Limited accessibility
    7. Characteristic ontogeny: there is a regularity of development
    8. Fixed neural architecture.
Pylyshyn (1999) has argued that while these properties tend to occur with 
modules, one stands out as being the real signature of a module; that is the 
encapsulation of the processes inside the module from both cognitive 
influence and from cognitive access.[2] This is referred to as the 
"cognitive impenetrability" of the module.

The referenced book:
   "The Mind Doesn't Work That Way" by Jerry Fodor
"Criticism from within always stings more sharply. When one of computational 
psychology's peppiest cheerleaders questions the enthusiasm of his fellows, 
we can expect some juicy, if civil, dialogue ahead. Jerry Fodor does just 
this in The Mind Doesn't Work That Way: The Scope and Limits of 
Computational Psychology. Named to answer Steven Pinker's How the Mind 
Works, this short, focused, and heavy book calls Pinker and others to task 
for claiming too much for CP. While acknowledging that it's "by far the best 
theory of cognition that we've got," he expresses concern about the 
popularizations--and privately held beliefs--that imply that the strongly 
nativist computational theory explains, or will explain, our conscious and 
intentional being in toto. Using scholarly, diplomatic, and sometimes 
hysterically funny language, Fodor demolishes the notion that CP has 
anything to say about large-scale or global thinking, and casts doubt on its 
future prospects. Proceeding more scientifically than his scientist 
colleagues, he proposes that a better theory of mind is looming, and will 
encompass CP much as relativity encompassed classical mechanics. Encouraging 
debate on the fundamentals of this increasingly popular theory, especially 
within the ranks of its adherents, can only be good for the theory and for 
cognitive science itself. The Mind Doesn't Work That Way follows in the 
great philosophical tradition of clobbering ideas in order to make them 
stronger, and provides a great mental workout for the reader. --Rob Lightner"

The other review there:
How does the mind really work? We don't yet know, but in his previous 
writings, prolific Rutgers philosopher Fodor (Modularity of Mind; The Elm 
and the Expert) helped provide cognitive science with what he calls a 
Computational Theory of Mind (CTM). (The theory in brief: the mind works 
like a certain kind of computer, with built-in modes of operation; some of 
these modes are involved in language, as predicted by Noam Chomsky.) Fodor 
still supports such a theory of mind, but other scientists, he thinks, have 
misused the model: popular writers and influential thinkers like Steven 
Pinker (How the Mind Works) have hooked up CTM to sociobiology to give an 
inaccurate picture of thoughts and feelings -- one that, Fodor argues, 
relies on wrong generalizations, unreliable assumptions and an unsupportable 
confidence that we already have the whole picture. This picture is called 
the New Synthesis, and Fodor writes to refute it. He also wishes to show, by 
contrast, what remains useful about computational models of biologically 
based mental processes. One of Fodor's arguments distinguishes between local 
and global cognition. Local cognition -- like understanding the word "cat" 
-- can be explained by CTM, studied by linguists and traced to particular 
parts of the brain. Global cognition -- like deciding to acquire a cat -- 
generally can't and may never be explained. The New Synthesis, Fodor says, 
has confused the two, and he sets out to untangle them. His prose is 
informal, exact and aimed at fairly serious nonspecialists: those who don't 
know who Chomsky or Alan Turing are, or what a syntactic structure is, 
aren't the audience for this book. Those who do know may read Fodor's 
case in one sitting, and with intense interest -- whether or not they find 
his logic persuasive.

I'm sure Michel would like that book. :-)

But you might like it too. :-)

From one of the comments, something that reconciles aspects of what both of 
you say:
Fodor also shreds what he calls "neo-Darwinist anti-intellectualism," the 
view (he is quoting from Patricia Churchland) that "looked at from an 
evolutionary point of view, the principal function of nervous systems is to 
get the body parts where they should be in order that the organism may 
survive...Truth, whatever that is, definitely takes the hindmost."
   Fodor counters that for humans "a cognitive system that is specialized 
for the fixation of true beliefs interacts with a conative system that is 
specialized to figure out how to get what one wants from the world that the 
beliefs are true of..." or, in simple English, humans engage in "rational 
actions predicated on true beliefs."
   We are designed to pursue both truth and our own well-being -- there is 
no contradiction here. Not action instead of truth, but action based on truth.

Or, you might prefer this rebuttal to it by Steven Pinker: :-)

But, deeper than all that, I think you are incorrectly discounting the 
social nature of intelligence, and how p2p cooperation can affect it.

So, maybe a better question than "Is the Mind a Computer?" is, "Is the Mind 
a Society?" And, I don't mean Minsky's Society of Mind, I mean, that our 
mind or minds emerge from our social interactions, so in that sense, our 
mind is our society to some extent, like "the network is the computer" to 
borrow Sun's marketing slogan. :-)
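That "mind as society" point can be given a concrete p2p flavour with a toy 
gossip-averaging simulation (a standard distributed-computing technique, 
used here purely as an analogy; the opinions and round count are invented):

```python
# Toy "gossip averaging": no single peer holds the group answer, yet
# repeated pairwise exchanges make a shared estimate emerge across the
# whole network -- an analogy for knowledge emerging from interaction.

import random

random.seed(0)
opinions = [1.0, 3.0, 5.0, 7.0]      # each peer's private estimate

for _ in range(200):                 # repeated pairwise interactions
    i, j = random.sample(range(len(opinions)), 2)
    avg = (opinions[i] + opinions[j]) / 2
    opinions[i] = opinions[j] = avg  # both peers move to their mean

# Every peer converges toward the group mean (4.0), a value that
# belonged to no individual node at the start.
print(opinions)
```

Each exchange preserves the sum of opinions while shrinking their spread, 
so the network as a whole "knows" something none of its parts started with.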

--Paul Fernhout
