[p2p-research] is the mind a computer

Paul D. Fernhout pdfernhout at kurtz-fernhout.com
Sat Nov 7 20:38:38 CET 2009

These are all fuzzy things about mind, intelligence, computers, awareness, 
emotions, and so on, and depend somewhat on definitions and perspective.

But I'm OK with the notion that the mind is not a computer, if the universe 
is. :-)

   "Ed Fredkin and the universe as a computer"

See also:

But as to your "bet" to Andrew on posting something to the Edge on the mind 
being a computer and seeing if there was controversy, he might just say, 
what does controversy prove? It's possible to have a controversy while one 
side is wrong or ignorant (example, a controversy over whether the Earth is 
the center of the Solar system 500 years ago). So, controversy does not 
prove anything in that sense.

I guess a lot of it depends on one's assumptions about metaphysics. For 
example, if consciousness were a fundamental property of our universe, like 
mass/energy, then what does that mean about electronic computers?

Deep questions.

Fun to explore even if there may be no definite answers within this 
universe. So, how can one explore them in a p2p way? :-)

There probably are commons about this. Wikipedia has a lot of stuff:
And so on.

Anyway, I don't think one can make progress on these issues without looking 
at a lot of assumptions about reality and mind and so on.

And one has to ask, at what point is there any difference between a 
"simulation" of feelings and the real thing? Those of us in the computer 
world who work with virtual machines (like a simulation written in Java for 
the JVM running under GNU/Linux in VirtualBox on a Mac) may just be 
getting very used to the idea of levels of simulations. :-)
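The layering can be made concrete with a toy sketch (a hypothetical illustration only -- any stack of VMs, like the JVM-on-Linux-on-a-Mac example above, has the same shape of one level evaluating the level below it):

```python
# A toy illustration of "levels of simulation": each level is just a Python
# interpreter evaluating source text handed down by the level above it.

level2_program = "6 * 7"                       # the innermost "world"
level1_program = f"eval({level2_program!r})"   # a simulator running level 2
level0_result = eval(level1_program)           # our level running level 1

print(level0_result)  # 42 -- the result is the same seen from any level
```

From inside level 2 there is no way to tell how many eval() layers sit above it, which is the point. :-)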

I think it more likely the problem will soon go the other way. The big issue 
will be, do our AIs and robots have human rights? :-)
The ethics of artificial intelligence addresses a number of moral and legal 
issues which arise if researchers are able to build machines with 
intellectual capacities that rival human beings. It considers the unexpected 
consequences, dangers and potential misuse of the technology. It also 
considers the ways in which artificial intelligence may be used to benefit 
humanity. These concerns are similar to those that arise for any 
sufficiently powerful technology and (for these issues) the ethics of 
artificial intelligence is a part of a larger discussion of the ethics of 
technology. The issue of robot rights is unique to artificial intelligence. 
AI may have the ability to one day create sentient creatures—that is, 
creatures which feel pleasure and pain—which may therefore deserve the same 
rights as human beings.

Are we building sentient slaves when we build AIs?

Anyway, I'm comfortable thinking I might be a simulation. What difference 
does it really make anyway? Except for metaphysical issues of control, and 
maybe moving across level boundaries in various ways?

The mystery is in the nature of consciousness (both self-reflective and 
self/other distinction), not what artifact it is embodied in, IMHO. But 
these are all things that, as far as we know, nobody on this plane of 
existence really understands for sure; and if they did, we would probably not believe them 
anyway. :-)

My wife likes to say, you have to live in your own head. I think we should 
build the best here and now we can, with an eye to the future, of course. 
:-) And I think we should be nicer to our machines. :-)

We may be someone else's machines -- some little agent in somebody else's 
social simulation of the history of the emergence of p2p. :-) Just as the 
watchers may also be in someone's simulation. And so on. Levels and levels. 
And then even things beyond that. Infinity is really big. :-)

But on a practical basis, there may be no difference right now if you 
believe this is the only reality and only people with organic brains are 
intelligent and computers are not. Either way, the world probably needs to 
be dealt with in just about the same way. And we make choices about kindness 
and compassion and balance and so on.

But, even then, what of whales and dolphins? Or even just dogs? Dogs seem to 
dream, for example. And they have feelings, it seems.

So, I don't think we will resolve these issues here.

What is the significance for p2p?

It might be in issues like, can a smart computer be a peer?

Is a collective an emergent consciousness?

And even more social things -- is a person with dementia who has posted a 
lot to the web still an intact person, taken across the commons and the 
physical body? Does the composite entity still have a right to operate a 
checkbook? :-)

All science-fiction questions I've seen explored... And they are fun to explore.

But I doubt they will get us that far on this list. Then again, I don't know.

--Paul Fernhout

Michel Bauwens wrote:
> J. Andrew claims that it is an undisputed scientific and mathematical fact
> that the human brain is a computer, and that critique of it is borne of
> ignorance.
> For those who may be tempted to believe that false claim, here is a
> sophisticated treatment, too long to reproduce in full, see
> http://www.ime.usp.br/~vwsetzer/AI.html
> *Valdemar W. Setzer*
> Dept. of Computer Science, University of São Paulo, Brazil
> vwsetzer at usp.br - www.ime.usp.br/~vwsetzer
> some very short excerpts to give you a flavour:
> I don't know if the author is fully correct, but he is a computer scientist,
> and his dissent therefore proves that there is no uncontroversial
> acceptance of the thesis that was expounded as absolute proven truth,
> Michel
> **
> Ray Kurzweil is one of the exponents of the idea that humans are machines,
> and thus machines will be able to do whatever humans do. His best-selling
> book [1999] is full of prophecies, based upon the following statement:
> "The human brain has about 100 billion neurons. With an estimated average of
> one thousand connections between each neuron and its neighbors, we have
> about 100 trillion connections, each capable of a simultaneous calculation.
> That's rather massive parallel processing, and one key to the strength of
> human thinking. A profound weakness, however, is the excruciatingly slow
> speed of neural circuitry, only 200 calculations per second." [p. 103]
> This statement is absolutely unjustified. He does not say what kind of
> calculations are done by each neuron connection, and as we have pointed out
> before, he cannot even say how data are stored in the brain. Based upon the
> number above, he multiplies it by the 100x10^12 connections existing in the
> brain, coming to the conclusion that we are able to perform 20x10^15
> "calculations" per second. He does not even consider the possibility that
> there may be different functions for different connections; for him this
> capacity to perform calculation is the most important factor. He uses the
> same type of reasoning to come to the conclusion that our memory has
> 10^15 bits.
> In his classical book, John von Neumann writes: "the standard receptor would
> seem to accept 14 distinct digital impressions per second". He supposes that
> there are "10^10 nerve cells" each one of them working as "an (inner or
> outer) receptor". Then, "assuming further that there is no true forgetting
> in the nervous system", and a normal lifetime of 60 years or about
> 2x10^9 seconds, he comes to the conclusion that our memory capacity is
> 2.8x10^20 bits [1958, p. 63].
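[As an aside: whatever one thinks of the premises, the back-of-the-envelope figures quoted above do at least multiply out consistently. A quick check in Python, using only the numbers given in the quote:]

```python
# Exact integer checks of the figures quoted above.

# Kurzweil, as quoted by Setzer: ~100 billion neurons, ~1000 connections each.
neurons = 100 * 10**9
connections = neurons * 1000           # 100 trillion connections
assert connections == 100 * 10**12

# At 200 "calculations" per second per connection:
calcs_per_second = connections * 200   # 20 x 10^15 per second
assert calcs_per_second == 20 * 10**15

# von Neumann (1958): 14 impressions/sec, 10^10 receptors, ~60 years
# (which he rounds to 2 x 10^9 seconds).
bits = 14 * 10**10 * 2 * 10**9
assert bits == 28 * 10**19             # i.e. 2.8 x 10^20 bits
```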
> It is astonishing that such brilliant people can do these sorts of
> calculations, without knowing how our memory works, taking into account that
> our nervous system is not a digital machine, etc.
> **
> Making machines become conscious is considered one of the hardest problems
> of Artificial Intelligence.
> It is necessary to distinguish two different kinds: consciousness and
> self-consciousness. Animals can be conscious: if an animal is hit, it
> becomes conscious, aware of its pain and reacts accordingly. But only humans
> can be self-conscious. A careful observation will lead to this difference.
> Self-consciousness requires thinking. We can only be conscious when we are
> fully awake, and think of what we perceive, think, feel or wish. Animals
> aren’t able to think. If they could they would be creative as humans are. As
> I have already mentioned, no bee tries a different shape than the hexagon
> for its honeycomb. Animals just follow their instincts and conditioning, and
> act accordingly. Due to their thinking ability, humans may reflect on the
> consequences of their future actions, and control their actions. As I have
> mentioned (see 3.2) a drunkard may be conscious, but he certainly is not
> fully self-conscious - he cannot control his thinking and actions even if he
> wishes to do so. Then, he acts impulsively.
> Thus, animal or human consciousness depends on feelings and human
> self-consciousness depends on conscious thinking. As I have already
> mentioned, machines cannot have feelings, and can only *simulate *a very
> restricted type of thinking: logical-symbolic thinking. One should never say
> that a computer thinks. Thus, I conclude that machines will never be
> conscious, much less self-conscious.
> It is interesting to note that in general one reads about machine and
> consciousness, and very seldom about self-consciousness. Maybe this comes
> from the fact that most scientists regard humans as simple animals - or,
> still worse, as machines.
> **
> As I have expounded on chapter 3, it is linguistically incorrect to say that
> humans are machines, because the concept of a machine does not apply to
> something that has not been designed and built by humans or by machines. But
> let's use this incorrect popular denomination, instead of the more proper
> "physical system".
> There is much more evidence that humans are not machines. I've already
> mentioned some of it, such as the fact that humans may self-determine
> their next thought. Fetzer argues against the mind being a machine using the
> fact that we have other types of thinking than logical-symbolic, such as
> dreams and daydreams, exercise of imagination and conjecture [2001, p. 105],
> and shows that logical symbols are a small part of the signs we use, in
> Peircean terms [p. 60]. He also agrees with Searle that minds have
> semantics, and computers do not [p. 114]. To me, the fact that we feel and
> have willing is also evidence that we are not machines. Another strong
> indication is the fact that we have consciousness and self-consciousness, as
> explained in the last chapter.
> In particular, the evidence that we are not digital machines is
> overwhelming, as we have seen in section 3.8. I'll give some more here,
> regarding our memory. If it were digital, why do we remember what we see in
> a way that is not as clear as our original perception? If our memory were
> digital, there would be no reason for forgetting - or losing - the details.
> There is also an evolutionary argument in this direction. Certainly the
> people who think that humans are machines also believe in Darwinian
> evolution. But if we were machines, there would be no evolutionary reason
> for not storing - at least for some time - all the details perceived by our
> senses, similarly to the capacity computers have of storing images, sounds,
> etc. It seems to me that storing and retrieving details would certainly
> enhance the chances of surviving and dominating. It follows, then, that from
> a Darwinian perspective our imperfect memory makes no sense. This means that
> either the concept of Darwinian evolution is wrong, or we are not machines -
> or both.
> Furthermore, how is it possible to "store" something, forget it and
> suddenly, without "consulting" our memory, remember it? This is not a
> question of access time. A machine either has access to some data or hasn't,
> and this status can only be changed by a foreseen, programmed action.
> Accesses may be interrupted either due to random effects or on purpose,
> directed by the program. This is not our case. Often we make an effort to
> remember and we can't - but we certainly memorize, in our unconscious, every
> experience we have. Some people could say that our unconscious has an
> independent "functioning", and does the "search" for us. But here we come
> again to the question of consciousness and unconsciousness. Certainly all
> machines are unconscious, as we have explained in the last section. The
> reaction of a thermostat is not due to consciousness.
> Finally, apparently our memory is infinite; there is no concrete machine
> with infinite memory.
> The capacity of learning is to me also an indication that we are not
> machines. As I said before, computers don't learn, they store data, either
> through some input or results of data processing. If we knew how we learn,
> medical studies in Brazil would not take 6 years. The fact that you are
> reading this paper shows that you have learned how to read. But notice that
> during reading you don't follow the whole process you had to go through, in
> order to learn it. Somehow, just a technique, an end-result of the learning
> process remains. And this is not a question of having stored some calculated
> parameters, as in the case of a (wrongly called) neural net.
> We share with all living beings an extraordinary capacity for growing and
> regenerating tissues and organs. As I explained in section 3.5, a clear
> observation shows that both processes follow models. Models are not
> physical, they are ideas. The non-physical model is permanently acting upon
> living beings, so they cannot be purely physical systems.
> ------------------------------------------------------------------------
> _______________________________________________
> p2presearch mailing list
> p2presearch at listcultures.org
> http://listcultures.org/mailman/listinfo/p2presearch_listcultures.org
