Friday, January 23, 2009


What is mind?

The material world consists of entities like particles, waves,
forces, and fields. Biological materials, including our brains, belong to
this material world. According to a basic principle of physics, all
events in the material world are caused by the four known interactions
(gravitation, electromagnetic, strong, and weak). Since brain activity
causes all my conscious muscle movements, when I speak my brain
communicates information from its neural network.
The brain has an exceptional, non-physical property: it can call
for conscious experiences: sensations, feelings, thoughts, will, and
self-consciousness. Since such mental entities do not belong to the
material world, they are not situated anywhere in the material world,
and in particular not in the brain. While awake, the brain "transmits"
the essential current
information into consciousness in the form of conscious experiences.
The concept of mind seems to be a useful one, as it is used in
practically all human cultures. However, the word has many meanings
and definitions. My definition:

Mind is all that brain information which can be brought back to
consciousness. It consists of memories of previous conscious experiences
and thoughts.

This definition is in agreement with the everyday use of that word:
1. Mind does not belong to the things of the physical world, but neither
is it just conscious experiences.
2. Mind is connected to the brain activity. Your mind is active only
when you are awake.
3. Your mind today is nearly the same as it was yesterday. The mind
exists during the whole life. As it is the information of previous and
present conscious experiences, it "grows" with experiences from the
primitive mind of babyhood to full maturity, and deteriorates in old age.
4. You are acting "out of your mind" if your behaviour is based on
strong unconscious motives.
5. Mind is not exclusively a private personal experience. Your speech
and your other behaviour tell about your mind.

When you tell what you see, your brain communicates to the listener
the image which it had produced on the basis of visual input. However,
you are convinced that you describe your conscious sensations of that
moment. This dichotomy is possible because the brain information
communicated in speech is exactly the same as the information content of
the conscious sensation of speaking. The dichotomy disappears when we
realize that consciousness is also brain information. It is, at each
moment, that information in the neuronal network which is included in
the conscious experiences when the brain calls for them. Consciousness
is that part of the mind (mind information) which is also present in
the conscious experience at that moment.

When you speak about your own consciousness, your speech refers both
to your brain information and to your present conscious experience. When you
speak about the consciousness of others, you speak about their brain
information, as their conscious experiences are strictly private.

My blog "Conscious Experiences and Mind" describes in more detail the
features of conscious experiences and the connections of brain activities and
conscious experiences.

Friday, January 16, 2009



In the debate "Could a machine think?" both John Searle (1990) and Paul
& Patricia Churchland (1990) use the word "thinking" as a synonym of
"conscious thinking". However, many of the greatest scientific
discoveries are results of unconscious brain activities. Thus the
answer to this question depends on the definition of "thinking".
I ask here: "Which animals have conscious experiences?" and "Could a
machine have conscious experiences like sensations or pain?"

My analysis of conscious experiences (CEs) in animals and robots is
based on two assumptions, which are in line with the knowledge of
modern neurophysiology (see my blog "Conscious Experiences and Mind").

1. Based on sensory information and stored neural information my
brain produces an on-line model of my surroundings and my body. My
conscious bodily activities are based on this neural model for action.
A specific fraction of the information in the neural network which
constitutes this model for action is denoted here the apparent
information.
2. Conscious experiences do not belong to the material world; they
are mental entities. The brain calls for CE using a non-material
consciousness evoking process and this CE has the same
information as the apparent information in the brain (Chalmers,
1995). The neural model for action and the simultaneous CE are two
presentations of the same event (Von Wright, 1998). Because the
information contents of the model for actions in the brain and the
corresponding CE are the same, referring to one representation
necessarily implies the other. "I feel pain" is another way of saying
"My brain has come to the conclusion that tissue damages have occurred
in my body."

Is it possible that primitive animals or some robots in the future
have CEs? No one can be sure. My suggestions for the preconditions of
consciousness are based on two assumptions. First, human beings
have CEs. Second, information is crucial: a CE and the corresponding
brain activity have the same, apparent information. Based on these
assumptions, I suggest the following four preconditions for CEs.

1. The system is cybernetic. It has sensors or detectors which collect
information of the surroundings and the state of the system itself. It
has memory and data processing capacity. Finally, it has servo
mechanisms to respond appropriately to external stimuli. Living things
and robots are cybernetic and fulfill this condition.
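
The four capacities listed above (sensors, memory, data processing, and
servo mechanisms) can be caricatured in a few lines of code. This is
only an illustrative sketch: the thermostat-like goal and all names are
invented here, not taken from the text.

```python
# A toy cybernetic system: it senses, remembers, decides, and acts.
# The thermostat-style goal and all names are invented for illustration.

class ToyCyberneticSystem:
    def __init__(self, target_temperature):
        self.target = target_temperature  # the state the system tries to reach
        self.memory = []                  # stored past readings (memory capacity)

    def sense(self, reading):
        """Sensor: collect information about the surroundings."""
        self.memory.append(reading)
        return reading

    def decide(self, reading):
        """Data processing: compare the input with the goal."""
        return "heat" if reading < self.target else "idle"

    def act(self, command):
        """Servo mechanism: respond appropriately to the stimulus."""
        return f"actuator -> {command}"

system = ToyCyberneticSystem(target_temperature=20.0)
print(system.act(system.decide(system.sense(17.5))))  # actuator -> heat
```

Any thermostat, and equally any bacterium, satisfies this loose scheme,
which is why the first precondition by itself is so undemanding.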

2. The system is intentional. It has goals, and it can evaluate which
structures, mechanisms, or events are useful and which are harmful when
the system pursues its goals. All animals act in ways which we call
intentional. During evolution animals and to some extent also plants
have obtained mechanisms to identify valuable and harmful things or
events and to react appropriately; things and events have meanings.
Those meanings are information and cognitive scientists now work hard
to clarify that information. "Thinking" may be defined as information
processing when meanings are attached to symbols (e.g. Searle, 1990).
This definition implies that, for example, an earthworm thinks. When an
earthworm receives a chemical signal from food or from a predatory
animal and reacts appropriately, its simple neuronal circuit apparently
attaches meanings to those signals.
Robots may be programmed to pursue given goals, and they could
classify external events and objects as "bad" or "good" according to
given principles. However, this "intentionality" may not be genuine,
as it is programmed by an outsider.
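
The attachment of meanings to signals described above can be sketched as
a simple lookup from signal to valence to reaction. The signal names and
reactions are invented for illustration; a real earthworm circuit is, of
course, not a lookup table.

```python
# Toy "meaning attachment": each signal is mapped to a valence
# ("good" or "bad"), and the valence selects a reaction, as in the
# earthworm example above. Signal names are invented for illustration.

MEANINGS = {
    "food_chemical": "good",      # worth approaching
    "predator_chemical": "bad",   # worth avoiding
}

def react(signal):
    """Attach a meaning to the signal and respond appropriately."""
    meaning = MEANINGS.get(signal, "neutral")
    if meaning == "good":
        return "approach"
    if meaning == "bad":
        return "withdraw"
    return "ignore"

print(react("food_chemical"))      # approach
print(react("predator_chemical"))  # withdraw
```

A robot's table of "good" and "bad" would look exactly like this, which
is the point of the objection above: the table is written by an
outsider, so the "intentionality" may not be genuine.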

3. The system is able to construct in its information storage a current
model for actions which includes its present environment and its inner
structure. Animals with a nervous system do construct such models, but
plants and the most primitive animals do not have the necessary
information processing capacity. On the other hand, a robot with an
effective electronic computer and optical or other sensors may construct
such a model.
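
The third precondition, a current model of the environment and of the
system's own state, can be sketched as a stored structure that sensor
input keeps up to date. The field names are invented for illustration.

```python
# Toy "model for action": a stored picture of the surroundings and of
# the system's own inner state, refreshed from sensor input.
# All field names are invented for illustration.

def update_model(model, sensor_reading):
    """Return a new model with the latest sensor information merged in."""
    updated = dict(model)
    updated.update(sensor_reading)
    return updated

model = {"battery_level": 0.9, "obstacle_ahead": False}  # inner state + world
model = update_model(model, {"obstacle_ahead": True})    # new sensor input
print(model)  # {'battery_level': 0.9, 'obstacle_ahead': True}
```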

4. The information processing system includes the consciousness
evoking process. Human beings have this process, and because animal
evolution has been continuous, apparently also the animal species which
have advanced brains have it. Unfortunately, we do not have any idea of
the nature of this consciousness evoking process.

I assume that at some stage of the evolution of animals with a nervous
system the consciousness evoking process appeared. As the information
content of the neural model for actions of a primitive animal is small,
the information content of the corresponding CE is also small, and thus
the mental experiences are primitive. The most primitive CEs may have appeared
quite early in evolution. Senses warning of tissue damages and chemical
senses have been the first to evolve in primitive animals. The sensations
which evoke the strongest emotions in humans are also caused by tissue
damages (pains) or chemical substances warning of dangers (nauseous
smells and tastes). This suggests that even quite primitive animals have
conscious experiences like pains and smells.

An analogy principle can also be applied to the conscious experiences in
man and other animals. Similar brain structures apparently have similar
consciousness evoking processes and thus call for similar CEs. The
similarity of brains can be evaluated either on the basis of anatomy and
electrophysiology, or on the basis of the bodily activities evoked by the
brain. The brains of different humans are fairly similar, and we also
express ourselves rather similarly in similar situations. On these
grounds it is generally assumed that adult people have CEs, and that
those experiences are fairly similar in similar situations. However,
quite often the similarity of CEs is illusory. In a given situation the
actual experiences may be quite different, while the behaviours are
similar because of social conventions.

The sensory systems and brains of a new-born baby are quite different
from those of adults. Moreover, he does not yet have in his brain many
internal representations needed to interpret current sensory signals.
Thus his experiences are apparently very different from the experiences
of adult people. However, when the child grows up, the CEs change
continuously, approaching the adult type CEs. The brains of great apes
are rather similar to human brains, and many aspects of their behaviour
are qualitatively quite similar to human behaviour. Thus, according to
the analogy principle, the apes have quite versatile CEs.

In principle, advanced robots could fulfil the suggested four
preconditions and have CEs. Even then, I am skeptical about the
possibility that a conscious robot will emerge. Robots are not
genuinely intentional, and as long as we do not know anything about the
consciousness evoking process, it would be an improbable coincidence
that a computer "brain" would accidentally produce such a process.

However, let us assume that in the future an advanced robot is genuinely
intentional and has the consciousness evoking process. Because of
different data processing structures the analogy principle is not useful.
How could such a robot demonstrate to human beings that it has CEs? As
I see it, the robot has to be capable of introspection. In that case it
could discover that in addition to the information in its electrical
circuits it possesses a coded representation of the external world, which
is non-material in the sense that it cannot be included in the physical
description of the world. If it is able to communicate such information
to the people (and if it is to be trusted), it apparently has CEs.

Chalmers, David J. (1995). The puzzle of conscious experience.
Sci. Am., 273, 62-68.
Churchland, Paul M. & Churchland, Patricia Smith (1990). Could a
machine think? Sci. Am., 262, 26-31.
Searle, John R. (1990). Is the brain’s mind a computer program?
Sci. Am., 262, 20-25.
Von Wright, Georg Henrik (1998). In the shadow of Descartes.
Dordrecht, Boston, and London: Kluwer Academic Publishers.


About me

D. Techn. Docent in biophysics, retired