Worlds Within Worlds
- The Holarchy of Life
(Chapter 3)
by Andrew P. Smith, Oct 24, 2005
(Posted here: Sunday, May 27, 2007)
3. TRANSLATION, TRANSCRIPTION AND COMPRESSION
"We have processes inside one E. coli
that do what we used to think required many complete nerve
cells. E. coli is about the size of a single spine of a
pyramidal cell [a type of neuron], and each pyramidal cell has
about five thousand spines; its total volume is equivalent to
about a quarter of a million E. coli, so these facts
should bring about a revolution in the way we think about
computation in the brain."
-Horace Barlow [1]
"I have a hunch that there's some deep way in
which IBM and E. coli know their worlds in the same way."
-Stuart Kauffman [2]
In Chapter 2, we saw that the forms of
existence within a cell--beginning with atoms and comprising
ever more complex molecular structures--can be said to
constitute a single level of existence, which is usually called
the physical level, or the level of matter. In this chapter, we
will examine the next level of existence, the biological,
beginning with the cell and culminating with the organism. This
level is also called the level of life.
Viruses: Agents of Evolutionary Change?
Cells are generally considered the simplest
form of existence that can genuinely be said to be alive. Before
discussing them, however, I want to say something about viruses,
a still simpler form of existence composed of just nucleic acids
(DNA or RNA) and a few proteins. Viruses are usually considered
an intermediate form of life, more complex than any large
molecule, yet lacking most of the properties of cells. Most
scientists believe, however, that viruses first appeared after
the evolution of cells, and so are not a true transitional form
between the latter and complex molecules.
More specifically, the predominant scientific
view is that viruses evolved from intracellular pieces of DNA
that are to some degree separate and autonomous from the main
genetic material (Campbell 1981). These semi-autonomous
sequences of DNA include plasmids, found in many bacteria and in
simple eukaryotic cells like yeast; and transposons, found in
the cells of some multicellular forms of life, including our own
species. Plasmids are, in effect, smaller, satellite sequences
of DNA, which can be transferred from one bacterial cell to
another. Transposons are sequences within the genome that can
under some circumstances change their position, moving or
"jumping" from one area to another of the genome. Thus both
plasmids and transposons are mobile genetic elements, travelling
pieces of information that can modify the DNA sequences within
cells.
Viruses in some respects combine the features
of both plasmids and transposons. Like the former, they can be
transferred from cell to cell; like the latter, they can
integrate themselves into the genome of the host cell. In this
way they spread from cell to cell and organism to organism,
using the genetic material of the host cell to reproduce
themselves. One could speculate, therefore, that they originally
evolved as a means of transferring information from one cell to
another.
Lynn Margulis is best known for her theory,
now widely accepted, that mitochondria, subcellular organelles
found in all eukaryotic cells, evolved when these cells
assimilated smaller prokaryotes like bacteria (Margulis 1971).
More recently, she has suggested that plasmid transfer among
prokaryotes makes possible a virtually world-wide web of genetic
information, shareable among all bacteria on the planet
(Margulis and Sagan 1986). Viruses might be regarded as part of
a similar web among eukaryotic organisms. Though viruses are now
considered, at best, harmless, and at worst, of course, lethal
to organisms, at one time they may have contributed to the
evolution of the latter by bringing new information into the
genome. That is, they would pick up a piece of DNA from one
organism and transfer it to another.
When multicellular organisms were first
evolving, this kind of process might have been capable of
creating a great deal of genetic diversity. However, it's not at
all clear how much evolutionary change would be possible from
such a mechanism today. Most organisms reproduce sexually, using
gametes that are segregated from the other cells of the body.
Thus viral information transfer into the somatic (non-sexual)
cells of organisms would have no effect on subsequent
generations of organisms. Furthermore, viruses today are usually
species-specific. A given type of virus generally infects only
one kind of organism, or at most several closely related
organisms (Singer and Berg 1997), though given viruses' great
capacity for change through mutation, broader host ranges are
not out of the question.
Viruses might affect the evolution of our own
species in another way, however. Recent research has revealed
that susceptibility to AIDS, one of the most devastating viral
diseases to have afflicted human beings, is correlated with the
presence of certain receptor proteins on the surface of immune
cells. That is, individuals with certain types of these
receptors are less likely to become infected with the virus than
individuals with a different kind of receptor (Michael et al.
1997). It's not inconceivable, therefore, that AIDS, or some
other viral disease, could exert a powerful selective force for
human beings of a certain genetic composition. To be sure, for
this to occur, a much larger fraction of people would have to
die than the already enormous numbers that have so far. In
addition, for the selection to have much significance
biologically, the genetic composition of the survivors would
have to confer on them differences in properties other than
simply resistance to a particular virus. AIDS, therefore,
probably is not, and will not become, a selective agent in this
sense. However, as the population density of our species
increases, together with a still greater increase in
interactions among people all over the planet, the emergence of
new, still deadlier viral diseases is a serious possibility
(Preston 1995; Garrett 1995).
Furthermore, many viral diseases, including
AIDS, could result in selection at levels of existence other
than the biological. By significantly reducing the number of
people in particular areas of the world, these scourges may
drastically alter the nature of the societies these people live
in. Indeed, this has already occurred in some places, such as
parts of Africa, where a large fraction of the most productive
members of society have died (Bennett and Blaikie 1992; Bond et
al. 1999). Even developed countries like the U.S., where a much
smaller fraction of the population has been affected, have
experienced significant social change, for example, in new
attitudes towards sex (Feinleib and Michael 1998). These
observations suggest that widespread viral diseases might have a
major impact on the evolution of the higher, social stages of
the holarchy. I will discuss human societies in the following
chapter, and in Part 2 of this book, we will examine how
evolution may occur at other levels of existence besides the
genetic/biological.
In conclusion, from the holarchical point of
view, some viruses might be considered to be a component of
organisms, or of societies of organisms. In the early stages of
evolution, they may have transferred genetic information from
one organism to another, greatly increasing the diversity of
life. In our present era, their impact on evolution is most
likely greatest in the social structure of our societies.
Nevertheless, they are not an autonomous form of life like
cells. The latter, which I will now discuss, are much more
fundamental to the development of holarchy.
Properties of Cells
We can see in cells many of the same
properties that we see in ourselves and other living organisms,
albeit in a very rudimentary form. As I suggested in Chapter 2,
most of these properties can be grouped into one of three
fundamental features: assimilation or growth; adaptation or
self-maintenance; and communication. We will briefly consider
each of these properties as they are manifested by cells, then
look at a fourth property, new in the holarchy with cells:
reproduction.
1. Assimilation (growth). All cells
have the ability to grow, that is, to assimilate substances or
energy from their environment. This quality distinguishes them
fairly clearly from lower-order holons. As we saw in Chapter 2,
atoms and molecules have the ability to grow in a very simple
sense, by assimilating other atoms or electrons. However, this
growth is limited. In the case of atoms, one or a few electrons
are incorporated, at which point no further assimilation can
occur. Molecules, especially large, complex ones, may
incorporate many atoms, but in a sense, this is not true
assimilation, because the identity of the original molecule is
not retained. That is, in the process of incorporating atoms, a
molecule becomes some other kind of molecule.
In contrast, a cell incorporates many kinds
of substances without changing its basic identity as a cell.
These substances include most of the basic kinds of holons found
within cells. That is, cells can assimilate atoms, such as
sodium ions; small molecules such as amino acids and simple
sugars; and in some cases, macromolecules such as proteins, and
even small organelles. Some unicellular organisms, such as
Paramecia, can even ingest other cells, and as I noted
earlier, certain organelles within cells seem to have originated
from such an assimilation process. In all cases, the substances
are transformed--or digested as we would say of the analogous
process within organisms--within the cell so that they become
part of the cell.
This process of transformation clearly
distinguishes assimilation in cells from the primitive process
we identified in atoms in Chapter 2. Whereas an electron that is
captured by an atom retains its basic identity as an electron, a
substance ingested into a cell is usually converted into another
substance. This transformation may result in either lower-order
holons, through a process of metabolism or degradation, or
higher-order holons, through synthesis. An example of the first
process is the conversion of a sugar molecule into carbon
dioxide and water, while the second is illustrated by
incorporation of an amino acid molecule into a protein. Only
atoms, like sodium and calcium ions, retain their original
properties following assimilation into the cell.
Thus when a holon is assimilated into a cell,
it may move, so to speak, in one of three possible directions in
the holarchy: down to a lower form of existence; up to a higher
form; or it may retain its original identity and position in the
holarchy. Processes that involve a downward movement ordinarily
result in the extraction or accumulation of energy. When sugar
molecules are metabolized, energy is released from the chemical
bonds that are broken, and some of this energy is captured and
stored by the cell. When the movement is upward, in contrast,
energy is required, to form the new chemical bonds that result
in the synthesis of a higher order holon.
This suggests that energy is a useful way to
understand the concept of directionality in the holarchy, that
is, to distinguish up from down. If energy is required to create
higher-order holons, and is released during the formation of
lower-order holons, it follows that higher-order holons have
more energy than lower-order holons. I will discuss this
important idea further later in this chapter.
2. Adaptation (self-maintenance). In
Chapter 2, I defined adaptation as the interaction of a
lower-order holon with a higher-order holon. A cell in an
organism, for example, adapts to conditions surrounding it,
which are part of the organism, a higher form of existence. When
considering autonomous cells, however, the higher-order holon
may need to be understood in fairly broad terms. A single-celled
organism swimming in a body of water adapts to this surrounding
liquid environment, which may not actually be a higher-order
system. However, in nature, the environment of any cell, in the
broadest sense, is part of the earth, which as we will see
later, may be viewed as an emerging higher-order system.
A very common example of adaptation found in
many cells occurs in response to a change in sodium ion (salt)
concentration. All cells have a characteristic concentration of
sodium inside them, which is vital to their growth and
functioning. Many cells, though, inhabit an environment in which
the sodium ion concentration is much higher than that within the
cell. In a simple physical system, the sodium outside the cell
would diffuse into the cell, until the concentration of this ion
inside and outside of the cell was the same. Cells, however,
have special molecules called "sodium pumps" on their surface
which actively extrude sodium ions from the cell, and so
maintain their low internal concentration of this ion (Matthews
1998). This process requires energy.
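To make the balance concrete, here is a minimal numerical sketch of the process just described: sodium leaks in down its concentration gradient while the pump extrudes it at a cost in energy, holding the internal concentration far below the external one. The rate constants are illustrative assumptions, not measured values.

    # Toy model of sodium homeostasis: passive leak in, active pump out.
    # Rate constants are illustrative assumptions, not physiological values.
    na_out = 145.0     # external Na+ concentration (mM), held constant
    na_in = 12.0       # internal Na+ concentration (mM)
    leak_rate = 0.05   # fraction of the gradient leaking in per time step
    pump_rate = 0.6    # fraction of internal Na+ extruded per step (costs ATP)

    for step in range(50):
        leak = leak_rate * (na_out - na_in)   # diffusion follows the gradient
        pumped = pump_rate * na_in            # the pump works against the gradient
        na_in += leak - pumped

    print(f"steady internal Na+ ~ {na_in:.1f} mM, despite {na_out} mM outside")

Without the pump term, the same loop would simply drive the internal concentration up toward the external one, which is the point of the example.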
Another example of an adaptive process in
cells is provided by their response to chemical substances used
in cell-cell communication. Many cells in the body respond to
special chemical agents, such as neurotransmitters or hormones,
which induce the cell to grow, to reproduce itself, or to
secrete its own chemical messengers. Sometimes, however, the
cell may be overexposed to the substance. Under these
conditions, the cell may alter its sensitivity to the substance,
so that it does not become overwhelmed by it. It does this by
reducing the number of receptor molecules on its surface that
interact with the chemical messenger, or by altering the degree
of fit or affinity that the receptor has for the messenger (Law
et al. 1984). This process is called desensitization.
Overexposure of a cell to a chemical
messenger may result from either a higher than normal
concentration, or to a normal concentration that is maintained
for a longer than normal period of time. The latter situation is
particularly interesting. Under these conditions the cell is
responding not to its immediate environment--the presence of a
certain substance--but to the past history of that
environment. Its behavior is shaped by experience.
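A toy sketch of what such history-dependence might look like, assuming a hypothetical pool of surface receptors that is slowly removed during exposure and slowly restored afterwards; the numbers are arbitrary, chosen only to make the contrast visible.

    # Toy sketch of receptor desensitization: the cell's response depends not
    # just on the messenger's present level, but on how long it has persisted.
    # All constants are illustrative assumptions.
    def run(exposure):                    # exposure: messenger level at each step
        receptors = 100.0                 # surface receptors available
        responses = []
        for level in exposure:
            responses.append(receptors * level)      # response ~ receptors x signal
            receptors -= 0.1 * receptors * level     # prolonged exposure removes receptors
            receptors += 0.05 * (100.0 - receptors)  # slow recovery toward baseline
        return responses

    brief = run([1.0] * 2 + [0.0] * 18)   # short pulse of messenger
    prolonged = run([1.0] * 20)           # same concentration, held much longer

    print("response to brief pulse:    ", [round(r) for r in brief[:5]])
    print("response during long pulse: ", [round(r) for r in prolonged[::4]])
    # The prolonged exposure drives the response steadily downward: the cell is,
    # in effect, registering the history of its environment.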
We can call this kind of adaptation a
primitive form of learning. What distinguishes this kind of
adaptation from the simpler forms a cell is capable of is the
dimension of time. Whereas in the simpler form of
adaptation a cell responds only to the presence of a substance
in space, in this form it's responding to this presence over a
particular length of time. In other words, the cell is capable
of recognizing a distinction not only in the physical dimension
of some holon, but also its temporal dimension. As I discussed
in the previous chapter, the ability to function in temporal
dimensions is usually a property of higher stages of existence.
Cells that can learn in this sense are usually found in
organisms, and particularly in the brain, where they participate
in these higher stages of multicellular organization. I will
return to this point later.
3. Communication. Communication, we
saw in Chapter 2, is the property of holons manifested in their
interaction with other holons of the same or similar type. Thus
atoms communicate with each other by forming chemical bonds
between themselves. Communication, defined in this way, is
critical to the development of the holarchy, for it allows
higher-order holons to be created from simpler ones. Atoms
communicating create molecules; molecules communicating create
polymers; polymers communicating create larger molecular
structures.
Communication at the level of the cell is,
again, more complex, more sophisticated, than communication at
the level of atoms and molecules. There are a variety of ways
cells can communicate with each other. The simplest forms of
communication involve direct physical interaction between the
cells, as occurs in all tissues in the body. Even this
interaction is not really simple, however, as it involves some
very specialized macromolecules on the surface of each cell,
which recognize and bind to each other (Edelman 1984; Lander
1989; Tessier-Lavigne 1996). This binding often triggers
internal metabolic processes within the interacting cells. Cells
may also communicate physically by means of gap junctions, tiny
channels that connect adjacent cells (Dulbecco 1987).
As I noted in the previous section, some
cells can also communicate at a distance, by secreting certain
chemical substances or messengers that trigger specific
responses in other cells. This kind of communication may occur
over very short distances, as occurs between neurons during
synaptic transmission, or over very great distances, such as
when cells release hormones into the blood stream that act on
cells in other parts of the body. Sometimes this communication
is hierarchical, as when cells in the hypothalamus in the brain
release hormones that activate cells in the anterior pituitary,
a gland at the base of the brain. The pituitary, in turn,
releases another set of hormones that activates tissues such as
the adrenal glands, the sexual organs, and the thyroid (Dulbecco 1987).
Hierarchical communication also is very commonly carried out by
nervous transmission. For example, cells in the brain send
signals to cells in the spinal cord, which in turn send signals
to muscles.
As I discussed in the previous chapter,
hierarchical organization of this kind is non-nested, occurring
between holons of the same type which participate to different
extents in higher, social holons. As with nested or holarchical
relationships, however, this organization is not completely
unilateral. While the brain sends signals to the muscles via the
spinal cord, we also know that the activity of the muscles
influences the brain, through feedback loops (Baker 1999).
In Chapter 2, we saw that there was a
fundamental division between reactive atoms, which are capable
of forming bonds with each other, and inert atoms, which are not.
Only reactive atoms can create molecules and thus become part of
higher stages and levels of existence. The same fundamental
division exists on the biological level, between prokaryotes
and eukaryotes. Prokaryotes, which include most bacteria [3],
live a fairly autonomous existence. While some prokaryotes are
capable of interacting with one another, through chemical
signalling, and may even form loose aggregations of cells, only
eukaryotic cells are capable of the highly complex, holarchical
associations that make up organisms. Thus eukaryotes, like
reactive atoms, are highly communicative.
What is it about eukaryotes that makes them
so adept at communicating, at interacting with other holons of
the same type? One of the major differences between prokaryotic
cells and eukaryotic cells is that the latter have much larger
and more complex genomes. The genome is the cell's repository of
information; this is contained in the sequences of DNA, which
code for all the proteins in the cell. As we just saw, when one
cell communicates with another, it generally does so by
presenting a certain kind of molecule to the latter, either
directly in physical association, or indirectly, as through
communication by hormones or neurotransmitters. The second cell
must recognize this molecule with another molecule of its own. A
large genome thus enhances the ability of cells to communicate
with one another, by increasing their repertoire of signalling
molecules. Every molecule that a cell uses to communicate with
another cell must be encoded by a distinct gene [4], so the more
genes a cell has, the greater the potential variety
of communicative interactions it can make with other cells.
Indeed, it appears that a great deal of the extra genetic
information present in eukaryotes is devoted to communication.
"Half the genes that we have are involved in intracellular
communication,"5,
estimates molecular biologist Tony Hunter--intracellular
communication being the final step in intercelluar
communication.
Not only is the genome of the eukaryotic cell
larger than the genome of prokaryotic cells, but there appears
to be a further division among eukaryotic cells. Bernardi (1993)
has pointed out that the genomes of the highest vertebrates,
including birds and mammals, are much larger than the genomes of
lower vertebrates and invertebrates. He calls the latter the
paleo- (evolutionarily old) genome and the former the neogenome,
a distinction that suggests a parallel at the next level of
existence between the paleocortex and neocortex in vertebrates.
In the next chapter, we will see that the brain in fact plays an
analogous role in the organism to that the gene plays in the
cell. For now, we just observe that there is a correlation
between the size of the cell's genome and the ability of the
cell to form complex multicellular holons.
I also pointed out in Chapter 2 that among
reactive atoms, carbon is of primary importance, because of its
ability to form chemical bonds with four other atoms
simultaneously. This property is what makes carbon the so-called
building block of life. The analogous holon on the biological
level is represented by the neuron, or nerve cell.
Neurons are the most communicative of all eukaryotic cells,
because, like carbon atoms, they can interact simultaneously
with many other cells [6].
To summarize, there is a correlation between
the amount of information a cell has, and its ability to
communicate with other cells. Not only do eukaryotes contain
more genetic information than prokaryotes, but eukaryotic cells
of higher organisms--which have more complex multicellular
organizations, particularly in the brain--generally contain more
information than do the cells of simpler organisms.
There are some very significant exceptions to this rule, which
we will consider later, but for now I want to emphasize the
general correlation.
As we will see in the next chapter, a similar
correlation holds with organisms, on the next level of
existence. On that level, there is a strong correlation between
the size of the brain and the extent to which the organism forms
social organizations. A traditional definition of communication,
of course, is the transmission of information, so this
correlation should not be surprising. Some systems theorists,
such as Varela and Maturana, prefer to define communication in a
way that does not involve reference to information, such as
"coordination of behavior." [7]
While this definition is quite consistent with the one I'm using
here, I believe the relationship of information to communication
is critical, as I will discuss at some length later in this
chapter.
4. Reproduction. In addition to the
trio of assimilation, adaptation and communication, cells also
have the ability, of course, to reproduce themselves. This
property perhaps most clearly distinguishes cells from all lower
forms of existence. Atoms, molecules and complex molecular
structures can't reproduce themselves. DNA, to be sure, can
reproduce itself in the test tube, but this requires special
conditions, including the presence of certain enzymes, that are
not found in nature outside of cells (Kornberg and Baker 1992).
As I pointed out earlier, each of the three
universal properties of holons can be defined in terms of
interaction with other holons. In assimilation, a holon
interacts with a lower-order holon; in adaptation, it interacts
with a higher-order holon; and in communication, it interacts
with a holon on its own stage and level of existence.
Reproduction (of the cell) is unique in that it's the only
property of a holon that does not involve interaction with
another holon. Alternatively, it's the only property of a holon
that involves simultaneous interaction with holons above, below,
and on the same plane of existence as itself.
What do I mean by this paradoxical statement?
Consider the first definition. When a cell divides, to be sure,
numerous interactions occur among the holons it contains
(Campbell et al. 1999). The chromatin material in the nucleus
condenses into chromosomes, which line up along the mitotic
spindle, a molecular structure that holds the chromosomes in
place. The nucleus then divides, along with the rest of the
cell. The molecular details of these processes are quite
complex. But from the point of view of the reproducing entity,
the cell, these interactions are with itself. In this
sense, its reproduction does not involve interaction with any
"other" holons.
On the other hand, cells in the organism (and
even cells living autonomously outside of organisms) normally
don't divide in an uncontrolled fashion--when they do, it's
generally a sign of a disorder, for example, the growth of a
tumor. Indeed, for any eukaryotic cell to divide, it must
undergo a characteristic sequence of processes called the
cell cycle. This cycle includes replication of the chromosomes
and mitosis, their separation into two equal sets, as well as
division of the cell into two daughter cells. At several places
in this cycle, called checkpoints, the process can be aborted
if certain conditions aren't fulfilled (Murray and Hunt 1993;
Stein et al. 1999).
What sort of conditions must be fulfilled?
The division of cells within healthy organisms is often
regulated by the tissue they compose. For example, certain
growth factors (hormones or related molecules) secreted from
surrounding cells may signal the cell to divide; if these
factors aren't present, it won't divide (Stein et al. 1999).
Cell division is also regulated by direct, physical contact with
neighboring cells. This is important in preventing a tissue from
growing too large. So reproduction of the cell involves both
adaptation (interaction with a higher-order holon, which as
we will see shortly, is represented by tissues and organs), and
communication, interaction with other cells. Furthermore,
reproduction of cells is also generally triggered by size--the
cell must double its mass in order to produce two daughter
cells of its original size. So reproduction is also a process of
assimilation, a means of allowing the cell to continue
incorporating lower holons into itself. As Teilhard de Chardin,
poor on explanations but sharp on what needs explaining, put it:
"the cell, continually in the toils of assimilation, must split
in two to continue to exist."
8
So in an important sense, reproduction of a
cell involves all three fundamental processes that holons engage
in--assimilation, adaptation and communication. In Chapter 2,
however, we saw that any interaction among holons can be thought
of as involving all three of these processes, with the
particular one defined depending on the holon's point of view.
When an atom and a molecule form a bond, for example,
assimilation occurs from the point of view of the molecule;
adaptation occurs from the point of view of certain electrons
in the atom and in the molecule; and communication occurs from
the point of view of the atom.
Incorporating this notion into the preceding
discussion, we could say that reproduction is a process by
which the holon's point of view, or identity, is expanded, in
such a way that it participates in all three of the other
processes. When a cell reproduces, it becomes an entity
which is capable of perceiving itself simultaneously as a)
assimilating, which is the immediate trigger or cause for
reproduction; b) communicating, which gives it permission to
reproduce; and c) adapting, which allows no other response. To
put it another way, assimilation makes reproduction necessary;
communication makes it possible; and adaptation makes it
sufficient.
Transcription and Translation: the Deep
Structure and Surface Structure of Information
So far, I have discussed reproduction of
cells in very general terms, as a series of interactions with
other holons. Now we will consider some of its specifics. In
particular, we will examine the role of information, which is a
key concept in the process.
About half a century ago, the mathematician John
von Neumann (with the help of Stanislaw Ulam) developed a scheme
by which a computer could reproduce itself. Von Neumann's work
was the beginning of a theory called cellular automata (Casti,
1992; Wolfram, 1994), which I will be discussing in more detail
in Part 2 of this book. What is relevant to our discussion here
is how von Neumann solved this problem. At the outset, he
recognized that since a computer can, in principle, carry out
any procedure that can be broken down into a series of logical
steps, it was not difficult to write a program which, when run
on a computer, would enable that computer to construct a copy of
itself. The program would tell the computer how to make each
component (presumably using robots or other forms of automated
technology), and then how to assemble these components into an
identical computer.
That part is straightforward enough. The more
difficult problem is that if the duplicate computer is to be
able to copy itself, too, it would have to have this program as
well. That is, the program that the first computer ran in order
to construct the second computer would somehow have to go into
the latter's construction. The second computer, in addition to
being a complete hardware copy of the first, would also have to
have access to the program that the latter ran in order to
duplicate itself. This would allow the second computer, in turn,
to reproduce itself, by operation on the program in the same way
that the first computer did.
This problem led von Neumann to grasp a key
principle about reproduction using programmed instructions. The
program that enables a computer to reproduce itself must be run,
or operated upon, in two different ways. First, it must be
translated, that is, its rules followed to create the
duplicate computer. And second, it must be transcribed,
that is, copied, so that the duplicate computer also has this
program. Conventional computers only translate programs; they
generally don't transcribe them. So von Neumann's computer had
to have a special construction so that it could distinguish
between the two processes, and know how, and when, to change its
mode of operation from one to the other.
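A minimal sketch of von Neumann's distinction, in present-day code rather than his original automaton: a hypothetical "machine" carries a tape of build instructions, translates the tape (follows it to construct the offspring's body), and transcribes it (copies it verbatim into the offspring, so that the offspring can reproduce in turn). The class and its tiny instruction set are inventions for illustration only.

    # Sketch of von Neumann-style self-reproduction: the same tape of
    # instructions is used in two modes, translated (executed) to build the
    # copy, and transcribed (copied verbatim) so the copy can reproduce.
    class Machine:
        def __init__(self, tape):
            self.tape = tape          # build instructions (the "program")
            self.parts = []           # the machine's constructed body

        def translate(self, tape):
            """Follow the instructions to construct a body of parts."""
            return [f"part:{instruction}" for instruction in tape]

        def transcribe(self, tape):
            """Copy the instructions verbatim, without interpreting them."""
            return list(tape)

        def reproduce(self):
            child = Machine(self.transcribe(self.tape))   # offspring gets the program
            child.parts = self.translate(self.tape)       # offspring gets a built body
            return child

    parent = Machine(["wheel", "gear", "frame"])
    parent.parts = parent.translate(parent.tape)
    child = parent.reproduce()
    grandchild = child.reproduce()    # the copy can copy itself in turn
    print(grandchild.parts)           # ['part:wheel', 'part:gear', 'part:frame']

Leaving out either method breaks the scheme: without translation the offspring has no body, and without transcription it has a body but no instructions to pass on.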
The insightfulness of von Neumann's reasoning
became clear just a few years later, when biochemists discovered
that cells and organisms reproduce themselves in a manner that
follows the same basic principles. The DNA in the genome of
every cell is the program that allows the cell to reproduce
itself, and it, too, is operated on in two different ways. This
program is translated by the synthesis of messenger RNA,
which in turn instructs amino acids to join in a particular
sequence to form all the different proteins of the cell. This is
what enables a cell to create a copy of itself. The DNA is also
transcribed, by the pairing of nucleotide bases that
allows one sequence of DNA to make an exact copy of itself. This
provides the duplicate cell with the ability to reproduce
further.
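The same two operations can be sketched directly on a toy genome, following the chapter's usage of the terms: "transcription" here means copying the DNA by base pairing, and "translation" means reading it off in codons to assemble a protein. The three-entry codon table is only a fragment of the real one, included for illustration.

    # Toy sketch of the genome's two modes of use, in the chapter's terms.
    PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}
    CODON = {"ATG": "Met", "GGC": "Gly", "TTT": "Phe"}   # tiny illustrative subset

    def transcribe(dna):
        """Copy the sequence by base pairing (pairing twice regains the original)."""
        return "".join(PAIR[base] for base in dna)

    def translate(dna):
        """Read the sequence three bases at a time and assemble a protein."""
        codons = [dna[i:i + 3] for i in range(0, len(dna), 3)]
        return "-".join(CODON[c] for c in codons)

    gene = "ATGGGCTTT"
    copy = transcribe(transcribe(gene))   # two rounds of pairing: an exact copy
    protein = translate(gene)             # the expressed product
    print(copy, protein)                  # ATGGGCTTT Met-Gly-Phe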
The reproduction of cells and that of organisms,
however, differ somewhat, both from each other and from von
Neumann's reproducing computer, with respect to the way these
two processes are emphasized. When a cell reproduces itself, it
divides in half (mitosis), so that its contents are distributed
equally between the two daughter cells. This process is
therefore primarily one of transcription. The DNA in the
genome makes a copy of itself, with one of the two resulting
copies going to each daughter cell. The two daughter cells must,
of course, translate this DNA to replenish their supplies of
proteins, but this translation process is not an essential part
of the reproduction of cells--as it would be in von Neumann's
computer. In the cell, translation comes after reproduction, as
a means by which the cell replaces components that die or turn
over.
When an organism reproduces itself, on the
other hand, it creates a single cell (the fertilized egg) that
reproduces itself many times. The resulting progeny cells then
differentiate into all the various tissues of the body; some
cells become muscle tissue, some become liver tissue, some
become brain tissue, and so on. This differentiation process
occurs by the translation, or as molecular biologists
say, expression, of different portions of the genome.
Muscle cells express one set of genes; liver cells express
another set of genes; brain cells express still another set of
genes. Thus reproduction of the organism, to the extent that
it's more than just reproduction of the cell, is primarily a
process of translation.
Another way to understand reproduction of the
organism is as a process of reproduction and communication of
cells. It begins with reproduction of the cell
(transcription), and is followed by communication of the
dividing cells with each other. This allows us to see that
communication of cells is primarily a process of translation of
information.
So while both reproduction and communication
of the cell involve operation on the genome, they operate on it
in different ways. Reproduction of the cell is primarily a
process by which the genome is transcribed; communication is
primarily a process by which the genome is translated. These
two processes, in effect, point the genome in two directions,
towards the stages below it, and to the level above it. On the
one hand, the genome contains all the information necessary to
organize or actualize all the stages below it, through the
synthesis of all the proteins in the cell. This is the
information transcribed during reproduction of the cell. On the
other hand, the genome also contains the potential to create a
still higher fundamental system on the next level of existence.
This is the information translated during communication of
cells.
The information in the genome which the cell
makes use of in translation and transcription corresponds,
respectively, to its deep structure and surface
structure. Every cell in the organism contains all the
genetic information that every other cell contains. The sum
total of all this genetic information is the genome's deep
structure. When a cell reproduces, it transcribes this
information; that is, it reproduces the genome's deep structure.
But cells in different parts of the body differ, as I just
noted, according to which genes they translate or express. Cells
in the heart express certain genes, and don't express certain
other genes; cells in the liver express a different set of
genes; and so on. Furthermore, even a particular type of cell
may express certain genes at one time, other genes at another
time. The particular pattern of genes expressed by any given
cell at any given time represents its genetic surface structure.
When a higher, multicellular stage of existence is formed,
through reproduction or evolution of an organism, the cells
translate the genome, that is, they express its surface
structure.
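A small sketch of the distinction, using invented gene names: every cell type carries the same complete genome (the deep structure), but each expresses only its own subset of it (the surface structure).

    # Sketch of deep vs. surface structure. Gene names are illustrative
    # placeholders, not a real gene catalogue.
    GENOME = {"actin", "myosin", "albumin", "synapsin", "hemoglobin", "keratin"}

    EXPRESSION = {                       # which genes each cell type translates
        "muscle": {"actin", "myosin"},
        "liver":  {"actin", "albumin"},
        "neuron": {"actin", "synapsin"},
    }

    def reproduce(cell_type):
        """Cell division copies the deep structure: the daughter gets every gene."""
        return set(GENOME)

    def communicate(cell_type):
        """Interaction with other cells draws only on the surface structure."""
        return EXPRESSION[cell_type]

    for kind in EXPRESSION:
        print(kind, "carries", len(reproduce(kind)), "genes, expresses",
              sorted(communicate(kind)))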
To reiterate, the deep structure of the
genome is what allows the cell to reproduce itself, by a process
of transcription; this process actualizes the physical stages of
existence within another cell. The surface structure of the
genome is what allows the cell to interact with other cells, by
a process of translation; this process allows cells to
communicate with one another, creating higher-order holons
containing the cell. As we will see in the next chapter, an
analogous role is played in the organism by the brain, which
also has a deep structure and a surface structure. And in the
second part of this book, when I discuss evolution in the
holarchy, we will see that each type of structure can change--in
the cell and the organism--and that depending on which does
change, a different evolutionary process results.
The Higher Biological Stages
Keeping this in mind, let's now look at the
higher stages of the biological level, the new holons created by
the process of communication among cells. Just as we saw, in
Chapter 2, that higher physical stages were created by atoms
combining into molecules, which in turn combined into more
complex molecules, we can see an analogous organization of
cells. We all know that cells in the body form tissues, and
tissues form organs. These terms, however--tissues and
organs--though useful for a very general discussion of the kinds
of holons within the organism, are rather imprecise. A closer
examination of the anatomy of the human organism--which is the
most highly evolved organism, and should therefore feature all
the possible biological stages, in their greatest degree of
development--suggests that we can again identify a series of
holarchically organized holons that feature increasingly more
complex associations of cells (Table 3).
As with the physical level, each stage
consists of a homogeneous group of the holons directly below it,
and each higher stage has emergent properties not found in the
lower. Biologist Rudolf Raff, who refers to biological holons as
modules, suggests that they have four general properties (Raff
1996): 1) they have a genetically discrete identity, that is,
each cell in a module expresses an identical set of genes (the
same genetic surface structure); 2) they are composed of
lower-order modules and are in turn part of higher-order
modules--i.e., they are true holons; 3) they have a distinct
physical location in the organism; and 4) they can have various
degrees and kinds of interconnectivity.
Let's now consider the relationships of these
holons in light of what we learned in Chapter 2. Recall that
atoms and cells, unlike the stages between them, are capable of
an autonomous existence; that is, they can survive outside of
larger-order holons. The same is true of organisms, obviously,
and conversely, is not true for the stages between cells and
organisms. The various kinds of multicellular holons presented
in Figure 2 are found only within organisms; they can't exist on
their own. So in this important respect, the biological level is
analogous to the physical.
Another important analogy between organisms
and cells concerns the way they're organized. We saw in Chapter
2 that cells contain all the lower physical stages, both in
semi-autonomous and combined form; thus cells contain free atoms
as well as atoms within molecules; free molecules as well as
molecules within polymers; and so on. The same is true of
organisms. Within the organism are free cells, such as red blood
cells, and cells combined into tissues; simple tissues, as well
as simple tissues combined into higher-order tissues; and so on
for other social holons. Thus the organism transcends its
biological stages in the same way that the cell transcends its
physical stages. In both cases, all the properties of the lower
holons are preserved (in autonomous forms of these holons),
alongside the emergence of entirely new properties.
Conversely, higher-order multicellular organizations do not
preserve all the properties of their individual cells. Thus the
ability of cells within tissues and organs to grow and divide is
regulated by the tissue or organ.
We also saw in Chapter 2 that the emergent
properties of the higher physical stages can be understood in
terms of new dimensions, of both space and time. A small
molecule exists in one more dimension than an atom; a polymer
exists in two or three dimensions; and still more complex forms
of molecules may exist in one or more dimensions of time as well
as space. The higher biological stages can be viewed in the same
manner. A complex cell unit, for example, is a one-dimensional
array of cells, while tissues and organs can be understood in
terms of two or three dimensions.
The dimensions of time of higher biological
organization may be understood in several ways. Most basically,
time is inherent in the process of cell turnover; any biological
tissue or organ is constantly undergoing a process of
self-renewal, in which cells die and are replaced by new ones.
This renewal process gives the stage one or more dimensions of
existence in time as well as in space. In fact, the dimensions
of time and space are somewhat interchangeable. A simple cell
unit, for example, can be understood as an array of cells in
space, or as the life of a single cell--as it reproduces itself
and forms progressively more cells--over time. In Raff's words,
"a cell lineage can be considered a temporally connected series
of cellular modules." [9]
In the brain, the most complex and highly
developed biological stage, most cells don't divide, though
there are some exceptions (Lichtmann 1999; Oppenheim 1999). In
this case, however, the temporal dimensions of holons can be
understood in terms of patterns of electrical activity, which
change in time as well as in space. When we think, remember,
feel or express emotions, or engage in certain physical activities,
certain patterns of nervous activity occur in the brain, which
science is now beginning to explore using procedures that follow
the metabolic activity of the cells involved (Magistretti 1999).
These patterns result from the synaptic connections between
neurons, which enable them to communicate with each other in
complex networks.
As with higher-order physical stages, the
emergent properties of higher-order biological stages result
directly from the new dimensions in which they exist. These
emergent properties include not only the various well-known
functions of tissues and organs--digestion, circulation, and so
forth--but greater stability and lifetime. Thus tissues and
organs have a length of existence that far exceeds that of their
individual cells; while the cells die, they are replaced by new
ones that sustain the tissue or organ. For the same reason,
multicellular holons are stable to the removal of a few of their
components.
In the brain, where most cells don't die, we
can't talk about different lifetimes, but we can still observe
that higher-order holons function over longer periods of time.
For example, groups of highly interconnected cells in the
cerebral cortex can function as units which have a much longer
duration of electrical activity than individual cells, due to
the ability of neurons to excite one another through recurrent
loops (Amari, 1977; Lu et al. 1992). Thus the existence of these
multicellular stages has an extension in time that individual
cells lack. As a general rule, the larger the holon in the
brain, the longer it may maintain a particular pattern of
electrical activity. In this sense, it is both more stable and
longer-living than its individual cells.
Finally, we note that cells within
higher-order holons can have higher-order properties. That is,
they can participate in the existence of higher dimensions, just
as atoms in large molecular structures can. I discussed some
examples of these properties earlier. Individual cells, when
part of a tissue, can communicate with one another in ways that
independent cells cannot. Chemical factors released or presented
by one cell can regulate the growth or physiological activity of
adjacent or even distant cells in very specific ways (Becker and
Deamer 1991). Such regulation is not possible, or possible only
to a very limited extent, by interacting cells outside of
organisms. When cells are part of the brain, the highest and
most developed organ, they may take on still more
sophisticated properties. Certain identifiable neurons in the
visual cortex can respond to relatively complex stimuli, such as
specific visual patterns (Wiesel and Hubel 1963; Crick and Koch,
1994). Their ability to do this depends on the existence of a
complex network of connections with other neurons, and for just
this reason, no cell existing outside of an organism could
exhibit this kind of behavior.
Indeed, the process of perception offers one
of the clearest examples of how emergent features of existence
operate in different dimensions, and provides early hints of the
most important and mysterious of all properties, consciousness.
In Chapter 2 I suggested that we might define a level of
existence in terms of six dimensions, three of space and three
of time. In addition to functioning in a certain set of
dimensions, a holon also has some kind of experience of
these dimensions. For example, zero-dimensional experience sees
the world as a point; that is, it views itself as everything
there is. One-dimensional experience can make a distinction
between self and other; it sees itself as a point, but
has some knowledge of existence beyond this point.
Zero- and one-dimensional experience are
typical of autonomous cells, that is, unicellular organisms.
Cells within organisms, in contrast, may have an experience of
higher dimensions [10].
This is particularly well-illustrated by cells in the visual
pathway, where a holarchy of awareness can be observed (Baron
1987; Reid 1999). This pathway begins with the retina of the
eye, where cells typically respond in a simple on or off fashion
to flashes of light; that is, they increase their electrical
activity when a light either comes on or goes off, depending on
the particular cell. This indiscriminate response to light is
one-dimensional, or self-other, experience. Many neurons in the
visual cortex of the brain, in contrast, can respond to lines of
different orientations. So-called simple cells respond to a line
of a characteristic angle, each cell responding to a line of a
different angle. This is two-dimensional experience. Complex
cells also respond to oriented lines, but unlike simple cells,
can respond to a line present anywhere in the visual field. Thus
they have some three-dimensional experience. Still other cells
in the visual cortex can respond to lines that move in different
directions. This kind of perception requires some awareness of
time, so such cells exhibit some degree of four-dimensional
experience [11].
As I pointed out earlier, another example of multi-dimensional
experience is the ability of certain cells to learn, as this
involves discrimination of a temporal dimension to a stimulus.
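A highly simplified sketch of this visual holarchy, using a tiny binary "retina": a retinal cell fires to light at a single point, a simple cell to a vertical line at one position, and a complex cell to a vertical line anywhere in the field. Receptive-field sizes and thresholds are arbitrary assumptions.

    # Toy visual hierarchy over a 4x4 binary "retina".
    image = [
        [0, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
    ]

    def retinal_cell(img, row, col):
        """On/off response: fires to light at a single point (self/other)."""
        return img[row][col] > 0

    def simple_cell(img, col):
        """Fires to a vertical line at one particular position (orientation)."""
        return all(img[row][col] for row in range(3))

    def complex_cell(img):
        """Fires to a vertical line anywhere in the field (position-invariant)."""
        return any(simple_cell(img, col) for col in range(4))

    print(retinal_cell(image, 0, 1))   # True: light at that point
    print(simple_cell(image, 1))       # True: vertical line at column 1
    print(simple_cell(image, 2))       # False: no line at column 2
    print(complex_cell(image))         # True: a vertical line somewhere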
I want to reiterate that when I speak of
dimensions, I'm not necessarily using them in the strict
mathematical sense, in which a higher dimension has a
relationship of infinity to a lower dimension. Dimensions as
defined here emerge through the repetition of holons on one
stage of existence, forming a new and higher stage. The reader
should also remember that any set of dimensions is relative to
one particular level. Though a cell may have zero-dimensional
perception on the biological level, it's a six-dimensional holon
(or at any rate, contains numerous dimensions) relative to its
component atoms.
In the next chapter, we will see that
organisms, too, differ in the number of dimensions in which they
perceive the world, and this number is likewise related to the
stage on their level of existence, the mental, with which they
are associated. I regard this dimensionality of perception as
extremely significant, because it suggests that holons on
different levels of the holarchy are analogous with respect to
not only what might be called their exterior
properties--how they appear from the outside--but also their
interior properties, what they actually experience (Wilber
1995; the distinction between exterior and interior will be
discussed in more detail in the next chapter). Some theorists
have protested that whatever laws or rules we determine from
observing lower forms of life will be totally inapplicable to
the latter kind of properties. "There is nothing in [lower level
principles] that will tell us how to resolve an Oedipus complex,
or why pride can be wounded, or what honor means, or why life is
worth living," insists Ken Wilber [12].
Likewise, in assessing the relevance of a model based on certain
observations of physical systems to human societies, Rupert
Sheldrake comments:
"A mathematical model of urbanization
may shed light on the factors affecting the rate of
urban growth, but it cannot account for the different
architectural styles, cultures and religions found in,
say, Brazilian and Indian cities." [13]
To the extent that architecture, culture and
religion are interior properties--that is, an expression of the
thoughts and feelings of people--we would not expect most models
derived from physical or even biological systems to apply to
them, because such models are usually based on exterior
properties of holons. That is, when scientists study atoms,
molecules and cells, they generally only observe their exterior
properties. Therefore, any model based on such observations can
only apply to exterior properties on higher levels, as Sheldrake
suggests it might indeed do.
This does not mean, however, that
there are no lower-level analogies applicable to our
thoughts or feelings. It means that we would have to search for
such analogs among the interior properties of lower
levels--in other words, in the way these lower level holons
perceive their world. The notion that perception can have
differing degrees of dimension is, I think, a starting point for
such a search; it is a way of getting at the question of the
structure of the experience of lower forms of life. Further
insight, perhaps, might come from understanding the quality
of experience lower level holons have when they perceive their
world [14].
This experience is obviously not the usual subject of scientific
study, and perhaps is not even possible to access. However, it
might be possible under certain conditions, for example, through
the use of drugs that inactivate our higher level mental
structures (see also the discussion in Chapter 7).
On the other hand, to the extent that human
culture consists of exterior forms, a model based on lower forms
of life might apply. Consider architectural styles, for example.
A model based on cell phenomena might very well have some
analogs at this level. If we haven't seen them, it's because we
haven't looked for them. If the only thing that seems to be
analogous in the particular model Sheldrake was discussing is
urban growth, it's presumably because growth is all the model
was meant to apply to, on any level. A study of physical
or biological processes that focussed on diversity of shapes
might well find some meaningful analogies between holons on
these levels and architectural designs. Certainly the
development of architecture has followed some evolutionary
principles that apply to lower forms of life [15].
Why Holarchy?
We have now seen that both cells and
organisms are holarchically organized; they are holons
containing several stages of lower-order holons within
themselves. In the next two chapters, we will see that still
higher levels of existence have a similar organization. Before
we proceed further, though, we might ask, why is holarchical
organization such a pervading theme in nature? Why are the same
kinds of relationships found again and again?
The short answer is because holarchy is a
very efficient means of organization; that is, it's a way of
creating a tremendous amount of variety and novelty from a
relatively few basic plans, structures or processes. Many
biologists have pointed out that both molecular structures,
within the cell, and multicellular arrays, within the organism,
function to a great extent as interchangeable parts, or modules,
which can be combined in many different ways (Raff 1996).
"Relatively few medium-sized molecules are made by living
cells,"
16
notes Francis Crick, because they are require more synthetic
steps than either small molecules, or large polymers.
Modern human technology uses the same
principle. When a new car is designed, for example, the designer
does not create an entirely new vehicle from scratch; rather,
she combines established parts or themes--four wheels, front and
back seats, head and tail lights, engine, transmission, and so
on--in a new way. Just as there are a rather small number of
different car designs, compared to the total number of cars,
there are a relatively small number of body types, or bauplans
(about 30) relative to the immense number of organisms
(Campbell et al. 1999). But an immense amount of variety is
still possible within these basic plans. So while many questions
remain about their origin (Mayr 1988), it's fairly clear that
once they were established, it became easier for life to adapt
by making modifications in them, rather than designing new ones
from scratch.
Perhaps the most dramatic example of the
great variety that can be generated from simplicity in this
manner is the protein molecule. Proteins, as I discussed in
Chapter 2, are composed of amino acids, of which there are about
twenty in nature. Even a very small protein, containing just one
hundred amino acids, can therefore exist in 20^100 (about
10^130) varieties! Most of these possibilities are never
created--indeed, this number is greater than the total number of
atoms in the universe--but it illustrates the awesome amount of
novelty that can be created from a few interchangeable parts.
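The arithmetic behind the claim can be checked directly:

    # 20 possible amino acids at each of 100 positions.
    import math
    varieties = 20 ** 100
    print(len(str(varieties)))            # 131 digits
    print(round(math.log10(varieties)))   # ~130: roughly 10^130 possible sequences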
Physical and biological holons, then, are
something like the familiar children's lego set, in which a few
building blocks are combined in diverse ways. But unlike lego
units, which are simple physical structures, physical and
biological holons, as we have seen, have dimensions in time as
well as space. Furthermore, in addition to the simple contact
interactions of lego blocks, physical and especially biological
holons can be connected through a variety of different
processes, again, over time as well as space. These added
factors greatly multiply the variety that holarchical
organization can produce.
Holarchical organization, in short, is a way
of maximizing efficiency, of creating the most from the least.
Indeed, I believe that efficiency, properly understood, is as
good a candidate as any for a central organizing principle of
the universe. It comes into play at all levels of existence. The
physical chemist Manfred Eigen, whose hypercycles of mutually
catalyzing metabolic reactions we will consider in the
discussion of evolution in Part 2, notes that such
self-organizing processes minimize energy usage by recycling the
products of one reaction as starting materials of another (Eigen
1993; Lee 1997). The same principle was articulated by Alfred
Lotka, a founding father of the science of ecology and the first
scientist to treat populations and communities of organisms as
thermodynamic systems:
"Evolution proceeds in such direction
as to make the total energy flux through the system a
maximum compatible with the constraints of the system." [17]
In other words, the system organizes itself
so that everything is used, nothing is wasted.
Why is efficiency so critical? As I discussed
earlier in this chapter, the organization of physical and
biological holons is guided by transcription and translation of
information contained in the genome. Transcription of the genome
enables the cell to reproduce itself; translation of this
information enables it to create a new organism. The latter
process, in particular, poses a major problem. The amount of
genetic information may seem vast, but it really is nowhere
near adequate, by itself, to account for all the information
that an organism effectively represents (Riedl 1978).
Consider: there are roughly one hundred
thousand genes in the chromosomes of even complex organisms like
ourselves, but billions of cells that must be arranged in a
precise order. Obviously, if every cell-cell interaction had to
be specified by even one gene, let alone by many genes,
organisms in anything like the form we know them would not be possible.
The amount of genetic material required would dwarf the
resources of the cell. Organisms become possible because genes
do not embody the detailed plan of the organism, but rather a
few fundamental units--different kinds of cells--and a
relatively few rules that determine how those parts will be
assembled. Molecular biologists are gradually learning about
these rules as they study how a few coding sequences of DNA,
called regulatory genes, control the expression of many other
genes during development (Dulbecco 1987; Kauffman 1993; Raff
1996; Gearhart and Kirschner 1999).
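The logic of "a few rules rather than a detailed plan" can be sketched as follows; the rules and cell types are purely hypothetical placeholders, loosely in the spirit of regulatory genes switching batteries of other genes on during development.

    # A handful of recursive "developmental rules" specifies far more structure
    # than it could list explicitly. Rules and cell types are hypothetical.
    RULES = {
        "body":    ["segment"] * 8,
        "segment": ["muscle", "nerve", "skin", "skin"],
        "skin":    ["epithelial-cell"] * 4,
        "muscle":  ["muscle-cell"] * 6,
        "nerve":   ["neuron"] * 3,
    }

    def develop(part):
        """Recursively expand a part until only terminal cell types remain."""
        if part not in RULES:
            return [part]
        cells = []
        for subpart in RULES[part]:
            cells.extend(develop(subpart))
        return cells

    organism = develop("body")
    print(len(RULES), "rules specify", len(organism), "cells")   # 5 rules -> 136 cells

The disproportion only grows as the rules nest more deeply, which is the sense in which the genome can be a compressed description of the organism.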
A fundamental advantage of holarchical
structure, therefore, is that it can express, or unfold, a very
large amount of information from a much smaller amount of
information. Information that can be reduced in this manner is
said to be compressible. The concept of compressibility
was originally developed by the mathematician Gregory Chaitin
(1988, 1999) as a way of determining whether computer programs
could be streamlined, that is, written with less code. Most
computer programs proceed by making a series of logical, or
"either-or" choices. As the mathematician Alan Turing showed,
any program of this kind can therefore be represented by a
string of digits consisting of just "0's" and "1's" (Hofstadter
1979). If the sequence of 0's and 1's is random--like that which
would be generated by flipping a coin, for example, with heads
considered 1 and tails 0--there is no way that the information
it contains can be conveyed in any message shorter than the
entire string itself. Some strings, however, have certain
patterns, such as the one that goes 01010101... All the
information present in this string can be conveyed by a shorter
string or program that says, in effect, "repeat '01' n times".
The information in such a string or program is therefore said to
be compressible.
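A short Python sketch can make the contrast concrete. It uses
zlib compression as a rough, practical stand-in for Chaitin's
algorithmic measure (which is not itself computable); the
string length of ten thousand digits is an arbitrary choice for
illustration.

    import random, zlib

    n = 10_000
    patterned = "01" * (n // 2)                                  # the "010101..." string
    coin_flips = "".join(random.choice("01") for _ in range(n))  # heads = 1, tails = 0

    # The patterned string collapses to a few dozen bytes; the
    # coin-flip string cannot be squeezed much below one bit per flip.
    print(len(zlib.compress(patterned.encode())))
    print(len(zlib.compress(coin_flips.encode())))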
Chaitin's theory (called algorithmic
information theory, or AIT) was originally used to demonstrate
that randomness is a problem in a wider variety of mathematical
operations than was previously appreciated. But it may have
important implications for our understanding of living things,
and particularly in the holarchical model of existence. As I
noted above, it's clear that the information present in an
organism is compressed in its genome. While all the information
in an organism could be stored as a complex description of every
cell's interaction with every other cell, the genome compresses
this information into a few thousand sequences of nucleotide
bases, and perhaps a few dozen or a few hundred rules governing when,
where and how these genes are to be expressed. This
compressibility, much like that of Chaitin's strings, results
from the highly repetitive, patterned nature of life that is the
hallmark of holarchy. Thus protein molecules are composed of
repeating units of amino acids; tissues are composed of
repeating units of cells. The information needed to specify such
patterned interactions can obviously be compressed greatly,
compared with a program that specifies each and every interaction.
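As a toy illustration of how compact rules unfold into large
structures, the Python sketch below grows an ordered string of
some seventeen thousand "cells" from a "genome" of just two
unit types and two rewrite rules. The rules are invented purely
for illustration and are not a biological model.

    # Two "cell types" and two rewrite rules stand in for a compact genome.
    rules = {"A": "AB", "B": "A"}

    tissue = "A"
    for _ in range(20):                        # twenty rounds of "development"
        tissue = "".join(rules[c] for c in tissue)

    print(len(rules), "rules unfold into", len(tissue), "cells")  # 2 rules -> 17,711 cells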
The compressibility of the information
represented in the whole organism is therefore evidence of the
efficiency of holarchical development. But what about the
information in the genome itself? Could that be further
compressed? In other words, could we in principle design a set
of genes that could generate a real organism, using fewer
nucleotide bases than are actually present in the genome of that
organism? The available evidence suggests we probably could. We
know, to begin with, that most of the DNA in the genome does not
code for any protein. Most coding sequences are broken up by
non-coding sequences, sometimes immensely long, called introns.
In most cases these introns have no known function, though some
scientists speculate that they may play a role in creating
chromosomal structure (Brown 1999). Their presence also makes it
possible for different coding sequences to combine into new
kinds of proteins, which may have been vital to the early
evolution of variety (Gilbert 1987; Gilbert and Glynias 1993).
Very recently, it has been suggested that introns may contain
information in the same sense that coding sequences (exons) do
(Flam 1994; Moore 1996).
Another observation strongly suggesting that
the information in the genome is not completely compressed is
the apparent redundancy of many genes. If every gene
expressed a unique and critical piece of information for the
organism, we might expect that mutation or elimination of that
gene would result in the death of the organism, or at least in a
major change in some function. Studies have shown, however, that
not all genes are critical in this sense. While the exact
proportion of vital genes in humans is not known, it has been
estimated to be around 40% (Lewin 1997). This doesn't mean that
the other 60% of the genes could be eliminated en masse,
but it does indicate that many genes don't do anything that is
not done by some other gene.
The lack of complete compressibility of the
genome, in the view of many scientists, is a powerful argument
that the genome, at least in its modern form, is a product of
Darwinian evolution. I will discuss evolution in detail in Part
2, but here I want to point out that because Darwinism does have
a strong element of chance, it may create forms of existence
that are not the most efficient, at least from our point of
view. Furthermore, because this kind of evolution frequently,
though perhaps not always, proceeds in small steps, it becomes
committed to certain directions; as life becomes more complex,
it becomes more difficult for it to back up, so to speak, and
move in a new direction.
Information, Energy and Complexity
Another very significant implication of
Chaitin's work is that it may enable us to integrate three key
concepts that appear again and again in discussions of
holarchical relationships: complexity, information and energy.
Most holarchical theorists assume that higher forms of life are
more complex than lower. Thus a cell is said to be more complex
than an atom or molecule, and a multicellular organism more
complex than a cell. Complexity, as applied to holons in this
way, is virtually synonymous with "level (or stage) of
existence". We say that one form of life is more complex than
another when it is higher than the latter, and includes the
lower within it. The higher is by definition more complex than
the lower.
We have seen that energy, too, is
closely associated with holarchical status. As I noted earlier,
higher-order holons contain more energy than lower ones. Thus
energy is required to synthesize larger molecules (such as
proteins) from smaller ones (such as amino acids), while energy
is provided by breaking down larger molecules (such as sugar)
into smaller ones (carbon dioxide and water). Likewise, energy
is required for creating tissues and other multicellular arrays
from cells.
More specifically, energy seems to be
synonymous with the heterarchical interactions formed between
fundamental holons. These interactions can be said to store the
energy, or to make it manifest in a structural form. At its most
fundamental physical level (or subphysical in this model of the
holarchy), energy is embodied in the interactions of subatomic
particles. Within the cell energy is represented by chemical
bonds between atoms; in the organism, by cellular interactions
in tissues and organs. On a still higher level, as we shall see
in the following chapter, energy is embodied in the interactions
between organisms, and particularly between human beings.
Into this mix, Chaitin brings his concept of
information. We have already seen that in AIT, a noncompressible
string or program is considered to have more information than a
compressible one of the same length. This information, in AIT,
is directly related to complexity. Thus Chaitin defines the
complexity of a binary string as "the minimum quantity of
information needed to define the string."
18
So in a general sense, complexity, energy, and information seem
to parallel each other, each increasing as the holarchy is
ascended. These are all candidate concepts, along with the
concept of freedom introduced in Chapter 2, by which to define, and
perhaps even measure, what it means to be higher in the
holarchy.
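In symbols, the standard formulation of this definition from
algorithmic information theory (a textbook gloss, not a
quotation from Chaitin) is

    K(s) = min{ |p| : U(p) = s }

where U is a fixed universal computer, p ranges over programs,
and |p| is the length of p in bits. A string s is
incompressible, or random in Chaitin's sense, when K(s) is
close to the length of s itself.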
However, the relationship is almost certainly
more complicated than this. Information, in Chaitin's terms, is
associated with randomness. A completely random sequence of
digits--or of anything else--is considered to have more
information, and more complexity, than a patterned one of the
same length. This seems counter-intuitive. When we see a random
string of digits, we think of it as having no information at
all, whereas a highly patterned string does seem to have
information. Or, to put the notion in more familiar terms, when
we speak or write language, which is based on certain rules
which give our words some pattern, we transmit information,
whereas there seems to be no information at all in a purely
random sequence of words or letters (gibberish). Likewise,
randomness in physics is conventionally associated with entropy,
a lack of complexity or order. A random mixture of two gases
such as nitrogen and oxygen, for example, has more entropy than
two separated quantities of pure oxygen and pure nitrogen.
The reason for this apparent discrepancy
between our common-sense (and indeed, conventional scientific)
understanding of randomness, on the one hand, and Chaitin's
definition, on the other, is that Chaitin is considering one
particular random state, while we are considering all
possible states. Consider again a random sequence of words. What
we normally mean by this notion is any random sequence of
words--that is, any one of a very large number of
sequences, each of which is random. By Chaitin's definition, the
information involved in expressing this notion is indeed highly
compressible. That is, a much shorter string or program could be
written to express the notion of any random sequence. The
program would simply say: generate all possible strings of
such-and-such a length. So randomness in this sense does have
low informational content, and low complexity, in Chaitin's AIT.
Chaitin, however, is primarily concerned with
the informational content of one particular random sequence of
words. This is not really "gibberish" or "nonsense" because to
pick out one particular such sequence out of the astronomically
large number of possible random sequences is not what we
normally call a random process. In order to define such a
sequence, a great deal of information would be necessary.
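The contrast between the two notions can be sketched in a few
lines of Python; this is an illustrative toy, with the
particular string chosen arbitrarily.

    from itertools import product

    # The *notion* "every binary string of length n" is highly
    # compressible: this short generator is its whole description,
    # whatever n happens to be.
    def all_strings(n):
        for bits in product("01", repeat=n):
            yield "".join(bits)

    # But singling out ONE particular random member of that set needs,
    # in general, a description about as long as the string itself --
    # here, nothing shorter than listing all sixteen digits.
    particular = "1101001010011101"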
Nevertheless, some information theorists have
argued that complexity is not a simple function of randomness,
even understood in this manner. Charles Bennett has attempted to
define complexity in terms of something he calls "logical depth"
(Davies 1992; Norretranders 1998). In Bennett's view, complexity
is directly related neither to the degree of randomness nor to
the degree of order, but emerges in an area somewhere between them.
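Informally, and as a paraphrase rather than Bennett's exact
words, logical depth can be stated as

    depth(s) = the running time of the shortest (or near-shortest)
               program that outputs s

A patterned string such as 010101... is shallow, because its
short program runs quickly; a purely random string is also
shallow, because its shortest "program" is essentially a copy
of the string; a deep string is one whose compact description
takes a long time to unfold, which is just the middle ground
Bennett has in mind.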
Perhaps the best example of complexity in
this sense is provided by language. If our language were
perfectly ordered, it would consist of, say, a single letter,
repeated a number of times. If it were perfectly disordered, it
would consist of every possible combination of letters. Neither
type of language, obviously, is practical. A language with high
order doesn't have enough information; if all words are spelled
with the same letter, we can't create enough words. A language
with high disorder, however, would be too difficult to use. For
one thing, many words would be unpronounceable; for another,
there would be more words than anyone could possibly remember.
So human language can be thought of as a kind of compromise
between randomness and order. Words can be composed of many
different combinations of letters, but there are some rules,
which provide a degree of order. Such a compromise, Bennett
believes, is what true complexity is all about.
Another example of such a compromise, on a
lower level of existence, is provided by protein molecules. If
proteins were highly ordered, they would all consist of one kind
of amino acid; if they were completely random, they would
consist of any possible combination of the twenty or so amino
acids found in nature. Again, both ends of this spectrum are
impractical. There is not enough information in proteins
composed of just one amino acid, and there is too much
information in proteins that are completely random. Real
proteins are composed of many different kinds of amino acids,
but they are probably not completely random. They must fold up
into highly specific three-dimensional shapes to carry out their
functions, and the interactions underlying such folding put some
constraints on the kinds of amino acids that can be present in
particular positions of the sequence. Most proteins that have
been identified fall into one of probably no more than a few
thousand "superfamilies", members of which share many amino
acids in common (Creighton 1993).
Bennett's view of complexity is thus a
common-sense one; it seems to reflect what we intuitively mean
when we say something is complex. Roughly speaking, complexity
corresponds to what we call "useful information", while the
rest of the information we would call "junk". Another way
of putting it is to say that complexity is information
constrained by order.
Like many definitions based on common sense,
though, this one runs into problems when it's treated
rigorously. The only way that Bennett has come up with to
quantify his complexity is by the amount of time or effort
that it takes to compress or filter information--to take out all
the "junk", so to speak, and order the remainder into useful
information. For example, one would measure the complexity of a
book not by counting the bits of information it contained, but
by the time or effort it took the author to write it. If
complexity is defined in terms of time or effort, however, it
can only be measured empirically, that is, by observing the
information-containing entity as it actually comes into
existence. This may be possible for a book, but what about an
evolving cell or organism? How are we going to measure its
complexity?
Even in the case of the book, a problem
arises, in that there is always the question of whether the time
or effort it took to create a certain pattern of information
was the least possible. Two people might be capable of
writing the same book using greatly different amounts of time
and effort. Even if we somehow defined a "standard" author, we
could never be sure that she couldn't have written the book
faster or more easily, using some different approach.
One very important idea that has emerged from
the work of Bennett and others, though, is that the discarding
or erasing of information may be as critical to the
understanding of complexity as information acquisition. If
complexity lies between maximum information and maximum order,
then to convert information to complexity, it seems that we have
to throw out a lot of junk. The Danish writer Tor Norretranders,
whose book The User Illusion (1998) explores this idea in
detail, notes that we human information processors are conscious
of but a vanishingly small fraction of all the information that
impinges on our senses. Our mental activity thus results from
what Norretranders calls exformation, a process of discarding
information. Extrapolating to other levels of existence, we
might argue that lower forms of life faced a similar problem.
I will discuss the concept of information
further in Chapter 8, when we examine evolution; and I will
return to the subject of consciousness in Chapters 4 and 5. For
now, I want to emphasize that information, as it becomes better
defined, is poised to play an extremely powerful role in our
understanding of life. Some theorists have suggested that
information may be a fundamental feature of the universe
(Chalmers 1996; Davies 1999), and that a better grasp of it
could help us fill in the critical gaps in our understanding of
how evolution occurred. Our ability to measure, in a meaningful
way, the amount of information in the genome seems a very
reasonable possibility following the total sequencing of it,
which will be completed in a few years (Cooper 1994; Norman
1999; Haddad et al. 1999). This may allow us to understand
evolution in a new and powerful way.