Chapter 4: Free Will in a Hierarchic Context by Arthur Koestler, Responses by Charles Hartshorne and Bernhard Rensch

Arthur Koestler has published 30 books, ranging from political novels to theoretical works on creativity, evolution and parapsychology. His works are now being republished in a collected edition (The Danube Edition) by Hutchinson in London and by Macmillan and Random House in New York.



It is a common experience to hear or read a sentence which remains implanted in your memory like a seed beneath the snow. Such an implantation happened to me many years ago when I read Bergson’s phrase: "The unconsciousness of a falling stone is something quite different from the unconsciousness of a growing cabbage." Comparing this to the Cartesian brand of dualism, which restricts consciousness to human thinking, we are faced with two diametrically opposed conceptions: the first implies pan-psychism in one form or another, a continuum stretching from the rudiments of sentience in the cabbage to human self-awareness; the second places a kind of iron curtain between matter and mind.

This curtain seemed to be raised by a few inches in the nineteen twenties, in those heroic days when de Broglie and Schroedinger de-materialized matter like the stage magician who makes the lady vanish from the box, while Heisenberg (1969) eased her out of the straitjacket of determinism and proclaimed that the principle of complementarity agreed "very nicely" with the mind-body dualism -- the implication being that the particle aspect of the electron was analogous to the body, its wave aspect to the mind.

This led to a quasi-schizophrenic split in the Zeitgeist to which David Bohm has also alluded in his paper: I mean the contrast between the developments in physics on the one hand, and in academic psychology on the other. While the dominant school in physics emphasized the unpredictability of the fundamental processes in nature, the dominant school in psychology proclaimed its programme "to predict and to control human activity" (Watson 1925, p. 11). At the time when Heisenberg (1969, pp. 63-64) wrote that "when we descend to the atomic level, the objective world in space and time no longer exists" -- at the same time Skinner (1953, pp. 30-31) proclaimed that "since mental events . . . are asserted to lack the dimensions of physical science, we have an additional reason for rejecting them." And when Eddington coined his celebrated aphorism "the stuff of the world is mind-stuff," the Behaviourists declared the mind to be a dirty four-letter word.

Now the Zeitgeist seems to be moving away from naive Reductionism, but as yet no coherent alternative system seems to be in sight. The indeterminacy of the micro-level cannot be transferred, through a short-cut, to the macro-level. God may be playing dice with the world, but He sees to it that it should be a fair game where the capers of the dice must conform to the Gaussian curve or the Poisson distribution. This may be an even more miraculous achievement than to control the balls on the Newtonian billiard table -- John von Neumann called the laws of probability sheer black magic. But they nevertheless prevent rash extrapolations from the indeterminacy on the quantum level to the robust determinacy of our everyday world.

A possible way out of this cul-de-sac is offered by the concept of multi-leveled hierarchic organisation. It seems to provide the missing link between pan-psychism and Cartesianism. The hierarchic model replaces the pan-psychist’s continuously ascending curve from protoplasmic to human consciousness by a series of discrete steps -- a staircase instead of a slope; and replaces the single Cartesian discontinuity by a series of transitions.

In his paper on biological hierarchy, H. H. Pattee (1970, pp. 117-136) wrote that hierarchic organisation is the central problem whether we consider the origin of life, phylogeny or ontogeny. He went on: "It is the central problem of the brain, when there appears to be an unlimited possibility for new hierarchic levels of description. These are all problems of hierarchical organisation. Theoretical biology must face this problem as fundamental, since hierarchical control is the essential and distinguishing characteristic of life." Or, to quote Paul Weiss (1969, pp. 3f.): "The phenomenon of hierarchical structure is a real one, presented to us by the biological object, and not the fiction of one speculative mind." It is at the same time a conceptual tool, a way of thinking, an alternative to the linear chaining of events torn from their multi-leveled context -- like a thread extracted from a Persian carpet and suspended in a vacuum. One might say that hierarchical thinking relates to linear causality as the theory of algebraic functions relates to elementary arithmetic.

At this point I must introduce a bit of jargon, essential to the particular conception of hierarchy that I have described elsewhere (1967) and summarised in a paper read at the symposium "Beyond Reductionism." If I may quote very briefly from it (Koestler 1970, pp. 192f.):

The organism in its structural aspects is not an aggregation of elementary parts, and in its functional aspects not a chain of elementary units of behaviour; it is to be regarded as a multi-leveled hierarchy of semi-autonomous [i.e., self-regulating] sub-wholes, branching into sub-wholes of a lower order, and so on. Sub-wholes on any level of the hierarchy are referred to as holons. Parts and wholes in an absolute sense (as envisaged respectively by Behaviourists and Gestaltists) do not exist in the domains of life. The holon, as its name indicates, derived from the Greek holos, whole, with the suffix on, indicating a particle or part, is meant to be descriptive of a Janus-faced entity which displays both the quasi-autonomous, self-governing properties of wholes and the dependent properties of parts. The term holon may be applied to any stable sub-whole in a biological, social or cognitive hierarchy which displays rule-governed behaviour and/or structural Gestalt constancy.

I must refrain from going further into the cognitive framework of this particular model and confine myself to those basic aspects of the hierarchies of behaviour which are relevant to our present context.

All instinctive and learnt behaviour may be described in terms of hierarchies of sub-skills or subroutines, i.e., behavioural holons. The activities of these holons are rule-governed, but also adaptable within the constraints imposed by the rules. At this point it is essential to make a categorical distinction between (a) the fixed rules, and (b) the flexible strategies which between them control all organised activities of animal and man, regardless of whether we consider instinctive behaviour, sensory-motor skills, or creative problem solving. It seems that the neglect of this fundamental distinction between invariant rules and variable strategies has led to much confusion in psychology, although the distinction becomes obvious when we turn to concrete examples. Take two creatures at opposite ends of the scale: a common spider spinning its web and a chess champion pondering his next move. The chess-player is controlled by the fixed rules of the game which define the permissible moves, but at the same time he is applying flexible strategies in his search for the most promising move among the permissible ones, guided by past experience and feedbacks from the environment, i.e., the position on the board. Thus the canon determines the rules of the game, strategy decides the actual course of the game. Now, the common spider too employs flexible strategies -- it will suspend its web from three, four, or more points of attachment, according to the lay of the land, but it will always arrive at the well-known symmetrical pattern where the radial threads bisect the laterals at equal angles according to genetically fixed rules of the game.
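The distinction between fixed rules and flexible strategies can be made concrete in a few lines of code. The sketch below is my own illustration, not anything from the text; it uses noughts and crosses since the essay itself invokes that game. The rules define the set of permissible moves; the strategy, which could be swapped for any other, merely chooses among them.

```python
# Illustration (mine, not Koestler's) of fixed rules versus flexible
# strategies, using noughts and crosses. The board is a list of nine
# cells, read row by row.

def legal_moves(board):
    """The fixed 'canon': the rules of the game admit only empty squares."""
    return [i for i, cell in enumerate(board) if cell == " "]

def centre_first_strategy(board):
    """One flexible strategy among many: take the centre if free,
    otherwise the first free square. It chooses only among the moves
    the rules permit."""
    moves = legal_moves(board)
    return 4 if 4 in moves else moves[0]

board = ["X", " ", " ",
         " ", " ", " ",
         " ", " ", "O"]
move = centre_first_strategy(board)
print(move)  # -> 4, the free centre square
```

Any number of strategies may be substituted without touching `legal_moves`; that separation is precisely the canon-versus-strategy distinction.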

Let me now turn to the deceptively trivial experience that one and the same activity, such as tying my shoelaces, can be performed automatically, without conscious awareness of my own actions, or accompanied by various degrees of awareness. Driving along a familiar road with little traffic, I can hand control over to the ‘automatic pilot’ in my nervous system and think about eternity. In other words, there is a shift of control of the ongoing activity from higher to lower levels of the hierarchy. Vice versa, overtaking another car requires an upward shift to partial or full alertness and a rapid evaluation of risks which involves higher levels of the cognitive hierarchy.

The acquisition of a new skill, such as touch-typing, requires a high degree of concentration, whereas with increasing mastery and practice my fingers can be left to look after themselves ‘automatically,’ as we say. This condensation of learning transforms mental into mechanical activities -- mind processes into machine processes. It starts with infancy and never stops. To paraphrase Ludwig von Bertalanffy: we are not machines, but most of the day we act as machines. We do so even in such complex activities as adding up a column of figures. Karl Lashley once quoted a colleague of his, a Professor of Psychology, who told him: "When I have to give a lecture I turn my mouth loose and go to sleep."

Thus consciousness may be described in a negative way as that special attribute of an activity which decreases in direct proportion to habit formation. This is less paradoxical than it sounds, if one remembers that Norbert Wiener, following in Schroedinger’s footsteps, defined information as "essentially a negative entropy."

The transformation of learning into routine is accompanied by a dimming of the lights of awareness. We expect therefore that the opposite process will take place when routine is disturbed by running into some unexpected obstacle or problem: that this will cause a sudden switch from ‘mechanical’ to ‘minding’ or ‘mindful’ behaviour. Let a kitten suddenly cross the road on which you have been driving absentmindedly, and your previously absent mind will return in a flash, to take over control, to make a rapid decision whether to run over the kitten or risk the safety of your passengers. What happens in a crisis, or in any less dramatic problem situation involving unexpected, puzzling or discordant experiences, is this sudden shift of control of an ongoing activity to a higher level in the many-leveled hierarchy, from a semi-automatic to a more conscious performance, because the decision to be made or the problem to be solved is beyond the competence of the automatic pilot and must be referred to higher quarters. Needless to say, the implementation of the decision must still rely on automatised sub-routines -- braking or swerving -- which are triggered into activity by the command post.

It would seem that this sudden transfer of control of behaviour from a lower to a higher level of the hierarchy -- analogous to a quantum jump -- is the essence of conscious decision-making and of the subjective experience of free will. Its opposite is the mechanisation of routine, the enslavement to habit. We thus arrive at a dynamic view of a continual two-way traffic up and down in the hierarchy of behaviour.
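This two-way traffic of control can be caricatured in code: routine events are handled low in the hierarchy, and an unforeseen event shifts control upward to a more mindful level. The level names and events below are my own inventions, offered only as a sketch of the idea.

```python
# A caricature (level names and events mine, not Koestler's) of the
# two-way traffic of control: routine is handled low in the hierarchy,
# and an unforeseen event shifts control upward to a more mindful level.

HIERARCHY = ["reflex", "habit", "attention", "deliberation"]  # low -> high

COMPETENCE = {
    "steady road": "habit",
    "overtaking": "attention",
}

def shift(event, current="habit"):
    """Return the level that takes control: the lowest level competent
    to handle the event, but never lower than the current one. Anything
    unforeseen is referred all the way up, to deliberate awareness."""
    target = COMPETENCE.get(event, "deliberation")
    return max(current, target, key=HIERARCHY.index)

print(shift("steady road"))         # -> habit: the automatic pilot keeps control
print(shift("kitten in the road"))  # -> deliberation: control jumps upward
```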

Habit-formation implies a steady downward traffic, as on a moving escalator. There is a positive and a negative aspect to this phenomenon. On the positive side, it conforms to the principle of parsimony, or least action. If the laws of grammar and syntax did not function unconsciously, in the gaps between the words, we could not attend to meaning.

On the negative side, the downward traffic on the escalator tends to turn men into slaves of habit. The mechanisation of behaviour, like rigor mortis, affects first the extremities -- the lowest, subordinate branches of the hierarchy, comparable to the ethologist’s fixed action patterns -- one’s characteristic gait, handwriting and accent of speech. However cleverly you try to disguise your handwriting, you cannot fool the expert. But mechanisation also tends to spread into the cognitive hierarchy, and force the stream of consciousness into fixed riverbeds -- Bergson’s homme automate programmed by a computer with built-in prejudices.

We can now turn from this all too familiar phenomenon to the upward traffic in the hierarchy: the abrupt transfer of the controls of an ongoing activity from a mechanical to a mindful level, and the concomitant experience of free will governing choice and decision. A striking description by Wilder Penfield of an experiment during a brain-operation may be relevant in this context (Penfield 1961):

When the neurosurgeon applies an electrode to the motor area of the patient’s cerebral cortex causing the opposite hand to move, and when he asks the patient why he moved the hand, the response is: "I didn’t do it. You made me do it." . . .

Once when I warned such a patient of my intention to stimulate the motor areas of the cortex, and challenged him to keep his hand from moving when the electrode was applied, he seized it with the other hand and struggled to hold it still. Thus one hand, under the control of the right hemisphere driven by an electrode, and the other hand, which he controlled through the left hemisphere, were caused to struggle against each other. Behind the ‘brain action’ of one hemisphere was the patient’s mind. Behind the action of the other hemisphere was the electrode.

It is interesting to compare the reaction of Penfield’s patient with the reaction of subjects who are made to carry out a post-hypnotic suggestion -- like taking off a shoe while having tea at 5 o’clock in the afternoon. In both cases the subject’s actions have been caused by the experimenter; but whereas the subject who does not know that he is obeying a post-hypnotic command automatically finds a more or less plausible rationalisation why he took off his shoe, Penfield’s patients realise that they are obeying a physical compulsion: "I never had a patient say, ‘I just wanted to do that anyway!’"

One must conclude that the hypnotist’s intervention affects a higher level of the mind-brain hierarchy than the surgeon’s needle. And to convince an opponent in a discussion that his theory is mistaken affects an even higher, more complex level. The step down from the level of reasoned persuasion to the level of hypnotic suggestion, or operant conditioning, means a transition from a highly mental to a comparatively mechanical level, and the same applies to the further step down to the neurosurgeon’s needle. But each transition is a relative, not an absolute, affair. To quote Thorpe (1966) at the Eccles Symposium: "The evidence suggests that at the lower levels of the evolutionary scale consciousness, if it exists, must be of a very generalised kind, so to say unstructured; and that with the development of purposive behaviour and a powerful faculty of attention, consciousness associated with expectation will become more and more vivid and precise."

However, these gradations in ‘structuring, vividness and precision’ exist not only in phylogeny but also among members of the same species at different stages of development and in different situations. Each upward shift in the hierarchy produces more vivid and structured states of consciousness; each downward step is a transition from the mental to the mechanical. Classical dualism knows only a single mind-body barrier. The hierarchic approach implies a serialistic instead of a dualistic view; the transformation of physical into mental events, and vice versa, is effected not by a single leap over the barrier, but by a series of operational steps up or down the hierarchy.

Consider for a moment how we convert variations of air pressure arriving at the ear-drum, which are physical events, into ideas, which are mental events. It is not done in one go. In order to understand language we must perform a series of quantum jumps, so to speak, from one level of the speech hierarchy to the next higher one: phonemes are meaningless and can only be interpreted on the level of morphemes, words must be referred to context, sentences to a larger frame of reference. The spelling out of a previously unverbalised idea or image involves the reverse process: it converts a mental event into the mechanical motions of the organs of speech. This again is achieved by a series of steps, each of which triggers off pre-set neural mechanisms, activates linguistic holons of a more and more automatised type; the canons of grammar and syntax exert their rule unconsciously, automatically; and the last step in articulating the sounds of speech is performed by entirely mechanised patterns of muscle contractions. Chomsky’s psycholinguistic hierarchy was anticipated in A Midsummer Night’s Dream.

As imagination bodies forth

The forms of things unknown, the poet’s pen

Turns them to shapes, and gives to airy nothing

A local habitation and a name.

Each step downward in the conversion of airy nothings into spatial motions of the vocal cords entails a handing over of control to more automatised automatisms: each step upward in the reverse conversion leads to more mentalistic processes of mentation. All this seems to indicate that the mind-machine dichotomy is not localised along a single boundary, but is present on every level of the hierarchy.
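The serial ascent from phonemes to meaning can be pictured as a chain of interpretive steps, each operating on the output of the level below. The toy pipeline below is my own illustration of that step-by-step character, not a model of language; the three steps stand in for the phoneme-to-word, word-to-sentence, and sentence-to-context transitions.

```python
# A toy illustration (assumptions mine, not the essay's) of the serial
# ascent Koestler describes: each level interprets the units of the
# level below it in a wider context, so meaning is reached by steps,
# not in one leap.
from functools import reduce

steps = [
    lambda phonemes: ["".join(p) for p in phonemes],  # phonemes -> words
    lambda words: " ".join(words),                    # words -> sentence
    lambda sentence: sentence.capitalize() + ".",     # sentence -> utterance in context
]

phonemes = [["c", "a", "t"], ["s", "a", "t"]]
utterance = reduce(lambda data, step: step(data), steps, phonemes)
print(utterance)  # -> Cat sat.
```

Spelling out an idea in speech would correspond to running a similar chain in the opposite direction, handing control to ever more automatised levels.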

On this view, the categorical distinction between mind and body fades away and instead of it ‘mental’ and ‘mechanical’ become complementary attributes of behavioural holons on every level. The dominance of one aspect or the other -- whether the holons’ activities are performed mindfully or mechanically -- depends of course partly on the holon’s position or rank within the hierarchy, but also, and perhaps even more, on the momentary flow of traffic in the hierarchy, whether the shifts of control proceed in an upward or downward direction. Thus even the lower reaches of the hierarchy regulated by the autonomic nervous system can apparently be brought under mental control by bio-feedback or meditation; and vice versa, I can perform the supposedly mental activity of reading a printed page -- without taking in a single word.

We are in the habit of talking of mind as if it were a thing, which it is not -- nor is matter, for that matter. Minding, thinking, mentating are processes in a complementary or reciprocal relation to mechanical processes. Let me hasten to say that I am using here the physicist’s term ‘complementary’ as a very vague analogy, and not as an explanation. But as an analogy it has its uses. An electron will behave as a wave or a corpuscle according to circumstances. Andrew Cochran (1972, pp. 235-249) has suggested that the quantum-mechanical wave properties of electrons reflect a rudimentary consciousness of matter, and that the "dual aspects of man are a direct result of the dual aspects of matter." Here Cochran seems to be guilty of the sin which Whitehead called ‘misplaced concreteness,’ but at least the physicist’s perplexities are a comfort to the biologist; and though the complementarity principle leaves a host of problems unanswered, at least it poses a few new questions.

We know that the manifestations of the electron’s wave aspect or corpuscular aspect depend on the experimental set-up. In the cloud chamber it behaves as a corpuscle; in other experiments it is refracted as a wave. With animals and men it also depends to a large extent on the experimental situation -- the contingencies of the environment -- whether their mentalistic or mechanical aspect will dominate. Thus a skill practised and repeated in a monotonous environment in the absence of novelty tends to degenerate into mechanical routine. And vice versa, we have seen that a challenging environment which contains unexpected features or obstacles, and poses new problems which must be referred to higher echelons in the hierarchy, will cause a shift from mechanical to mindful behaviour. Such sudden upward shifts from mechanical routine towards originality and improvisation have been observed by ethologists throughout the animal kingdom from insects upward to birds, rats and chimpanzees. They may be regarded as precursors of human creativity, and point to the existence of unsuspected potentials in the organism which are dormant in the normal routines of existence but emerge in response to new challenges offered by the environment -- a zoological analogy to Toynbee’s paradigm of Challenge and Response.

To return to man: the ‘spelling out’ of an intention -- whether it is the expression of an idea or just the lighting of a cigarette -- is a process of triggering patterns of sub-routines into action, behavioural holons on subordinate levels; it is a process of particularisation of a general intent. On the other hand, the referring of decisions to higher levels is an integrative process which tends to establish a higher degree of coordination and wholeness of experience. Thus every upward shift would represent a quasi-holistic move, every downward shift a particularising move, the former characterised by heightened awareness and mentalistic attributes, the latter by diminishing awareness and mechanistic attributes.

How does free will fit into this tentative model? To repeat what has been said before: the subroutines on which our thoughts and actions are based are governed by fixed rules and more or less flexible strategies. The rules of chess define the permissible moves; strategy determines the choice of the actual move. The question is, how are these strategic choices made? Down on the visceral level, they are guided by the closed feedback loops of homeostatic regulations. Even such a complex sensory-motor skill as riding a bicycle is self-regulating in the sense that the cyclist’s strategy is governed by kinesthetic and visual feedbacks without the necessity of referring decisions to superior levels -- except if the road is barred. But when we enter the higher levels of the hierarchy, the relative importance of rules and strategies undergoes a subtle change. Compare playing noughts and crosses (tick-tack-toe) with playing chess. In both cases my strategic choice of the next move is ‘free’ in the sense of not being determined by the rules of the game. But noughts and crosses offers only a few alternative choices, guided by relatively simple strategies, which can even be codified to form a secondary set of non-statutory constraints, as it were. The chess-player, on the other hand, is guided by considerations on much higher levels of complexity with an incomparably larger variety of alternative choices, that is, more degrees of freedom. Moreover, the strategic precepts which guide his choice again form an ascending hierarchy. On the lowest level are tactical precepts such as occupying the centre squares, avoiding loss of material through forks and pins, protecting the king -- precepts which every duffer can master, but which the master is free to overrule by shifting his attention to the next higher levels of strategy, where material may be sacrificed and the king exposed in an apparently crazy move which, however, is more promising from the viewpoint of the game as a whole. 
Thus in the course of a game decisions have to be constantly referred to higher echelons with more degrees of freedom, and each shift upward is accompanied by a heightening of awareness and the experience of making a free choice. Generally speaking, in these sophisticated domains the constraining code of rules (whether of chess or of the grammar of speech) operates more or less automatically, on unconscious or preconscious levels, whereas the strategic choices are aided by the beam of focal awareness. In writing down a sentence, I do not have to worry about the rules of spelling, but ah, for the choice of the fitting adjective, the mot juste!

To repeat: the degrees of freedom in the hierarchy increase with ascending order, and each upward shift of attention to higher levels, each handing over of decision to higher echelons, is accompanied by the experience of free choice. But is it merely a subjective experience? I think not. After all, freedom cannot be defined in absolute, only in relative, terms, as freedom from some specific constraint. The ordinary prisoner has more freedom than one in solitary confinement; democracy allows more freedom than tyranny; and so on. Similar gradations are found in the multi-leveled hierarchy, where with each step upwards the relative importance of the constraints decreases and the number of choices increases.

However, this model will only be found useful if we assume that the hierarchy is open-ended toward infinite regress, both in the upward and downward direction. And there seems to be some justification for this assumption. Matter is no longer an ultimate concept; the hierarchy of macroscopic, molecular, atomic, subatomic levels trails away without hitting rock-bottom until matter dissolves into patterns of energy-concentration, and then perhaps into tensions in space. In the opposite direction we are faced with the same situation: there is an ascending series of levels, leading from automatic and semi-automatic reactions, through awareness and self-awareness, to the self’s awareness of its awareness of itself, and so on, without hitting a ceiling.

To put it in a different way: the higher the level to which the decision is referred, the less predictable the choice. We tend to assume that the ultimate decision rests with the apex of the hierarchy -- but the apex itself is not at rest. It keeps receding. The self, which has the ultimate responsibility for a man’s actions, eludes the grasp of its own awareness. Looking downward in the hierarchy of behaviour, a man is only aware of the task in hand, an awareness that fades with each step down into the dimness of routine, the darkness of visceral processes, the various degrees of unawareness of the growing cabbage and the falling stone, and finally dissolves in the indeterminacy of the Janus-faced electron.

But in the upward direction the hierarchy is also open-ended and leads to the infinite regress of the self. Looking upward, or inwards, man has a feeling of wholeness, of a core to his personality from which his decisions emanate, and which in Penfield’s phrase "controls his thinking and directs the searchlight of his attention" (Penfield 1961). But the metaphor is deceptive. When a priest chides a penitent for indulging in sinful thoughts, they both assume that behind the agency which controls his actual sinful thinking there is another agency which decides what subject he should think about; and so on ad infinitum. The ultimate culprit, the self which directs the searchlight of my attention, can never be caught in its focal beam. The experiencing subject can never fully become the object of his experience; at best he can achieve successive approximations. If learning and knowing consist in making oneself a private model of the universe, it follows that the model can never include a complete model of itself, because it must always lag one step behind the process which it is supposed to represent. With each upward-shift of awareness toward the apex of the hierarchy -- the self as an integrated whole -- it recedes like a mirage. "Know thyself" is the most venerable and the most tantalizing command. Total awareness of the self, the identity of the knower and the known, though always in sight is never achieved. It could only be achieved by reaching the peak of the hierarchy which is always one step removed from the climber.

This is an old conundrum, but it seems to blossom into new life in the context of the open-ended hierarchy. Determinism fades away not only on the subatomic quantum level, but also in the upward direction, where on successively higher levels the constraints diminish, and the degrees of freedom increase, ad infinitum. At the same time the nightmarish concept of predictability and predestination is swallowed up in the infinite regress. Man is neither a plaything of the gods, nor a marionette suspended on his chromosomes. More soberly, similar conclusions are implied in Karl Popper’s (1950) proposition that no information-processing system can embody within itself an up-to-date representation of itself, including that representation. Somewhat similar arguments have been advanced by Michael Polanyi (1966) and Donald MacKay (1966).
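Popper's proposition, as Koestler uses it here, can be illustrated concretely: a system that stores a description of itself must take the snapshot before storing it, so the stored description can never include itself up to date. A minimal sketch, assumptions mine:

```python
# A hedged sketch of the self-model regress: the snapshot is taken
# before it is stored inside the state, so the stored copy always lags
# one step behind the state it is supposed to represent.

state = {"facts": ["the sky is blue"]}

def self_model(state):
    """Take a description of the state and store it inside the state."""
    snapshot = repr(state)            # description of the state as it was
    state["model_of_self"] = snapshot # storing it changes the state
    return state

state = self_model(state)
# The snapshot cannot mention itself: it predates its own storage.
print("model_of_self" in state["model_of_self"])  # -> False
```

Updating the snapshot to include itself only reopens the same gap one level up, which is the infinite regress the essay describes.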

Some philosophers dislike the concept of infinite regress because it reminds them of the little man inside the little man inside the little man. But we cannot get away from the infinite. What would mathematics, what would physics be, without the infinitesimal calculus? Self-consciousness has been compared to a mirror in which the individual contemplates his own activities. It would perhaps be more appropriate to compare it to a Hall of Mirrors where one mirror reflects one’s reflection in another mirror, and so on. Infinity stares us in the face, whether we look at the stars or search for our own identities. Reductionism has no use for it, but a true science of life must let infinity in and never lose sight of it.

REFERENCES

Cochran, Andrew. 1972. "Relationships Between Quantum Physics and Biology." Foundations of Physics 1:235-249.

Heisenberg, W. 1969. Der Teil und das Ganze. Munich: R. Piper.

Koestler, A. 1967. The Ghost in the Machine. New York: Macmillan.

Koestler, A. 1970. "Beyond Atomism and Holism -- The Concept of the Holon." In Koestler, A., and Smythies, J. R. (eds.), Beyond Reductionism. New York: Macmillan, pp. 192-216; London: Hutchinson.

MacKay, D. M. 1966. "Conscious Control of Action." In Eccles, J. C. (ed.), Brain and Conscious Experience. New York: Springer-Verlag, pp. 422-445.

Pattee, H. H. 1970. "The Problem of Biological Hierarchy." In Waddington, C. H. (ed.), Towards a Theoretical Biology, Vol. III, pp. 117-136. Edinburgh: University Press; Chicago: Aldine Publishing Co.

Penfield, W. 1961. "The Physiological Basis of the Mind." In Farber, S. M., and Wilson, R. H. L. (eds.), Man and Civilization: Control of the Mind. New York: McGraw-Hill, pp. 3-17.

Polanyi, M. 1966. The Tacit Dimension. New York: Doubleday.

Popper, K. R. 1950. "Indeterminism in Quantum Physics and in Classical Physics." British Journal for the Philosophy of Science 1:117-133, 173-195.

Skinner, B. F. 1953. Science and Human Behavior. New York: Macmillan.

Thorpe, W. H. 1966. "Ethology and Consciousness." In Eccles, J. C. (ed.), Brain and Conscious Experience. New York: Springer-Verlag, pp. 470-505.

Watson, J. B. 1925. Behaviourism. London: Kegan Paul & Co.; New York: People’s Institute Publishing Co.

Weiss, P. 1969. "The Living System -- Determinism Stratified." In Koestler, A., and Smythies, J. R. (eds.), Beyond Reductionism. New York: Macmillan, pp. 3-42; London: Hutchinson.

 

RESPONSE TO KOESTLER’S PAPER

By Charles Hartshorne

Charles Hartshorne has taught philosophy primarily at the University of Chicago, Emory University, and the University of Texas; at the latter he is now Emeritus Ashbel Smith Professor of Philosophy.



A puzzling question to me is the topic currently discussed under such captions as ‘systems analysis’ (Bertalanffy, Laszlo), or ‘holograms’ (Bohm), or as ‘holons’ by Koestler. The mathematics of the discussion is, alas, beyond me. Informally I have some difficulty in grasping the distinction between:

1. Any member of a set of interacting individuals (each a sequence or society of momentary actualities) is influenced by all others;

and

2. Any member is influenced by the whole formed by the entire set.

Of course, an existing whole is no mere ‘sum of the parts,’ since each part or member interacts, has definite relations of influencing and being influenced by the others. Each member has its own perspective on the others. This is built into the psychicalist idea of ‘prehension,’ the relational aspect common to memory and perception, the aspect Leibniz failed to appreciate. But what is added by taking the totality of individual agents as itself an agent?

There is one very important qualification to the above. Given a system or composite of individuals, all on a comparable level, for instance molecular or cellular, then in some cases, given integration of activities (thus molecules in a cell, or cells in a vertebrate animal), there may be what Leibniz called a ‘dominant monad’ or what Whitehead calls a ‘society of presiding occasions.’ The sequence of experience in a waking vertebrate animal is the paradigm case. In one sense such a presiding individual is only another of the interacting agents, but in another sense it stands for the totality. It does this because its dominance consists in an exceptional adequacy of prehending and being prehended. Thus a cell in a nervous system interacts directly and strongly only with a few neighboring cells, whereas a human experience directly prehends and is prehended by a multitude of cells. It sums up to some extent the bodily totality. But it is not literally the totality. Also, in dreamless sleep there are no presiding occasions above the cellular level; the cells are on their own. And this seems to be true at all times of the constituents of the blood, such as the phagocytes. It is also probably true of the cells of a tree. Each cell, in such cases, responds to neighboring cells and to lower level singulars -- molecules, atoms, particles.

RESPONSE TO KOESTLER’S PAPER

By Bernhard Rensch

Bernhard Rensch is at the Zoologisches Institut of the Westfaelische Wilhelms-Universitaet in Muenster, West Germany.



I fully agree with Dr. Koestler’s description of mental functions as a hierarchical order of different levels between ‘mechanical’ and mindful processes. But in my opinion it is possible to assume that even voluntary thinking can be regarded as a determined process. This means that we do not have ‘free will.’ When, for instance, a chess player has a choice between different moves, he will not only be guided by the rules of the game, but will evaluate several possibilities. His decision is guided by the dominant idea of choosing the most advantageous move. All our voluntary thinking is guided by such dominant ideas when we want to solve a problem or plan an action. All voluntary thinking is motivated, and this means ultimately that it is determined. However, it is often difficult to analyze the course of our thinking afterwards, because we are dealing with an interaction of many different factors. Our thinking is guided by the abilities of our brain, by personal mental strengths or weaknesses, by the feeling tone of relevant associations, by memory traces which we have stored, by actual sensations, mainly by exciting ones, and by special moods. As this interplay is extremely complicated, we cannot claim certainty that our voluntary thinking and acting are not free, but we can say that this is most probable.

In simpler cases it is possible to judge the course of the brain functions. When I train a rat to discriminate a cross from a circle and to prefer the former (because only the cross is rewarded during the training period), then I can predict that the rat will choose the cross later on, although now both patterns are rewarded. Or when I know a child very well and know what he has learned before in arithmetic, then I can predict with a high degree of probability the course of his considerations and their result when I offer him a new arithmetical task which does not exceed the realm of his knowledge. The voluntary thinking processes of adult men are simply too complicated to be analyzed sufficiently. And we can understand why Max Planck once said that even Kant’s deepest thoughts and Beethoven’s best musical productions were ultimately necessitated. However, later on Planck made some reservations, because he became aware of the moral and juridical difficulties that arise when we deny free will (cf. B. Rensch, Hippokrates 24:1019-1032, 1963; B. Rensch, Biophilosophy, ch. 7A, New York: Columbia University Press, 1971).

Chapter 3: Temporal Order and Spatial Order: Their Differences and Relations by Milic Capek

Milic Capek is Professor of Philosophy at Boston University.

If we look at the subject catalogue of nearly any university library, we find not only that books dealing with space and time are listed under the same heading, but also that in many of them both concepts are treated jointly, as their very titles indicate: Space and Time; Space, Time and Matter; Space, Time and Motion; and even Space, Time and Deity. This is certainly not accidental; nor is it accidental that, with a few exceptions, the word ‘space’ regularly appears in the titles before ‘time.’ This is due to the fact that the properties of space and time are, or at least appear to us, quite similar and, furthermore, that spatial relations seem to us somehow more fundamental, more solid, and easier to grasp than the elusive temporal relations. Hence our instinctive tendency to believe that the relations of succession can be adequately symbolized by geometrical relations.

The purpose of this paper is to trace the sources of this belief, its remarkable persistence through centuries of philosophical and theological speculation, and its disastrous influence not only on philosophy, but also on the interpretation of some recent and contemporary physical theories. Finally, I want to show that an attentive analysis both of our introspective experience and of the revolutionary discoveries of twentieth-century physics indicates not only that temporal relations are basic and irreducible to spatial relations, but also that spatial relations are of a derivative kind, being mere approximations or idealizations of those strata of experience in which the temporal aspect is less prominent and can be disregarded for practical purposes. Time, or more accurately, becoming, seems to be more fundamental in the light of available evidence, while spatial relations are mere instantaneous cross-sections in what Whitehead called the ‘creative advance of nature’ and Bergson, before him, ‘true duration.’ Such instantaneous cuts have their usefulness and justification in our macroscopic and macrochronic perspective, but their pragmatic usefulness should not be confused with objective ontological status.

There are two approaches to the origin of the concept of time: on one side, Jean Piaget’s recent investigations of the formation of the notion of time in children; on the other, the analysis of the development of the concept of time in adult humanity, more specifically, in the philosophical and scientific community. It is remarkable how complementary and convergent are the results obtained in two such apparently disparate areas -- child psychology and the history of ideas. Piaget showed how young children (4-6 or even 7 years old) have considerable difficulty in differentiating temporal from spatial relations. Hence succession is confused with spatial order; the duration of concomitant motions is judged erroneously from the distance covered by the moving bodies, without taking into account the differences in speed; and the relation of simultaneity is not disentangled from spatial coincidence. Only gradually, after considerable groping, and at a later age (7-8 years) do children succeed in differentiating the temporal order from the spatial order, i.e., the temporal ‘before-after’ relation from the spatial ‘before-after.’ Similarly, the duration of motion is dissociated from the length of its trajectory, and coincidence in time (i.e., simultaneity) is no longer confused with coincidence in space. This is, of course, an extremely concise and simplified account. Piaget discovered that the formation of the notion of time proceeds by three successive stages, with the boundaries slightly different in different children; it is only in the third stage that the notion of duration is entirely freed from what Bergson called ‘the fallacy of spatialization,’ in the sense that it is understood how several different motions with different speeds and covering different distances can occur within one and the same interval of time. 
Thus the distinction is finally drawn not only between time and the spatial trajectory of motion, but also between time and motion itself (Piaget 1937).

It would be absurd to claim that there is a complete analogy between the formation of the notion of time in the child and the development of the concept of time in humanity. The situation is far more complex; yet, some interesting similarities do exist and they are hardly accidental. The early Pythagoreans identified time with the celestial sphere, that is, with the circular trajectories of the daily motion; later, time was identified with the rotating motion of the celestial sphere. The reference to the celestial sphere and its motion had far-reaching effects on the subsequent development of the concept of time: it focused the attention of philosophers on the regular periodicity of the celestial motions by which time can be measured and thus it deepened the distinction between the qualitative content of time and its metrical aspects; the correlation of time with spatial motion became the source of the relational theory of time, according to which "time is nothing by itself," as Lucretius wrote (De rerum natura, 1,495f.), and cannot be separated from concrete changes occurring in it; finally, the alleged inseparability of time from spatial displacements created the tendency to exaggerate the analogy between space and time and, eventually, to spatialize time entirely and thus virtually to eliminate it.

This extreme tendency is conspicuous in the Eleatic school. Zeno’s four arguments against the reality of motion were based on the assimilation of time to a geometrical line. According to Zeno, temporal intervals are adequately symbolized by spatial segments: they both are divisible ad infinitum, and to the point-like extremities of linear trajectories correspond the durationless extremities of temporal intervals -- instants. From this the impossibility of building motion from the motionless positions, and durations from the durationless instants, followed naturally. Hence, in Zeno’s mind his teacher, Parmenides, ridiculed by his opponents for his denial of time and motion, was vindicated and avenged.

Eleatism was the metaphysics of timeless Being in its most radical form and, although it has never reappeared in its extreme form, it exerted a lasting influence on the subsequent development of Western thought. In truth, such systems as those of Spinoza and Bradley came very close to Eleatism; but instead of eliminating time, change and motion entirely, they half-heartedly retained them, treating them as mere ‘appearances,’ as having some sort of existence, or rather semi-existence, of an inferior and less dignified kind than that of immutable Being. In truth, it can be argued that even Parmenides treated change and motion at least as ‘appearances,’ as belonging at least to the illusory realm of phenomena and not as mere non-entities; otherwise the division of his poem into two parts -- "The Way of Truth" and "The Way of Opinion" -- would lose its meaning. But this is a historical question; what is important in this context is the fact that the subsequent development of Greek, medieval and modern philosophy was largely dominated by the contrast between the timeless realm of Being and the temporal realm of change; in this sense, it was a continuation of the dialogue between Parmenides and Heraclitus, with Parmenides having the upper hand. In most philosophical and theological systems Being was endowed with the more dignified status of true reality, of which the temporal realm is merely a pale, shadowy replica. From Plato, who defined time as a moving (i.e., imperfect) image of eternity, down to St. Thomas, who stressed the perfect immutability of his Supreme Being in terms indistinguishable from the language of the Eleatic school, we can trace the same persistent theme -- a metaphysical dichotomy of Being and Becoming, of perfection and imperfection, of the timeless and the temporal realms. 
To this dichotomy corresponds the epistemological dichotomy of two kinds of knowledge -- the true knowledge of the all-embracing timeless truth which only God possesses, and man’s imperfect knowledge confined to the temporal realm. The incompleteness of human knowledge is due to the temporal incompleteness of the realm to which it is confined. It is easy to see how the timelessness of God and of his knowledge led to theological determinism, to the predestinationism of St. Augustine, St. Thomas and Calvin; for the abolition of time and becoming on the divine level entirely eliminates the ambiguity of the future, which is uncertain only to our imperfect, time-bound insight, but which is in its completeness timelessly present in the mind of God.

This trend continued in modern philosophy and to some extent in modern science, in spite of the Copernican and Cartesian revolutions. Spinoza merely secularized the God of Aquinas and Calvin by equating him with the impersonal, but equally static and equally timeless, order of nature (Deus sive natura); and Laplace’s ‘omniscient mind’ is nothing but a secularized and depersonalized version of the God of the Scholastics and of the Protestant Reformation. This accounts for a "secret alliance of theological and naturalistic determinism" of which Professor Hartshorne once wrote (Hartshorne 1932, p. 429). I like to quote this because I regard it as one of the most insightful and also most neglected remarks concerning the history of ideas. There is no doubt that the preferential treatment of the concept of Being, with the concomitant tendency to debase the status of time, becoming and change, continued until the last decades of the last century, when the first process philosophers -- Renouvier, Boutroux and Bergson in France, and William James in the United States -- tried to reverse the centuries-old trend. (I am not mentioning Hegel, since his status as a process thinker is rather ambiguous, as is shown by the two divergent interpretations of his philosophy -- that of Croce and J. N. Findlay, in contrast to the static interpretation of J. E. McTaggart.)

I would like to conclude this digression into the history of ideas with the following words which Friedrich Nietzsche wrote in 1888:

You ask me which of the philosophers’ traits are really idiosyncrasies? For example, their lack of historical sense, their hatred of the very idea of becoming, their Egypticism. They think that they show their respect for a subject when they de-historicize it, -- sub specie aeterni -- when they turn it into a mummy. All that philosophers have handled for thousands of years have been concept-mummies; nothing real escaped their grasp alive. When these honorable idolators of concepts worship something, they kill it and stuff it; they threaten the life of everything they worship. Death, change, old age, as well as procreation and growth, are to their mind objections -- even refutations. Whatever has being, does not become; whatever becomes does not have being. Now they all believe, desperately even, in what has being. . . . They place that which comes at the end -- unfortunately! for it ought not to come at all! -- namely, the ‘highest concepts’ which means the most general, the emptiest concepts, the last smoke of evaporating reality, in the beginning, as the beginning. This again is nothing but their way of showing reverence: the higher may not grow out of the lower, may not have grown at all. . . .

But Heraclitus will remain eternally right with his assertion that being is an empty fiction. The apparent world is the only one: the ‘true’ world is merely added by a lie (Nietzsche 1954, pp. 479-482).

This is a temperamental revolt against the perennial metaphysics of Being, presaging the birth of process philosophy. (Curiously enough, Twilight of the Idols, from which this passage is taken, was published only one month before Bergson wrote a preface to his first book.) But it is ironical to see how much Nietzsche, despite his furious attack on the metaphysics of Being, remained committed to it. He -- like some ancient Greek philosophers, perhaps even Heraclitus -- still accepted the idea of eternal recurrence, that is, in the words of Mircea Eliade, "ontology uncontaminated by time and becoming." For, in the cyclical theory of time, what will be already has been, and what has been will be; the distinction between the past and the future -- and with it time itself -- is abolished. Another instance of the spatialization of time.

The evolution of modern science in this respect was more ambiguous. On one side it exhibits a tendency similar to that which we observed in the history of modern philosophy. This is only natural, since modern science and modern philosophy have not developed independently, but have perpetually interacted and influenced each other; in truth, until the post-Kantian period they were so intertwined that it is difficult to speak of separate histories. This explains why certain ideas were shared by both scientists and philosophers. I have already mentioned Spinoza and Laplace, who were both equally intransigent in their insistence on rigorous determinism and equally explicit in their reduction of time to a mere illusory appearance. (What is less known is that Immanuel Kant in his Critique of Practical Reason expressed Laplace’s idea of timeless or becomingless determinism twenty-five years before Laplace wrote his famous passage in his Essai philosophique sur les probabilités.) I have already mentioned how the medieval theological determinism transformed itself into modern naturalistic determinism, which dominated modern science and a large part of modern philosophy. Emile Meyerson showed convincingly in a number of his books how the elimination of time was a theme common to both science and philosophy in the last three centuries. There is no question that the symbolization of time by a geometrical line (the t-axis), playing the role of an independent variable on which various physical quantities depend, greatly contributed to this view. 
For it is too easy to forget that even in this geometrical symbolism time is not represented by a static ready-made line, but by an incomplete line which is being continuously extended ‘into the future,’ as its terminal point, representing the present moment, is continuously moving; it is difficult to resist the notion that the future positions on the t-axis somehow pre-exist prior to their occupation by the moving present, and that all events -- present, past and future -- coexist on the ‘fourth dimension’ which is as complete and as static as the other three spatial dimensions. This view of time as the fourth dimension is usually associated with the theory of relativity, but it can be traced far into the pre-relativistic period: it was Descartes who called time a ‘dimension,’ d’Alembert who called it ‘the fourth dimension,’ and Lagrange who called mechanics a ‘geometry of four dimensions.’ In truth, the roots of this becomingless view go as far back as Zeno of Elea, who was probably the first to treat time and motion as a static geometrical line: his view that the allegedly flying arrow is motionless in all points of its trajectory has an obvious affinity with the strange view of some of our contemporaries according to which successive moments exist ‘tenselessly’ on the fourth dimension called ‘time.’

But even if we do not forget that in the geometrical diagram ‘the t-line’ is perpetually being extended by the continual forward motion of its terminal point, representing the present moment, the risk of serious confusions is hardly lessened. For, to confuse time with motion is only slightly less misleading than to confuse it with the trajectory of motion. In other words, in speaking of time, the kinematic metaphors are hardly better than the static and geometrical ones. How many confusions were and still are caused by the assimilation of time to motion! The metaphorical expressions ‘direction of time’ and ‘time arrow’ are very fashionable and are justified to a certain extent. They express in kinematic terms the basic asymmetry of time, the distinction between past and future. The image of time as a ‘flow’ was already used by early Greek thinkers -- let us recall Heraclitus -- and was retained even by the sober Newton in his Principia. But as soon as this kinematic metaphor is taken literally, serious difficulties arise. We may ask, for instance: "Whence and whither does time flow?" The conventional answer, "From the past to the future," seems to satisfy most people. Time thus becomes a metaphysical river whose source is in the infinitely distant past and its estuary in the infinitely distant future. More abstractly, but not in an essentially different way, time is described as a motion along a straight line; the moving point stands for the present instant, the path already covered corresponds to the past, and the points not yet occupied correspond to future events. But as every motion in space is relative and can be transformed into a rest by an appropriate change of the frame of reference, it is permissible to regard the present moment as stationary and future events as moving toward the past with an equal and opposite velocity, to wit, opposite with respect to the velocity of the present moment in the first picture. 
Instead of the present’s advancing toward the future, future events retreat toward the motionless present, pass through it, and then sink into the deeper and deeper past. Now which of these descriptions is correct? Does time flow forwards, from the past to the future, or backwards, from the future to the past?

The only possible answer is that both descriptions are equally inadequate; both are metaphorical attempts to translate into spatial and kinematic terms the elusive and essentially incomplete nature of time. As soon as we try to illustrate the nature of time by comparing it to motion, the principle of kinematic relativity of motion will sooner or later sneak into our illustrations and diagrams; hence the two apparently contradictory answers concerning the alleged direction of time. If, however, we forget that the word ‘direction’ is borrowed from geometry and kinematics and therefore can be applied to time only in a metaphorical sense, we may thoughtlessly draw all the consequences from the alleged analogy between the ‘movement of time’ and the movement of bodies. Thus we may think that, as the direction of motion in space may change, the time direction may change too; that, as a material particle may reverse its motion and pass again through the positions previously occupied, ‘the moving present’ can also return to the past; or that, as the motions in space may be circular, the course of time may be circular too. Both the theory of reversible time and the theory of eternal recurrence are based on such false kinematic analogies. As I tried to show in my first book (Capek 1969) as well as in some of my articles, such theories, when closely analyzed, cannot even be stated in a self-consistent language, since they use alternately and surreptitiously two incompatible temporal descriptions. Such antagonistic and contradictory descriptions are merely clumsy translations into geometrical and kinematic terms of the basic irreversibility of becoming, which cannot be meaningfully separated from the very nature of time.

The criticism of the confusion of time with motion represented another trend in the development of physics, one clearly incompatible with the trend just described. This is why I said that the development of physics, unlike that of philosophy, was far more ambiguous. One of the first instances of such criticism is to be found in Aristotle’s Physics. Although Aristotle insisted on the inseparability of time and motion, he was careful not to identify them. He did not fail to observe that motions with different velocities and covering different distances can take place within one and the same interval of time, so that none of them could be identified with time itself. It is true that there is, according to Aristotle, one privileged motion -- the rotation of the sphere of the fixed stars -- which measures accurately the objective flow of time; in its insistence on the close correlation of time with the universal celestial clock, Aristotle’s theory of time remained relational. But there are certainly shades of Newton in it. What prevented him from reaching Newton’s conclusion about the independence of time from its concrete physical content was, besides the Eleatic influences, his belief in the universal cosmic clock which was, so to speak, a physical embodiment of the oneness and uniformity of time. When the astronomical revolution of the sixteenth century -- in which the Italian philosophers of the Renaissance played a far more important role than historians of science admit -- removed the universal cosmic clock, two alternative ways were open to physics and the philosophy of nature: either to retain the relational theory of time and to hold with Bruno (Bruno 1879, p. 144) that "there are as many times as there are stars" (tot tempora quot astra), since there is no body possessing a privileged rotational motion, and the only body which allegedly had it -- the sphere of the fixed stars -- has been swept away; or to save the unity and homogeneity of time by separating it from any particular motion -- and this is what Newton did, anticipated in this respect by Isaac Barrow and, in particular, by Gassendi. Thus the absolute theory of time -- and with it classical physics itself -- was born.

In the eyes of a philosopher, time acquired in classical physics a strangely ambiguous character. The names of both Newton and Laplace belong to classical physics. Yet these two giants of the classical era had entirely different views of time. According to Newton, time is ultimately real, even on the divine level; this is why he regarded it as sensorium dei. In it everything exists, even space; for space endures in time, and what we call ‘enduring space’ is really nothing but a continuous succession of instantaneous spaces. According to Laplace, time is merely the fourth dimension of space, as static as the other three dimensions; its apparent incompleteness is illusory, being merely, as Bergson observed, "the infirmity of a mind that cannot know everything at once" (Bergson 1944, p. 45). Who then was right, Newton or Laplace?

Today it is tempting to say that they were both wrong, since they both belong to the classical era which is now definitely over. But such a bare negative statement will certainly not satisfy a philosopher interested in the ultimate status of time and becoming. When the relativity theory came into existence, there was at first a widespread tendency to regard it as another confirmation of the Laplacean view of time as the additional fourth dimension. Long was the list of those -- not only popularizers, but philosophers as well, and even some physicists -- who interpreted Minkowski’s four-dimensional continuum not as a four-dimensional process, essentially incomplete, but as a sort of four-dimensional hyperspace, whose fourth dimension exists in its completeness as much as the three spatial dimensions. Although this misinterpretation had been effectively criticized, not only by philosophers such as Bergson, Meyerson, Whitehead and Reichenbach, but also by a number of physicists -- among them Einstein himself, Langevin, Eddington, etc. -- it was recently revived by Costa de Beauregard, Adolf Gruenbaum, and J. J. Smart, and apparently accepted by W. Quine. I criticized this revived misinterpretation in a number of previous articles (Capek 1951, 1955, 1965, 1966) and in both my books (Capek 1969, esp. Ch. XI, XVII, and Append. I, II; 1971, pp. 226-256). Hence I prefer to restate the main arguments against the static misinterpretation in a concise form only. They are as follows:

1. To a historian of ideas, it is clear that this misinterpretation is another instance of the perennial tendency to spatialize time which can be traced to the very dawn of Western thought. This in itself would not be a decisive argument against it, but it acquires its significance in conjunction with other arguments, in particular with the unsolvable epistemological difficulty to which such elimination of time leads.

2. The most plausible argument for the static interpretation is the relativization of simultaneity and succession. But this relativization is far from being unqualified and, when attentively analyzed, it leads to an elimination of instantaneous space without weakening in any sense the ontological status of becoming.

3. Finally, the static interpretation does not eliminate becoming; it merely relegates it into the subjective, ‘phenomenal’ realm. But in this way it creates an intolerable dualism of two completely heterogeneous realms without any attempt to relate them in some intelligible way.

Let me comment on these three points.

Re 1. This point has already been covered at sufficient length, and there is no need to return to it.

Re 2. This point requires rather lengthy discussion. The relativization of simultaneity and succession has been and still is regarded as an argument against the objectivity of time and/or becoming. If there is no objective ‘Now’ unambiguously separating the past from the future -- in other words, if what is simultaneous for one frame of reference is not so for another -- is it still meaningful to uphold the objectivity of succession and the reality of becoming? This is how the argument is usually formulated, and in this form it was accepted by Quine when he wrote that the principle of relativity "leaves no reasonable alternative to treating time as space-like." But despite its superficial plausibility, the argument collapses under closer scrutiny.

In the first place, it is simply not true that simultaneity and succession are unqualifiedly relativized. What is relative is only the simultaneity of events occurring at different places; the simultaneity of isotopic events (i.e., those occurring at the same place) remains a simultaneity for any conceivable observer. It is true that, strictly speaking, there are no such events: as long as they remain different they cannot be exactly at the same point; and as long as they are rigorously isotopic, they merge into a single event. Thus the thesis is reduced to the apparent triviality that each event is simultaneous with itself. But it ceases to be trivial if we add that each event is simultaneous only with itself. That this is not trivial is obvious from the fact that it was denied, for instance, by Kurt Goedel (1949, pp. 560-561), who accepted the possibility of self-intersecting world-lines -- that is, the possibility of a Wellsian trip to the past and back to the present. Thus we would have an event which, besides being simultaneous with itself, would also be simultaneous with its future descendant. Even if we disregard the difficulty of stating this view in a self-consistent language, its incompatibility with relativity theory is obvious. For in the relativistic universe no body can move with a velocity greater than that of light; in the language of the relativistic space-time diagram, it can never enter the forbidden zone of ‘Elsewhere’ outside of its own causal future; a fortiori it cannot cross the Elsewhere region and re-enter its own past. Here we see the relevance of Eddington’s insightful remark that, in the relativistic world, the past is separated from the future even more effectively than in the universe of Newton (Eddington 1925, p. 178). 
(In the Newtonian universe the past is separated from the future by an ‘infinitely thin’ layer of instantaneous space which contains all events objectively simultaneous with Here-Now; in the universe of Einstein the separation is effected by the four-dimensional region of Elsewhere.) Goedel’s hypothesis also shows the disastrous effect of false geometrical analogies; apparently the fact that there are some curves (e.g., the lemniscate or the folium of Descartes) which intersect themselves in so-called singular points is for some mathematical minds a sufficient reason to believe that ‘the curve of time’ can behave in a similar way. This is the fallacy of geometrization or spatialization at its worst; it is certainly revealing that Goedel is proudly aware of the philosophical tradition to which he belongs -- that of Parmenides and McTaggart, whose names he mentions.

Equally important -- probably even more important -- is the limited extent to which succession is relativized. As early as 1911, Paul Langevin pointed out that only the succession of causally unrelated events can be changed into a simultaneity or even be reversed by an appropriate change of the frame of reference; the succession of causally related events remains qualitatively (though not metrically) invariant for any conceivable observer. Here is what, in my view, is philosophically the most significant result of relativity: that while there are no juxtapositions of events which would remain juxtapositions in all frames of reference, there are certain types of succession which preserve their character of succession for any possible observer. This certainly cannot be described as any weakening of the objective status of becoming; but it does away with the classical notion of instantaneous space defined as a three-dimensional layer of objectively (absolutely) simultaneous events.
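Langevin’s distinction can be stated compactly in the standard language of special relativity (a textbook formulation, supplied here for concreteness; the notation is the usual one, not the author’s own):

```latex
s^2 = c^2\,\Delta t^2 - \Delta x^2
\qquad
\begin{cases}
s^2 > 0 & \text{timelike: causally connectible; the sign of } \Delta t \text{ is the same in every frame,}\\[2pt]
s^2 = 0 & \text{lightlike: the boundary of the causal future and past,}\\[2pt]
s^2 < 0 & \text{spacelike (`Elsewhere'): } \Delta t \text{ can be made positive, zero, or negative.}
\end{cases}
```

Since $s^2$ is invariant under every Lorentz transformation, no change of frame can carry a pair of events from one case into another: the succession of causally related events is absolute, while only causally unrelated events can be re-ordered or made simultaneous.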

This means that it is far more accurate to characterize Minkowski’s fusion of space with time as a temporalization or dynamization of space than as a spatialization of time. Minkowski’s universe is a network of irreversible causal lines (‘world-lines’), but without any instantaneous, i.e., purely spatial, connections. This is the deeper meaning of the relativization of simultaneity, which would more accurately be termed the elimination of simultaneity; once we realize that the terms ‘class of simultaneous events’ and ‘instantaneous space’ are entirely synonymous, we shall also see that a negation of one implies a negation of the other. The four-dimensional becoming contains the regions of causal independence (the so-called regions of ‘Elsewhere’ which could equally well be called ‘Elsewhen’); it contains topologically invariant successions, but it does not contain any instantaneous transversal cuts, any purely spatial distances. As Whitehead observed, since the advent of relativity, "spatial relations must stretch through time" (Whitehead 1925, p. 6); in other words, there are only spatio-temporal, but no merely spatial distances. If we furthermore consider the space-time of the general theory of relativity, with its locally variable and changing curvature, and, in particular, if we consider the expanding space of modern cosmogony, we will agree that the term ‘dynamization of space’ is not inappropriate and that we can even speak of an incorporation of space into becoming. In any case, nothing is left of the traditional space, since its fundamental constituting feature -- the relation of coexistence (juxtaposition, Kantian auseinander or nebeneinander) -- has no physical basis. Briefly: if we take relativity seriously, we must deny the reality of space in the sense of simultaneous juxtaposition of points.

Two objections will come immediately to mind. First, is it not true that nothing is left of the Newtonian concept of time as well? It cannot be denied that the classical concept of time is also profoundly modified. The time of relativity is neither a uniformly flowing metaphysical river, nor is it an immaterial invisible clock in God’s mind, as Newton imagined. Yet, as Whitehead (1920, p. 17) observed as early as 1920, the variety of metrically discordant temporal series is entirely compatible with one single underlying ‘creative advance of nature,’ or, in less poetic words, with universal becoming. Thus one basic feature of Newtonian time remains intact: its incompleteness. This, too, can be proved rigorously and without any appeal to metaphors by drawing another consequence from the same formula of Minkowski’s. It can be shown that no event which is still unreal for me, i.e., which is still in my causal future, can be included in the causal past of any possible observer. (By ‘possible observer’ I mean any observer located either in my causal past or anywhere in my Elsewhere region.) Since the inclusion of events in the causal past of some observer is a necessary condition of their observability, this means that future events are intrinsically unobservable, and it is thus entirely superfluous to postulate their existence. It is far more natural to regard them as genuinely future, i.e., as actually not yet existing. The physical emptiness of the future is another inevitable consequence of special relativity.

The second objection is the following: is it not true that the character of spatiality is in some sense preserved in Minkowski’s time-space? This is certainly true provided that we carefully distinguish between spatiality in a broader sense and the classical notion of static instantaneous space. The four-dimensional world process consists of an enormous number of world-lines which occasionally interact, but which mostly run independently of each other. In this sense, the world process has a certain transversal width, since it contains contemporary parts. But ‘contemporary’ is not co-instantaneous! This is more than a mere terminological difference. As said before, the four-dimensional becoming does not yield to any instantaneous cuts. In other words, there is no simultaneity of instants. But, as Bergson observed, the relativistic universe does not exclude the simultaneity of intervals, i.e., contemporaneity of temporal series, even if they are metrically discordant. Minkowski’s time-space is, to use another of Bergson’s terms, ‘extensive becoming’ (Bergson 1944, p. 265; 1965, pp. 52-53), which cannot be properly decomposed into one-dimensional time and three-dimensional instantaneous space. Such decomposition was possible in the classical space-time and this precisely distinguishes it from the relativistic time-space.

Commenting on both the relativity and quantum theories, Whitehead (1925, p. 54) observed in 1925 that "nature is nothing at an instant." If this is difficult for our visual imagination, Whitehead, again following Bergson in this respect, tried to remove the difficulty by using an auditory model: "In an analogous way, a note of music is nothing at an instant, but it also requires its whole period in which to manifest itself." The fact that a melody does not exist at an instant does not make it unreal; on the contrary, it shows the fictitious character of durationless instants. Similarly, time-space does not cease to be real because it does not yield to any instantaneous, purely spatial, cuts. But this impossibility of instantaneous cuts is entirely compatible with the transversal width of becoming, i.e., with the co-presence, or rather co-becoming, of the world-lines. In an analogous way, there are different melodies co-present within a polyphonic pattern. In this respect an auditory model is free of the limitations of visual models and geometrical diagrams.

Re. 3. The epistemological difficulty of the static interpretation of space-time can be dealt with briefly. As stated above, becoming and succession cannot be simply denied, but only confined to the subjective realm, to be made, in Gruenbaum’s words, ‘mind-dependent.’ This is the meaning of his statement that "coming into being is only coming into present awareness" (Gruenbaum 1963, p. 329; italics his). This can mean -- if it means anything at all -- only one thing: that what I experience as a new present moment existed prior to and independently of my deceptive temporal experience timelessly -- or, as it is fashionable now to say, tenselessly -- in the becomingless physical world. There are thus two realms: the realm of physical reality devoid of becoming, in which all events that appear to us in succession are ‘tenselessly’ juxtaposed, and our private ‘stream of consciousness,’ where there is a genuine succession and becoming. This is a dualism far more extreme than that of Descartes, for in Cartesian dualism mind and matter, in spite of their heterogeneity, at least shared the general temporal character. But for those who deny the reality of physical becoming the difficulty is doubled, since their two realms do not even share this temporal character! No attempt is even made to relate such altogether heterogeneous realms; no explanation is offered of why and how the becomingless reality manifests itself in a gradually unfolding temporal form in the subjective realm. The difficulty is only increased by the physicalistic leanings of J. J. C. Smart, Quine and Gruenbaum himself, for if the subjective realm itself is a part of the becomingless physical reality, we should not have even the illusion of becoming, since the successive character of our stream of consciousness would be impossible!
Briefly, the static interpretation of physical reality is nothing but a relapse into a strange Eleatic myth with all its oddities and contradictions, not only completely divorced from our immediate experience but also incompatible with contemporary physical science properly interpreted.

We are now returning, after a long detour, to the results of Piaget’s investigations. It appears that the fallacy of spatialization of time is committed by young children and -- quite a number of philosophers! This conclusion may sound offensive and disrespectful; but it is far less offensive than it sounds. Not all the features of a child’s mind are objectionable. Did not Aristotle say, for instance, that a proper trait of a philosopher’s mind is the capacity for wonder -- which children certainly have in greater measure than the average adult? In one of his other studies Piaget (1957) discovered that children’s minds in their earliest stage, when not yet shaped by the influence of the macroscopic environment, have no notion of a permanent object and are thus closer to the microphysical world of the impermanent ‘particles’ than the average adult mind. But the case of the spatializing fallacy is amusingly different. Children gradually realize that their confusion of time with spatial trajectory contradicts their own experience -- sensory and, especially, introspective -- and they give up spatialization; philosophers, while they are fully aware of the same contradiction, retain spatialization and deny experience -- and are even proud of it!

The question why the fallacy of spatialization does not entirely disappear with childhood would require another paper. Let me only sketch the answer, which only a biological, evolutionary theory of knowledge can provide. The human mind and organism are adjusted to the realm of middle dimensions and middle velocities, in which Euclidean geometry, Newtonian mechanics and classical determinism hold to a very close approximation. Perceptible motions of our bodies and of surrounding objects are very slow with respect to the velocity of light and gravitation -- a velocity which for us is practically infinite. Thus the motions of macroscopic bodies appear against the background of a network of practically instantaneous interactions. By the further natural process of abstraction and idealization, the notion of space as a network of instantaneous connections and as the container of everything existing emerges. It is not accidental that for c = ∞ the transformation of Lorentz passes over into that of Galileo. Hence a persistent tendency to translate every experience into visual and spatial terms -- even at the price of distorting, or even eliminating, it.
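The limit invoked here can be written out explicitly (standard notation, added for illustration):

```latex
x' = \frac{x - vt}{\sqrt{1 - v^2/c^2}}, \qquad
t' = \frac{t - vx/c^2}{\sqrt{1 - v^2/c^2}}
\quad\xrightarrow{\;c \to \infty\;}\quad
x' = x - vt, \qquad t' = t.
```

In the Galilean limit time becomes absolute ($t' = t$): exactly the network of practically instantaneous connections within which our perception of the middle dimensions was formed.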

 

REFERENCES

Bergson, H. 1944. Creative Evolution, trans. by A. Mitchell. Random House.

Bergson, H. 1965. Duration and Simultaneity, trans. by L. Jacobson. Bobbs-Merrill.

Bruno, Giordano. 1879. Jordani Bruni Nolani opera latine conscripta publicis sumptibus edita. . . .Naples: D. Morano, 1879-1891.

Capek, M. 1961. "The Doctrine of Necessity Re-examined." Review of Metaphysics 5:11-54.

Capek, M. 1955. "Relativity and the Status of Space." Review of Metaphysics 9:169-199.

Capek, M. 1965. "The Myth of Frozen Passage: The Status of Becoming in the Physical World." Boston Studies in the Philosophy of Science. II, pp. 441-464. Humanities Press.

Capek, M. 1966. "Time in Relativity Theory: Arguments for a Philosophy of Becoming." In Fraser, J. T. (ed.), Voice of Time, pp. 434-454. Braziller.

Capek, M. 1969. The Philosophical Impact of Contemporary Physics. Princeton: Van Nostrand.

Capek, M. 1971. Bergson and Modern Physics. Boston Studies in the Philosophy of Science, VII. Dordrecht: Reidel.

Eddington, A. S. 1925. The Nature of the Physical World. Macmillan.

Goedel, Kurt. 1949. "A Remark about the Relationship between Relativity and Idealistic Philosophy." In Schilpp, P. A. (ed.), Albert Einstein: Philosopher-Scientist. Evanston: Library of Living Philosophers (V. 7).

Gruenbaum, Adolf. 1963. Philosophical Problems of Space and Time. Knopf.

Hartshorne, C. 1932. "Contingency and the New Era in Metaphysics." Journal of Philosophy 29:421-431 and 457-469.

Nietzsche, F. 1954. Twilight of the Idols. In The Portable Nietzsche, trans. by W. Kaufmann. Viking Press.

Piaget, Jean. 1946. Le developpement de la notion de temps chez l’enfant. Presses Universitaires de France.

Piaget, Jean. 1957. "The Child and Modern Physics." Scientific American 196:46-51.

Whitehead, A. N. 1920. The Concept of Nature. Cambridge University Press.

Whitehead, A. N. 1925. The Principles of Natural Knowledge, 2nd ed. Cambridge University Press.

Whitehead, A. N. 1925. Science and the Modern World. Macmillan.

Chapter 2: Three Counter Strategies to Reductionism in Science by Francis Zucker

Francis J. Zucker is affiliated with the Max-Planck-Institut in Starnberg, West Germany, and is now with the Center for the Philosophy and History of Science at Boston University.

Reductionism and its Counter-Strategies

In this essay I will briefly describe three research programs that are motivated by opposition to physical reductionism.

All scientific analysis seeks to isolate elements which, in suitable combination, account for the appearances and events in some domain of nature; and this enterprise, carried to completion, necessarily lands one with the basic entities of physics. Thus physical reductionism is rightly called the methodological ‘superparadigm’ of modern science, and it seems odd indeed to look for research programs in opposition to it.

To be sure, the physical reductionism I have in mind is ontological, not methodological: it is the position that the universe ‘consists of’ the basic entities disclosed by physics, which are, in their diverse arrangements, all there ‘really is’ in the world -- all else, including the subject himself and his perceptions, being reduced to the derivative status of ‘epiphenomena,’ which H. Jonas defines as the powerless byplays of physical happenings that follow their own rules entirely (Jonas 1966, p. 88). But can methodological reductionism be so easily separated from this ontology? Do we not, as scientists, insist on passing beyond method to existence? If we acknowledge scientific truth to be the most certain we possess, then surely its objective correlatives can be no mere ‘thought economies’ (Mach) or ‘tools for manipulating nature’ (instrumentalism), but they must in some sense be ‘real.’ Once the physical constituents are granted an independent existence, though, they appear to usurp the whole: gathered together in systems of arbitrary complexity, but all conforming to their inherent laws of combination, they now become coextensive with the universe they so fully explain -- including, therefore, the reflecting subject, which is capable of juxtaposing itself to the rest of the world, and of enunciating the theory of the very entities that constitute both. Though this feat demands a greater faith in the miraculous than we may wish to muster -- and prompts some of us to dismiss such an ontology out of hand -- it yet appears no trivial matter to disentangle the two phases of reductionism and thus avoid throwing out the baby with the bathwater.

The task would perhaps be simpler were it not bedeviled by the heritage of Cartesian dualism, which I think still serves most of us in the West as the point of departure in our ontological deliberations. When in the seventeenth century a reality split into an ‘extended’ and a ‘thinking’ substance (res extensa and res cogitans) replaced the hierarchically structured, divinely governed universe of Antiquity and the Middle Ages, it did provide modern science with an ontological frame more congenial to its rise. But this frame, which had to encompass so radically fragmented a universe, struck some minds almost immediately, and ever since, as being unequal to its task. Ontological unity was thereupon sought by opting for a monist solution: for idealism (the monism of the res cogitans), or materialism (that of the res extensa), or for the hoped-for middle ground of phenomenalism. Materialism, which pictured the basic entities in mechanical terms (with billiard ball-like atoms), represents physical reductionism in its classic form.1 Today’s physics, having turned progressively more abstract and rejecting thing-like models for its elementary particles, may puzzle the old-style materialist in its apparent break with the res extensa, but does not in itself challenge his monist claim.

If we reject the idealist and phenomenalist monisms for the reason cited (the reality claim of scientific analysis), we can only hope for an escape from ontological reductionism by outflanking dualism itself. Both Whitehead and his contemporary, Husserl, held that dualism could indeed be overcome philosophically, by seeking the fundamental unity in a domain of reality that lies anterior, in some sense, to the sharp subject-object split, justified though that split be in certain contexts. ‘Process’ is Whitehead’s label for that domain, the ‘pre-predicative’ and the ‘life world’ are Husserl’s. In their attempts at clarifying the two-faced reductionism issue, there seem to me to be important differences between the two philosophers,2 but these need not concern us here so long as we have license to believe that disentanglement is philosophically possible at all, that anti-reductionism can mean opposition to the misplacement of ontological unity, and not to the scientific enterprise as such.3

Philosophical consideration may thus help clear the air, but I believe that only in conjunction with the development of science itself can it lead to the construction of an ontology of nature, i.e., of a ‘body-social of scientific knowledge’ that is neither a hangover from the old hierarchical order nor captive to a monist pseudo-unity. If you have no clear vision of the perfect society, Marcuse says, register your protest against what hurts most in the old; the new will supposedly emerge in the sequel. While this attempt at emancipation through negation may not lead far with respect to the body-social, I will try it here in describing the three research programs in terms of the ‘No’ each of them says to one of the basic strands of the reductionism syndrome: to the dualism that spawned it, to the ‘nothing-but’ of its monism, and to the fragmenting sort of mathematical conceptualization it one-sidedly encourages.

The first strategy tries to explode the ontological claim of physical reductionism from within, so to speak. It takes a leaf from the tremendous changes within physics between the nineteenth and the twentieth century, and decides to continue riding the tiger. ‘Think physics to the end’ is C. F. von Weizsaecker’s way of putting it: just as physics itself, as the development of quantum mechanics shows, overcame the fragmentation of physical reality into building stone-like basic entities, so it might also, as it approaches its final, ‘unitary’ form (von Weizsaecker’s term for the still unknown theory that subsumes the currently known theories plus the long searched-for elementary theory in one system), overcome the fragmentation of the whole of reality into distinct domains of the physical and the mental. Indeed, von Weizsaecker’s researches suggest that the axioms of unitary physics are precisely the formal expression of the preconditions of (conceptualizable) experience; that the subject as knower, in other words, is encountered at the very base of physics, as Kant had (in a slightly different manner) already surmised. Thus physics itself, for whose sake Descartes had so radically separated the ‘extended substance’ from total reality, may be able to subvert the ontological reductionism which was an offspring of that separation.

To explain the second strategy, let us recall the display of the reductionist order of the sciences along the ‘Comtean ladder,’ which arranges the forms of life and the corresponding specialized disciplines in an ascending series of rungs starting with physics at the base, followed by chemistry, biology, psychology, sociology and, depending on one’s inclination, history and religion. The ‘simples’ of each rung, i.e., the basic conceptions of that discipline, are ‘nothing but’ a complicated structure composed of the simples of the rung beneath it. If human behavior, for example, is conceived as made up of a network of pre-programmed responses (psychology), these can be reduced, say, to conditioned reflexes (biology), which are in turn nothing but complicated physical chemistry. Anti-reductionism, in the common understanding, here argues for the irreducibility, in some sense, of the higher forms of reality to the lower -- of biology to physics, or of religion to sociology -- and this the second strategy attempts to do. To this end, it accents the relative autonomy of each rung by formulating the simples appropriate to it, i.e., by conceptually ‘assimilating’ (to use Bohm’s term [Bohm, 1974, p. 58]) each type of fact into its own sort of order. I therefore term this strategy, ‘Cultivate the simples appropriate to each order,’ and cite Waddington’s ‘epigenetic landscape’ along with his ‘chreods’ as an outstanding example of its fruits. This strategy tries to meet Whitehead’s demand for a natural philosophy that regards "the red glow of the sunset. . . as much part of nature as the molecules and electric waves by which men of science would explain the phenomenon" (Whitehead, 1970, p. 29). That the ontologically unprejudiced exhibition of the domains of ‘sunset colors’ and of electric waves, each in its own terms, does indeed allow us to account for the "coherence of things" (ibid.)
without succumbing to the nothing-but of the waves, is one of the points I shall try to make below. The problem is to discover in what sense this coherence, which of course includes the methodologically reductionist relation, is nevertheless richer than it. I suggest that language is sufficiently powerful to express this enrichment, and that it is a form of knowledge which is being thus expressed, albeit not of scientific knowledge in the sense of Weizsaecker ("testable predictions on precisely formulated alternatives"); from this point of view, the first and second strategies appear to be complementary. It must be admitted, however, that the second is weaker than the first: opposition to physicalist monism dictates its employment, but it cannot by itself overthrow it. For the reductionist can always argue that, while it may be good heuristics to develop concepts peculiarly fitted to each of the rungs (if they are not already available in ordinary language), these will be necessarily anthropomorphic and their sole function in the scientific enterprise is to invite reduction.

The third strategy is rather speculative. Like the second, it seeks the simples in structuring any domain of knowledge, but tries to give this search a sharper edge by applying a lesson learned from physics. So far, simples have been taken as concepts that seem elementary in an intuitive way: a leaf, for example, is an intuitively obvious constituent in plant morphology. As one learns one’s way about in any field of inquiry, one gradually becomes aware of the ‘chreods’ that lie close to the level of immediate perception through the senses or the intellect. But perhaps our ordinary perceptive powers are not sufficiently acute to discover the deeper-lying chreods. Indeed we know from physics that, as we dig deeper, the basic notions become very abstract, i.e., non-intuitive. The first strategy is satisfied with pointing out that, in the end, at least the axioms of unitary physics become transparent. It is possible, however, that progress on the higher rungs, for example in biology, depends on an enrichment of our perceptive faculties, and since it is structures we wish to recognize in science, this means an enrichment of our mathematical imagination. This is precisely what David Bohm has been after in his work with ‘implicate’ (or ‘enfolded’) orders, which so far he has discussed only in relation to physics, but clearly means to apply in other fields as well, such as perception theory and cognitive psychology (Bohm 1973 and 1974). Bohm believes that if we learn to think in ‘non-local’ orders, i.e., in orders quite other than those based on the Cartesian res extensa, we will be able to understand intuitively as well the aspect of wholeness in quantum mechanics, which in a formal sense we already know to be there. Implicate orders express "the intimate interconnection of different systems that are not in spatial contact" (Bohm and Hiley 1975), and will be needed wherever spatial (or temporal) wholeness is to be given a mathematical form.
The example presented below under the title, ‘Is there a mathematics for wholes?’, is an attempt to apply a primitive instance of implicate order to plant morphology. The germ of this idea was already known to Whitehead, and to the mathematician F. Klein, both of whom speculated about its use in classical mechanics (Whitehead 1898). In adapting it to biology, George Adams (d.1963) introduced the notion of ‘formative forces,’ which correspond to the ‘formative causes’ mentioned by Bohm (Bohm 1973, p. 22). The true testing ground for the implicate-order strategy, it seems to me, may indeed be biology rather than physics, where abstract methods are so powerful as to perhaps make it dispensable: just as the old style building-block materialist was refuted not by philosophical polemic, but by the one authority in which he trusted, i.e., by physics itself, so the nothing-but reductionist in contemporary biology will modify his views should it be possible some day to provide him with a mathematical language that fills the currently existing gap between our formal knowledge of gene structure and combinations, and our intuitive apprehension of growth and shape. This language, both Adams and Bohm agree, will have to be that of an implicate topological or projective order, not the explicate metrical order of classical physics. What the precise relationship between the novel morphology and traditional biochemistry might be is a question I have not been able to resolve; Bohm’s and Adams’ expectations diverge on this point.

Think Physics to the End

Why should one expect physics to develop toward a unitary theory, and what could be the meaning of such a theory?

Physics develops in a sequence of ‘closed’ theories, to use Heisenberg’s term for a mathematical structure with an associated physical semantics that cannot be improved upon by means of ‘small’ changes (as, for example, Newton’s law of gravitation cannot be improved by modifying the number two in the exponent of the distance term). When a ‘deeper’ closed theory is found (as, in the case of gravitation, general relativity), the older theory is not simply discredited, but its predictions are upheld within certain parameter ranges specified by the newer theory, which adds correct predictions of its own outside those ranges. Classical mechanics united terrestrial and celestial kinematics in a unified dynamics. During the past century, electromagnetic theory united electrostatics, magnetostatics, and network theory with optics in one stroke; special relativity combined classical mechanics with electromagnetic theory; general relativity combined the theory of gravitation with physical geometry and special relativity; and quantum mechanics united much of physics with, at least in principle, all of chemistry. It is at least as puzzling to think of an infinite progression of ever more general theories in physics as it is to postulate a final, unitary theory. (What is still missing is chiefly a theory of elementary particles, which Weizsaecker believes to be implicit in quantum mechanics itself, to become explicit once the correct symmetry group is applied.)
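A standard example (not from the text) shows how a deeper closed theory upholds the predictions of the older one within a parameter range: the relativistic kinetic energy, expanded for velocities small compared with $c$,

```latex
E_{\mathrm{kin}} = mc^2\left(\frac{1}{\sqrt{1 - v^2/c^2}} - 1\right)
= \frac{1}{2}mv^2\left(1 + \frac{3}{4}\frac{v^2}{c^2} + \cdots\right),
```

so that for $v \ll c$ the Newtonian value $\frac{1}{2}mv^2$ is recovered, while outside that range the deeper theory adds correct predictions of its own.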

Weizsaecker conceives the unitary theory, which encompasses all others as special cases, to be the formal expression of the preconditions of experience (Weizsaecker 1971a and 1971b). This thesis was inspired by Kant, but goes well beyond Kant: only the regulatives of science, e.g., the principle of causality, were in Kant’s opinion a priori; the special laws would have to be formulated on the basis of special experience. If unitary physics spans all special laws, however, then all physics dispenses with special experience and depends solely on its preconditions. In principle, then, unitary physics ought to be deducible from a sufficiently detailed analysis of terms such as time, logic, observation, number. So long as this task appears too formidable, we must try to construct the theory by working from both ends: by axiomatizing the existing most general theory, i.e., quantum mechanics, in a manner that invites interpretation in terms of plausible preconditions, and by logically analyzing the preconditions that occur to one upon reflection so as to reconstruct the theory. In trying to close the gap, one finds additional preconditions of which one had previously been unaware, and, looking in the other direction, one tries out new axioms, for example concerning symmetry conditions suggested by the logical analyses. The most cursory analysis of experience shows it to presuppose the structure of time: we learn from facts of the past to predict future events, and we test these predictions when they are no longer future but present or past. Weizsaecker tries to develop a logic of temporal propositions that incorporates these features. The logic would be a probability theory based on a time-dependent axiomatics, which somehow must include the axiom of indeterminism (or its equivalent, the ‘superposition’ axiom).

Granted this much, Weizsaecker can reconstruct the Hilbert space structure of quantum mechanics for the case of the most elementary objects conceivable. An object of this sort is not a smallest something in physical space; it is, rather, the smallest unit of information, i.e., a single yes-no decision, termed an ‘ur,’ or a ‘simple alternative.’ Its Hilbert space is a two-dimensional complex vector space, and a simple mathematical development shows that complex bodies constituted by urs admit of a natural description in a three-dimensional real space.
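The claim that complex bodies constituted by urs "admit of a natural description in a three-dimensional real space" rests, at bottom, on a standard mathematical fact: normalized vectors in a two-dimensional complex space correspond to directions in ordinary three-dimensional space. The sketch below illustrates that correspondence only, via the Pauli matrices, the usual bridge between the two spaces; it is not Weizsaecker’s own derivation, and the function name is mine:

```python
import numpy as np

# Pauli matrices: the standard bridge between C^2 and R^3.
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def direction_of_ur(psi):
    """Map a state of a simple alternative (a vector in C^2)
    to a unit vector in R^3 via the Pauli expectation values."""
    psi = np.asarray(psi, dtype=complex)
    psi = psi / np.linalg.norm(psi)
    return np.real([psi.conj() @ S @ psi for S in (SX, SY, SZ)])

# The two answers of the yes-no alternative map to opposite poles,
# and an equal superposition of them lands on the equator:
print(direction_of_ur([1, 0]))  # the 'yes' pole: (0, 0, 1)
print(direction_of_ur([0, 1]))  # the 'no' pole: (0, 0, -1)
print(direction_of_ur([1, 1]))  # equal superposition: (1, 0, 0)
```

Every normalized vector in C^2 lands on the unit sphere of R^3 under this map, which is why aggregates of urs can carry spatial, and not merely logical, structure.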

Thus Weizsaecker explains the structure of space from quantum mechanics, and quantum mechanics from the structure of time. Although the details of his project are far from being carried out, the basic scheme stands: the unity of physics, and therefore (since knowledge and the known are not to be separated) the unity of physical nature, reflect the unity of time. ‘Physical’ nature, to Weizsaecker, means nature objectified -- any part of nature, including, for example, a living cell, or even the human mind which, insofar as it is analyzable in terms of yes-no questions, is fully subject to the laws of unitary physics. Therein lies Weizsaecker’s ‘reductionism.’ It does not reduce mind to palpable 19th-century matter, nor even to a something situated in physical space; and it leaves open the possibility of other, non-objectivating modes of encountering mind, in which another ‘Thou’ is met. If matter is what obeys the laws of physics, then it is an aspect of all that exists in the universe, and rather than juxtaposing it to life or mind, it is but a mode of experiencing these. A reductionism that claims no more than this is a reductionism with all the poison drawn from it.

Cultivate the Simples Appropriate to Each Order

Ordinary language provides us with elementary concepts on all levels of nature. Deliberate scientific work begins with a search for elementary terms -- the ‘primitives’ or ‘simples’ -- even more peculiarly fitted to the analysis of the subject at hand, together with a search for the patterns in which to view their interrelations. It has been a principle of good craftsmanship with all empiricists from Occam to Bridgman to tailor their terms and theories as closely to the phenomenally given as possible. Occam’s razor and Bridgman’s admonition to make the least down payment on future conceptualizations (Bridgman 1959, p. 10) are born of the same spirit of faithfulness unto the phenomena. (Mach, too, was of this persuasion.) Tackling color theory in this spirit, as I will now show, one immediately obtains the simples appropriate (in a clearly specifiable sense) to that field; these simples are not the electromagnetic frequencies high school physics tells us colors ‘really’ are. Examples of this sort, taken from well-established sciences, may serve as training ground for biology and the higher-rung domains in which the shaping of the appropriate simples is still an open problem. Here I will confine myself to the ‘lowest’ level of that discipline (optics) where it is closest to physics, and where the nature of its relative autonomy can therefore best be studied.

Let us recall, to begin with, the ‘color circle’ on which we usually represent the universe of possible hues and saturations. (A third elementary determinant of color, brightness, need not concern us in the present context; its representation would require a third dimension.) As we move around the circle clock-fashion starting, say, at red, we pass through the successively neighboring hues: orange, yellow, green, blue, violet, purple, and back to red. At the rim of the circle, the hues are at their most saturated (darkest); as we move inward along a spoke, the hue remains constant but becomes progressively less saturated, until we reach white at the center. Actually, this qualitative (mathematically speaking: topological) representation of the color world is two steps removed from physics, not one. The representation I want is obtained by noting that any color can be matched by superposing three standard (but arbitrarily chosen) ‘primary lights,’ and choosing as coordinates the relative contribution of each of these primaries. As a result, one now finds oneself in a so-called projective, rather than topological, plane, in which the color ‘circle’ is actually a noncircular closed curve whose exact shape need not concern us here, with the White center somewhere inside. The point to hold onto is that the mathematics on this particular rung of color science (which used to be referred to quite universally as that of ‘psycho-physics’) is neither quantitative (metric), as in physics, nor purely qualitative (topological), as in the description that satisfies our common-sense curiosity about the structure of the color world, but in between, namely, projective. I will now discuss the appositeness of two types of color primitives with respect to this rung.
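The projective character of this plane can be illustrated with a toy calculation. In the sketch below (the primary contributions are arbitrary illustrative numbers), a color is specified by the contributions of three primaries, and only their ratios survive normalization: rescaling the whole triple changes intensity but leaves the projective point, the chromaticity, unchanged.

```python
import numpy as np

def chromaticity(R, G, B):
    """Homogeneous (projective) coordinates of a color: only the
    ratios of the three primary contributions matter."""
    s = R + G + B
    return np.array([R / s, G / s, B / s])

# Doubling every primary changes the intensity of the light but
# not its position in the projective color plane.
c1 = chromaticity(2.0, 1.0, 0.5)
c2 = chromaticity(4.0, 2.0, 1.0)
print(c1, c2)   # identical coordinates, each triple summing to 1
```

This is precisely the sense in which the color plane is projective rather than metric: a point is an equivalence class of triples under scaling.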

All of us are familiar with Newton’s decomposition of sunlight (white) by means of a prism: he allowed a ray to enter through a small hole, and displayed a sequence of highly saturated colors from red through green to violet on a screen beyond the prism. It is a characteristic of these ‘spectral colors’ that they cannot be further resolved into constituent hues by passage through another prism; that they stand in a one-to-one correspondence with a particular angle of refraction (i.e., with a spatial property); and that, in the context of physical optics, each turns out to be quantifiable in terms of a spatial periodicity (the wavelength). It is important to note, however, that only the spectral colors correspond one-to-one to a wavelength; every other color corresponds to a set of possible spectral-color distributions, the so-called spectral ‘metamers.’ In other words, the Newtonian spectral wavelengths designate one-to-one every point on the rim of the color circle (or projective closed curve) between red and violet, but do not designate in a unique fashion the purple segment of the rim, nor any of the points inside the circle.
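The existence of metamers follows from simple linear algebra: three receptor-like responses are three linear functionals on a high-dimensional space of spectra, so any two spectra differing by a nullspace vector of the response map are indistinguishable. A minimal numerical sketch (the Gaussian sensitivity curves are invented for illustration, not measured data):

```python
import numpy as np

wl = np.linspace(400, 700, 61)            # wavelengths in nm
def channel(center, width=40.0):
    """Idealized (hypothetical) receptor sensitivity curve."""
    return np.exp(-((wl - center) / width) ** 2)

S = np.stack([channel(c) for c in (450.0, 540.0, 610.0)])   # 3 x 61

# Any vector in the nullspace of S can be added to a spectrum
# without changing the three matched responses.
_, _, Vt = np.linalg.svd(S)
null = Vt[-1]                              # one nullspace direction

spec1 = np.ones_like(wl)                   # flat ('white') spectrum
spec2 = spec1 + 0.5 * null / np.abs(null).max()   # a metamer of it

print(S @ spec1 - S @ spec2)               # differences are numerically ~0
```

Two physically different spectral distributions, one color: the three responses cannot distinguish them, which is why only the spectral colors themselves correspond one-to-one to a wavelength.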

Almost two hundred years ago, the poet Goethe experimented with prismatic colors and hit upon an entirely different set of color simples (Goethe 1791). He held up a prism against a white wall, expecting to see the Newtonian spectrum displayed, and noticed instead that the wall remained white except where it was crossed by dark cracks, which the prism resolved into hues much brighter than the Newtonian ones. Goethe concluded that the minimum condition for the genesis of prismatic colors was a simple black-white border, and the colors he then saw, termed ‘edge colors’ today, he took as the true primitives for a theory of color, while he considered the Newtonian spectral colors to be compound, i.e., derivative. It is a simple matter today to prove that, by combining these primitives pairwise in two possible ways, every color point in the projective (or topological) plane is covered in strict one-to-one correspondence. The two combinations are ‘parallel’ and ‘series’: i.e., one edge-color filter is inserted in front of one light source, the other in front of a second, and the two beams are superposed on a screen (parallel combination), or the two edge-color filters are placed one behind the other in front of a single source (series combination). The Newtonian spectral colors are obtained through the series combination of complementary pairs of edge colors (pairs that add up to White when combined in parallel), and thus are in fact compound in terms of these simples, as Goethe had claimed. Conversely, the edge colors can be resolved into Newtonian spectra; there is no reason, other than ontological prejudice, for calling one primitive ‘more fundamental’ than the other.
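The two combinations can be mimicked with idealized filters. In the sketch below, an edge color is modeled as a soft step in transmission (a hypothetical sigmoid profile, not a measured edge spectrum), and its complement as one minus that step. Superposing the two beams (parallel) restores a flat white spectrum, while stacking the filters (series) passes only a narrow band at the edge wavelength, i.e., something like a Newtonian spectral color:

```python
import numpy as np

wl = np.linspace(400, 700, 301)     # wavelengths in nm

def edge(l0, width=15.0):
    """Soft-edged 'edge color' transmission rising at wavelength l0
    (a hypothetical sigmoid stand-in for a real edge spectrum)."""
    return 1.0 / (1.0 + np.exp(-(wl - l0) / width))

t1 = edge(550.0)      # one edge color
t2 = 1.0 - t1         # its complement

parallel = t1 + t2    # superposed beams: flat white spectrum
series = t1 * t2      # stacked filters: band peaked at the edge

print(parallel.min(), parallel.max())   # both ~1.0: white
print(wl[series.argmax()])              # ~550 nm: a spectral hue
```

With these toy profiles, the complementary pair is white in parallel and a narrow spectral band in series, exactly the compounding of Newtonian colors out of edge colors described above.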

The Newtonian simples are quantifiable and specify color uniquely insofar as it links up with physics; the edge colors are projectively defined and specify the colors whenever we talk of the color universe as a whole -- as of course we do in any theory of color, not only on the psychophysical level, but on the next higher (topological) level as well. It turns out that there are still higher rungs in color theory (which Goethe tried to structure with his notion of the ‘Urphaenomen,’ the ‘archetypal phenomenon’), and that on each of these the edge colors retain their basic role.

How do the levels of color physics (wavelengths) and psycho-physics (projective color plane) cohere? The reductionist answer is that the electromagnetic input spectrum, processed by the neurophysiology of the cones in the retina, generates the color ‘signal space’ represented by the color plane. Can our second strategy add any further dimension to this coherence?

In most textbooks on color science, the following relation between the wavelengths λ and λ_c corresponding to complementary hues is mentioned, with the added remark that it seems to be ‘purely empirical,’ i.e., an accidental relation and not part of any theory:

(1) (a - λ)(λ_c - b) = c,

where a, b, c are numerical constants. I would like to suggest, however, that there is a way of viewing this relation such that it becomes perfectly meaningful, provided only we look in the ‘counter-reductionist’ direction, i.e., we try to understand the lower rung in terms of the higher, instead of explaining the higher in terms of the lower. The richer coherence we seek depends on that understanding.

Complementary hues, it can be shown, lie on a straight line through the White point in the projective plane. Their designation in terms of wavelengths, as in (1), is a foreign body in the color plane, since a projective (or topological) universe knows nothing of a metric; the wavelengths are simply imported from the level of physics, i.e., from interference measurements made in physical space. To parametrize hues in a manner intrinsic to the color plane (i.e., properly ‘assimilated’ to that order), we must proceed differently. Since the hues appear angularly distributed about the White point, we introduce a projective (non-metric) angular measure, which is given by

(2) (d - h)(h_c - e) = f,

where d, e, f are arbitrary constants, and h, h_c are the hue-parameter values of a complementary pair. It is clear that, by setting the arbitrary constants in (2) equal to the particular values a, b, c in (1), the hue scale will be in the units of wavelength. In other words, the class of permissible metrics for hue, specified by the projective measure (2), includes the metric of physical space as a special case. It is crucial to realize that there is nothing matter-of-course about this state of affairs. What we are being told here is that the metric of physical space is a specialization of a projective measure descriptive not of phenomena in physical space (such as the projective measure relating the electromagnetic tensors in vacuum, which of course must include the metric of physical space as a special case), but of phenomena in a space of sense experience. In other (psycho-physical) spaces of sense experience (I have specifically examined only those of acoustic pitch and of optical brightness), the relation to the physical metric seems similarly to be that of a higher geometry (albeit affine, rather than projective) to one of its metric models. We are therefore able to understand that some physical quantities are born out of the more qualitative mathematical structures of sense experience as a straightforward quantification thereof. This manner of coherence between two adjacent rungs is not explicable in terms of the signal-transformation scheme mentioned before; rather, an evolutionary principle of neurophysiological realizability seems to be involved that has yet to be clarified.
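Relation (1) can be checked mechanically once constants are chosen. The values of a, b, c below are hypothetical placeholders (the text does not supply the empirical constants); the sketch merely shows how the hyperbolic form pairs each hue with a unique complement:

```python
# Hypothetical constants for the complementary-hue relation
# (a - l)(l_c - b) = c; NOT the true empirical values.
a, b, c = 560.0, 495.0, 800.0

def complement(l):
    """Solve (a - l)(l_c - b) = c for the complementary wavelength l_c."""
    return b + c / (a - l)

l = 575.0                # a yellowish hue, say
lc = complement(l)       # its blue complement under these constants
assert abs((a - l) * (lc - b) - c) < 1e-9   # the pair satisfies (1)
print(l, lc)
```

Setting d, e, f in (2) to these same values reproduces exactly this pairing, which is the sense in which the wavelength scale is one special metric model of the projective hue measure.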

Is There a Mathematics for Wholes?

We normally think of a plane as made up of the totality of its points, but the converse is equally possible: a point can be thought of as the totality of planes passing through it, and in fact is thus conceived in the field of mathematics mentioned in the preceding section, viz., projective geometry. Figures and theorems of point-like and of plane-like character appear always in parallel in that geometry and with equal justification, however odd the latter at first may seem to our intuition, which is unused to structures whose parts are larger (in ordinary space) than the whole. We can even specialize projective geometry in a manner parallel to the specialization that gives us Euclidean geometry, to obtain what Whitehead termed ‘anti-space,’ and Adams ‘counter-space’ (Adams and Whicher 1960): a space characterized by an ‘absolute point center,’ corresponding to the ‘absolute plane at infinity’ that lifts Euclidean geometry out of the totality of geometries contained in projective space. What this geometry (and projective geometry in general) teaches us is that structure need not be internal; it can be external to the object as viewed in ordinary space. Thus we have the choice, in plane projective geometry, of constructing an ellipse (using a straightedge only) point by point, or else tangent by tangent; we can feel, as we work with the latter, how the periphery shapes the figure. (In three dimensions, it is tangent planes rather than lines; and in the differential geometric and topological generalizations, it is tangent surface elements rather than planes.) Whitehead was apparently the first to wonder why this plane-like geometry should not be applicable in nature, when its parallel, the point-like geometry, is so ubiquitous; he did begin noticing projective elements in the science of statics, and F. Klein’s student, E. Study, explored the ‘plane-wise’ representation of mechanical rotation, an idea further developed by G. Adams (in unpublished manuscripts). 
It is also clear that electromagnetic theory can be understood in these terms; in fact, the Maxwell equations are a purely projective theory in vacuum, the metric entering only through the material properties, such as the permeability and the dielectric constant.
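The tangent-wise construction of the ellipse is easy to exhibit numerically. In the sketch below (semi-axes A and B are arbitrary), each line of the one-parameter family touches the curve at exactly one point; the figure is shaped ‘from outside’ by its envelope of tangents rather than built up from interior points:

```python
import numpy as np

A, B = 3.0, 2.0    # semi-axes of the ellipse x^2/A^2 + y^2/B^2 = 1

def tangent(t):
    """Coefficients (p, q, r) of the line p*x + q*y = r tangent to
    the ellipse at parameter angle t."""
    return np.cos(t) / A, np.sin(t) / B, 1.0

# Plane-wise construction: every tangent touches the ellipse at one
# point, and the whole family envelopes the figure.
for t in np.linspace(0, 2 * np.pi, 12, endpoint=False):
    p, q, r = tangent(t)
    x, y = A * np.cos(t), B * np.sin(t)            # point of tangency
    assert abs(p * x + q * y - r) < 1e-12          # point lies on the line
    assert abs(x**2 / A**2 + y**2 / B**2 - 1) < 1e-12   # and on the ellipse
```

Dually, the same data can be read point-wise (the points of tangency) or plane-wise (the tangent lines); projective geometry treats the two readings with equal right.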

The exploration of the projective and counter-space points of view in physics does promise to throw new light on old relations, but seems unlikely, unless I am mistaken, to produce new results. Perhaps physics can serve as a training ground for the novel mathematical imagination here required -- and this would be a worthwhile effort, provided Adams is right in thinking that counter-space geometry will lead to new results in biology (Adams and Whicher 1960). His point is that morphological changes in living things, if we view them intuitively, do seem to proceed from, or in, the periphery -- think of the blastula or neurula in embryology, or of the unfolding of leaves in botany. Accordingly, he introduces force fields that act in the periphery rather than in a point as in physics, and conceives of every growing point (such as the tip of the shoot) as the absolute center of a counter-space. In this way Adams does seem to catch some growth processes in mathematical form, but it is not clear to me how the surface-like forces are supposed to act on the material they shape. Adams’ language here sounds vitalist; the forces are said to ‘cause’ morphological changes by interacting with matter at points where it is in a ‘chaotic state.’ Yet this is surely incorrect; at least on the microlevel matter is extremely highly organized. (Its correctness on the level of macro-shape throws no light on our problem.) I think Bohm (1973) is right in invoking the causa formalis in connection with implicate orders, ruling out, for the formative forces, an explanatory role of the causa efficiens type, which is reserved to physicalist explanation. Even in saying this, of course, we are merely posing a difficult problem, not solving it.4

If this approach is still too primitive, how can we further develop it in the direction of Bohm’s implicate orders? What the two have in common is that Bohm’s paradigm, the Hilbert space implicate order, is also a projective geometry, though one greatly enriched in comparison with the ordinary projective geometry we have so far considered. Two large steps separate the two. The first corresponds to the transition from ‘geometric optics’ (rays and phase fronts) to ‘physical optics’ (waves, interference). Whereas in the ordinary projective case, space and counter-space (explicate and implicate) are rigidly correlated, their association with a wave equation unfreezes them to some extent by introducing periodic or even nonperiodic motion. The form of this motion (explicate) is represented implicately by spectra, which are thus a further training ground on the road to the more basic implicate orders. Into every point of the intensity distribution across an aperture, for example, the entire spectrum is enfolded as a superposition of moving planar structures; and conversely, every spectral wave front sweeps over all points in space, thus contributing to the intensity everywhere. The second step then introduces operators to achieve the full Hilbert space structure. I cannot imagine that anything less will do for the ‘new morphology.’ Just as quantum mechanics is needed to fill the gap between the atomic structure of crystals and their beautiful shape in space (and fills it perfectly), so too it will be needed to link the biochemistry of genetic code and cellular processes to the growth and shape of living things -- provided, that is, that such a link can be found at all. Quantum mechanics thus plays a central role in the first as well as the third strategy, but for different reasons: in the first, the analysis of its meaning takes the sting out of reductionism; in the third, it becomes a guide in the development of our perceptual and conceptual appreciation of living forms.

I mention in conclusion the work of R. Thom (1970/71), which presents an enrichment of our morphological understanding perhaps more immediately applicable to the morphological sciences than quantum mechanics can be at the present moment; in fact, I suspect Thom’s method to be indicative of the bridge that needs to be constructed between the macro- and the micro-realm. The topological mathematics he employs lends itself to interpretation in terms of implicate orders, although Thom is not himself interested in making this explicit. It remains to be seen whether his approach (or the cohomology theory used by Bohm, or any other approach now known) is adequate to the mathematical innovation sought: to the direct description of growth, and of the metamorphosis of forms, in space and time.

 

NOTES

1) For an excellent discussion of ontological physical reductionism as a fission product of Cartesian dualism, see Jonas 1966, pp. 38-92.

2) Some points of similarity between Whitehead’s and Husserl’s philosophies are discussed by R. Wiehl in his thorough Introduction to Whitehead 1971.

3) This is also the position of D. Bohm.

4) The fact that George Adams was a disciple of Rudolf Steiner, who taught a ‘Western style’ of consciousness expansion called ‘Anthroposophy,’ and thought well of projective geometry, explains his constant references to the ‘cosmic’ and ‘spiritual’ significance of counter-space. Although I fear his hopes for the idea may be inflated, I think nevertheless that its suggestive wealth merits attention.

 

REFERENCES

Adams, G. and O. Whicher 1960. Die Pflanze im Raum und Gegenraum. Stuttgart: Verlag Freies Geistesleben. (An earlier, shorter version appeared in England in 1952, under the title The Plant between Sun and Earth.)

Bohm, D. 1973. "Fragmentation and Wholeness." Van Leer Jerusalem Foundation Series. Humanities Press.

Bohm, D. 1974. "Holography and a New Order in Physics." Technology and Society (Bath University Press, England) 8/2:58. A fuller version is given in Bohm, D., Foundations of Physics 1:359-381 (1971) and 3:139-168 (1973).

Bohm, D. and B. Hiley. 1975. "On the Intuitive Understanding of Non-Locality as Implied by Quantum Theory." Foundations of Physics 5:93-109.

Bridgman, P. W. 1959. The Way Things Are. Cambridge: Harvard University Press.

Goethe, J. W. 1791. "Beitraege zur Optik." In Kuhn, D., Matthaei, R., Schmid, G., Troll, W., and Wolf, K L. (eds.), Die Schriften zur Naturwissenschaft. Vol. 1 of Leopoldina (Halle) edition, Weimar, 1947. For an excellent discussion, see Goegelein, C., Zu Goethes Begriff von Wissenschaft. Munich: Carl Hanser Verlag, 1972.

Jonas, H. 1966. The Phenomenon of Life. New York: Harper & Row.

Thom, R. 1970/71. In Waddington, C. H. (ed.), Towards a Theoretical Biology. Vol. 3, pp. 89-116, and Vol. 4, pp. 68-82. Edinburgh University Press.

Weizsaecker, C. F. von. 1971a. In Bastin, T. (ed.), Quantum Theory and Beyond. Cambridge University Press, pp. 229-262.

Weizsaecker, C. F. von. 1971b. Die Einheit der Natur. Munich: Carl Hanser Verlag.

Whitehead, A. N. 1898. Universal Algebra. Cambridge University Press.

Whitehead, A. N. 1970. The Concept of Nature. Cambridge University Press.

Whitehead, A. N. 1971. Abenteuer der Ideen. With an Introduction by R. Wiehl. Frankfurt: Suhrkamp Verlag.

Chapter 1: The Implicate or Enfolded Order: A New Order for Physics by David Bohm

David Bohm is Professor of Theoretical Physics at Birkbeck College, University of London.

The quantum theory is, without doubt, the most revolutionary development in modern physics. Unfortunately, a large part of its potential impact on our overall world view has been lost sight of, because it is generally treated as being nothing more than a calculus, for which no general imaginative conception is thought to be possible. The main emphasis in working with this theory has therefore been on the development of a mathematical formalism that can predict the widest possible range of experimental results. In this talk I shall, however, describe in general terms how the quantum theory, understood somewhat more imaginatively than is usually done, can point to a new order in physics, which I call the enfolded order, or the implicate order.

I shall begin by sketching briefly a few salient historical features in the development of our modern notions of order in physics. Now, the ancient Greeks thought in terms of an essential order of aesthetic and moral perfection, which is least on the surface of the Earth and increases progressively toward the Heavens. And so, they were led to suppose that Heavenly bodies should express the perfection of their nature by moving in what they thought to be the most perfect of geometrical figures -- the circle. When observations failed to disclose such circular orbits, they retained their notions of essential order by supposing that the movements could be analyzed in terms of the Ptolemaic epicycles, i.e., circles on top of circles. In more modern times, as is well known, this view was overturned by the Copernican idea that the Sun is at the centre (and ultimately that there is no determinate centre at all). This idea led to the development of an entirely new notion of essential order, which was expressed in terms of a detailed description of the mechanical motions of bodies through space. This order was first given a precise mathematical form by Descartes, through his invention of co-ordinates. The co-ordinates are pictured with the aid of a grid (as shown in Fig. 1).

The orbit of a body is described by a curve, given algebraically by an equation determining a ‘coordination’ between two orders, that of the position, x, and that of the time, t.

Clearly, the Cartesian co-ordinates constitute a way of thinking of order that is radically different from that of the ancient Greeks. These co-ordinates have entered the whole of physics and are by now pervasively present in almost all that physicists do. In fact, it can safely be said that, while almost all the detailed content of physical thinking has changed fundamentally in the past few hundred years, the idea of co-ordinates is the one thing that has remained essentially constant. And this need not be felt to be surprising, if one takes into account that basic notions of order tend to be among the most strongly retained features of our thinking.

What I want to suggest here is that the quantum theory, understood imaginatively, gives a clear indication that we now need yet further new notions of order, as different perhaps from those of Descartes as these latter are from those prevailing in ancient Greece. I can give here only a brief description indicating certain essential features of my proposals concerning these new notions of order (which have been discussed in more detail elsewhere). In doing this, I shall use the hologram as an illustrative example, with the aid of which we can consider these notions not only mathematically, but also imaginatively.

Now, traditionally, the world was thought to be constituted of points. This view was given a great deal of support through the use of the lens, which provides in principle (as shown in Fig. 2) a point-to-point correspondence between object, O, and image, I. By creating such a correspondence, the lens brings the concept of point out in our minds in sharp relief, and encourages us to suppose that ultimately the whole of reality can be analyzed in terms of points which are to be regarded as, in some basic sense, separately existent.

The hologram, however, works in a very different way. To show this difference, we consider the diagram in Fig. 3. A laser beam (consisting of coherent light) is split by being passed through a half-silvered mirror. Part of the reflected beam strikes an object, so that waves from this object diffract and come back to overlap the original beam, producing a very complicated interference pattern that has no obvious relationship to the shape of the object and that is in general too fine even to be visible to the naked eye. This pattern is recorded on a photographic plate. When a section of this same plate is illuminated by a laser beam (as shown in Fig. 4), light waves come out which are similar to those coming from the original object. To an eye that is placed in these waves, it appears that the original object is seen three-dimensionally, as if through a window the size of a beam. What is important here, however, is not this three-dimensionality but rather that, in some sense, each part of the hologram contains the whole object.

This example indicates a new order not hitherto given serious attention in physics. Actually, of course, the photographic plate is merely a convenient way of recording this order. Primarily, however, the order is in the movement of the light whose intensity is recorded. What is characteristic of this order is that a whole is enfolded in the movement in each region of space. This movement may appear at first sight to be more or less random, but evidently it has a complex order within it. This we call the implicate order.
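A crude computational analogue of this enfolding can be given, taking the two-dimensional Fourier transform as a stand-in for the holographic record: every point of the object contributes to every coefficient of the record, and inverting only a windowed portion of the record still reconstructs the entire object, at reduced resolution.

```python
import numpy as np

# A simple 'object': a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# Stand-in for the holographic record: each image point is spread
# over the whole transform, and vice versa.
holo = np.fft.fft2(img)

# 'Illuminate a section of the plate': keep only a window of
# low-frequency coefficients and discard the rest.
kx = np.fft.fftfreq(64)[None, :]
ky = np.fft.fftfreq(64)[:, None]
window = (np.abs(kx) < 0.125) & (np.abs(ky) < 0.125)
recon = np.abs(np.fft.ifft2(holo * window))

# The whole square is still present, only blurred: a part of the
# record reconstructs the whole object at reduced resolution.
print(recon[32, 32], recon[0, 0])
```

The analogy is loose (a real hologram is an interference record in physical space, not a Fourier array), but it displays the decisive property: locality in one order is distribution over the whole in the other.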

Cartesian co-ordinates, then, express the unfolded or explicate order, in which the analysis of everything into separate points has been the general means of understanding the world. In physics, the explicate order has until now been considered to be fundamental for expressing the laws of nature. Thus, Newton’s laws of motion are a relationship determining an unfolded order of successive positions occupied by an object at a series of successive times. What I am proposing here, however, is that the quantum theory indicates the need to take the implicate order as fundamental. In other words, the essential order of movement is not that of an object translating itself from one place to another, but rather, it is a folding and unfolding, in which the object is continually being created again, in a form generally similar to what it was, though different in detail. The explicate order of movement of the object is thus not independent, substantial, and self-existent. We suggest instead that it is an appearance, abstracted from the implicate order, on which it depends and from which it derives its whole form and set of characteristic relationships.

The above notion can be brought out by considering an example, in which an insoluble ink droplet is placed inside a viscous fluid, such as glycerine. If the fluid is stirred slowly by a mechanical device (so that there is no diffusion) the droplet is eventually drawn into a fine thread that is distributed throughout the whole system in such a way that it is no longer even visible to the eye. If the mechanical device is then reversed, the thread will slowly gather together until it suddenly coalesces once again into a visible droplet.

Now, before this coalescence took place, the droplet could be said to be ‘folded into’ the viscous fluid, while afterwards it is unfolded again. So we have an example of a movement in which an explicate order is implicated and then explicated.

We may further develop this example by considering a case in which a droplet is first put into the fluid, after which the system is stirred n times. Another droplet is then placed in the fluid at a slightly different position and the system is once again stirred n times. If this operation is given an m-fold repetition, we end up with a distribution in which the last droplet has been stirred n times, the one before it 2n times, and the first one mn times. The whole distribution can then be unstirred continuously, and one droplet after another will be explicated. If the motion is so fast that individual droplets are not resolved in our perception, it will appear that a permanently existing localized object is continuously moving across the space occupied by the fluid. Actually, however, there is clearly no such object. What underlies this appearance is indeed an implicated order in the distribution of ink throughout the whole system.
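The m-fold stirring can be simulated with any exactly invertible map. In the sketch below, a ‘stir’ is a hypothetical integer shear (chosen so the arithmetic is exact, not because glycerine flows that way): droplet k is enfolded by (m - k)·n stirs, and unstirring explicates the droplets one by one, last in, first out.

```python
import numpy as np

stir = np.array([[1, 1], [0, 1]])      # one invertible 'stir' step (a shear)
unstir = np.array([[1, -1], [0, 1]])   # its exact inverse

n, m = 3, 4                            # stirs per insertion, number of droplets
drops = [np.array([k, 1]) for k in range(m)]   # insertion positions

# Enfold: droplet k is inserted, then the system is stirred n times
# before the next insertion, so droplet k receives (m - k) * n stirs.
state = [np.linalg.matrix_power(stir, (m - k) * n) @ d
         for k, d in enumerate(drops)]

# Unfold: reversing the stirring explicates the droplets in reverse
# order of insertion, each reappearing at its original position.
for step in range(1, m + 1):
    k = m - step                       # droplet explicated at this stage
    back = np.linalg.matrix_power(unstir, step * n) @ state[k]
    assert np.array_equal(back, drops[k])
```

Run quickly enough, the successive explications would look like a single moving droplet, though no persisting object exists in the enfolded state: only the invertible order of the whole transformation.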

The analogy with the quantum theory is fairly easy to see. The quantum theory indicates that at a deep level matter can be understood neither as constituted of localized particles nor as constituted of fields extended through space and undergoing wave motion. Rather, it seems to have some of the attributes of both. Indeed, this is the essential meaning of the uncertainty principle. It is not that particles exist, whose location cannot be known exactly. It is rather that there are no particles, in the sense that the order implicit in the particle model is simply not applicable at this level. But likewise, there are no fields, in the sense that the order implicit in the continuous field model is also not applicable. Some fundamentally new notion of order is therefore needed.

What I am suggesting here is that the notion of implicate order imaginatively captures the essence of this new situation in physics and that it may perhaps serve as a germ for further development of ideas in this domain. Thus, in the example of a series of ink droplets folded into a viscous fluid, we have a movement in which the results visible in certain regions (e.g., ink droplets) originate in and depend on the whole fluid in an inseparable way. The particle-like aspect is evidently implicit or enfolded in this whole. Likewise, this whole has enfolded within it a certain field-like aspect, as demonstrated by the fact that the order of appearance of the enfolded ink droplets may be radically altered by changing the general conditions throughout the whole fluid, e.g., by introducing structures of slits and obstacles within it. (The analogy with the quantum-mechanical inseparability of the results of observation from the overall experimental arrangement is evident here.)

Of course, in the above example one may, if one wishes, explain the implicate order as the result of a distribution of the particles of ink, which can ultimately be understood in terms of the ordinary explicate order of space and time. So, this example, like all analogies, can be used only in some limited sense, as a pointer. We have, therefore, now to return to the hologram, for which no such ultimate analysis in terms of a distribution of localized particles of matter is possible.

Now, with the hologram, light is taken as the particular movement that is involved in the folding and unfolding of a certain structure (e.g., an object). But more generally, it may be sound, electron beams, or any other movement which, like light, is able to carry a whole content in each region or part. The totality of all such possibilities, known and unknown, I shall call the holomovement.

The holomovement is to be understood as necessarily and essentially undivided. Since it has no divisions, there can be no explicit way to describe or specify it. It can be known only implicitly, through particular manifestations (such as light, sound, electrons, etc.). Such manifestations have a certain relative autonomy, i.e., self-rule, in their order of movement, and this permits them to be studied in themselves, at least up to a point. But ultimately this autonomy is limited, because the fundamental order is holonomy, i.e., the law of the whole. This law of the whole is, however, just such as to provide for the above described relative and limited autonomy of the partial aspects. This provision for such relative and limited autonomy is indeed a key requirement in any theory which takes the whole as primary, since without it there is no way to understand or even account for the fact that partial aspects can be found which may serve as points of departure in the development of knowledge.

What we are suggesting then is that all matter is to be understood as a relatively autonomous and constant set of forms built on and carried by the universal and indivisible flux of the holomovement. Such material forms have a certain subsistence, in the sense that under appropriate conditions they can continue with a certain limited possibility for stable existence. However, they are not to be regarded as substances, which would be completely stable, permanent and not dependent on something deeper for their continued existence. So the flux of the holomovement, with its implicate order, is the primary reality, while the explicate order of relatively constant material forms is secondary.

It is important to emphasize here that what counts in any theory that is developed along such lines is relative degree of implication. For example, if we take our own order of perceptual experience as explicate, then the electron’s order is implicate. But we might equally well take the electron’s order as explicate, in which case our own experiential order will be implicate. In other words, the laws of nature will be invariant, in the sense that their content will be the same, regardless of which order is taken as explicate. This brings out in another way how the explicate order is an abstraction from the implicate, having no independence or substantiality of existence.

This means, however, that ‘localization’ cannot be a fundamental notion. What is ‘local’ in one order is enfolded throughout the whole of space (and time) in another order. And as pointed out above, any one order is no more fundamental than any other. Space and time are thus an abstraction from the universal flux of process.

This abstraction has ultimately to be expressed precisely in some mathematical form that will give us a new description of implicate order, which is as systematic and coherent as that given in classical physics by the Cartesian co-ordinates.

With the aid of such a mathematical development we then have, of course, to understand the overall situation in physics in a way that is free of current contradictions and confusions (e.g., the infinities of quantum field theory). Work on this is now going on and some progress has been made which will be reported later.

It is also necessary, however, to go much deeper and to explore what the implicate order means with regard to our common-sense notions based on general experience, as well as with regard to our basic philosophical ideas. In short, we have to come to a new general world view, or metaphysics, in which the implicate order is primary, while the explicate order is secondary or derivative.

In developing such a view, we cannot stop with the attempt to understand matter alone through the implicate order. For we ourselves, along with electrons, protons, rocks, planets, galaxies, etc. are only relatively stable forms in the holomovement. It is necessary, moreover, to include not only our bodies, with their brains and nervous systems, but also our thoughts, feelings, urges, will and desire, which are inseparable from the functions of these brains and nervous systems. If the ultimate ground of all matter is in the implicate order, as contained in the holomovement, it thus seems inevitable that what has generally been called ‘mind’ must also have the same ultimate ground.

What we are proposing then is that what can be touched, seen, handled by scientific instruments, etc. is an explicate abstraction from the real implicate totality of the holomovement. Likewise, in physics, the modern quantum mechanical field theory regards ‘particles,’ along with all structures constituted out of them (i.e., material bodies), as small modifications of the ‘vacuum’ (which is, in effect, being treated as an unknown and only implicitly specifiable movement that is the ground of the whole of reality).

Similarly, when we look into the depths of the clear sky, what we actually see is an unspecifiable total ground of movement, from which objects emerge. Particular ideas or thoughts coming to the mind may similarly be perceived as being like particular objects that arise from an unspecifiable ground of deeper movement. What we call ‘mind’ may be this deeper ground of movement, but if we think of the particular thoughts as the basic reality, we miss this.

Such a way of looking at everything fits in rather well with our general experience. Thus, while any statement may give an explicit expression to our thoughts and feelings, the meaning or significance of this statement is in a vast and unspecifiable implicit background of response. Unless we share most of this implicit background, the explicit statement will communicate little or nothing. So one may propose that, also in the mind, the explicate order arises out of the implicate, and that the basic movement is one in which the content of each of these continually passes into the other.

What all this means is that the flux of the holomovement is the implicate source of all forms, both physical and mental. That is to say, the whole of existence, including inanimate matter, living organisms, and ‘mind,’ arises in a single ground, in which these are all enfolded, or contained implicitly. Inanimate matter is characterized by a relatively autonomous mechanical order of behaviour (i.e., a dominant tendency to recurrence, repetition, relatively fixed and stable patterns of movement, etc.). This order is inherited mainly from the past. On the other hand, what is essential to mind is the possibility of a fresh creative act of intelligent perception, which can assimilate knowledge from the past, but which is not dominated by this knowledge. Of course, inanimate matter has certain creative possibilities also, but these evolve relatively slowly. And while mind too can function mechanically and repetitively, this is not its essential quality. Mind, which is deeply creative and new in its essential mode of operation, cannot then be explained in terms of any mechanical abstraction of the properties of inanimate matter. Rather, it is being proposed here that its operation originates in implicate depths of the holomovement beyond those needed for understanding the ordinary mechanical qualities of matter.

It is clear that, in this view, living organisms are to be regarded as particular manifestations of what is ultimately enfolded in the inward depths of the holomovement. We are suggesting here that a living organism has a more direct contact with what is thus enfolded in the holomovement than does inanimate matter. When such an organism dies, this relatively direct contact ceases to operate, so that the body of the organism reverts to the more mechanical order of inanimate matter. So, in a certain sense, we could say that the energy of life more typically reveals the innermost order of the holomovement than does inanimate matter. For this reason, one can appropriately call the holomovement the life energy, which is the ground that ultimately creates and sustains all matter and all mind, as two relatively autonomous and independent streams that may move in parallel.

This view does not deny the importance of the mechanical abstraction of the structure of the living organism. But it denies that the abstraction of mechanism comprehends the ultimate ground of life, and indeed it denies also that such an abstraction comprehends the ultimate ground of inanimate matter. Nor are we saying (e.g., with Descartes) that mind and matter are to be considered as two independently existent substances. Rather, the universal life energy is what operates in the role that has generally been attributed to the one self-existent substance, which is the implicate ground of every form that comes to explicate manifestation.

The view outlined above is evidently close, in important respects, to that of Spinoza. The main difference is perhaps that the notion of implicate order may be more suitable for accomplishing what Spinoza intended, than was the logical geometrical form that he used. For example, the modes and aspects that he introduced to describe the activity of substance can be understood as relatively autonomous orders of movement. A key point introduced by the notion of implicate order is that it is not possible in general for all such modes to be explicate together. Rather (as indicated by the fact that different quantum mechanical ‘observables’ cannot generally be defined simultaneously), when one mode is explicate, others will have to be implicate.

One can in this general way regard the implicate order as a further development of what is already present in Spinoza, as well as in Heraclitus, Cusano, Leibniz, Whitehead and others, a development that is capable of making full contact with modern science, and yet opens up a way to assimilate common experience and general philosophical reflections on this experience, to give a single, whole, unfragmented world view.

Chapter 6: Some Whiteheadian Comments by John Cobb, and Response by W. H. Thorpe

John B. Cobb, Jr., is Professor of Theology at the School of Theology at Claremont, Avery Professor of Religion at Claremont Graduate School, and Director of the Center for Process Studies.

W. H. Thorpe is Director of the Sub-Department of Animal Behavior, Department of Zoology, at the University of Cambridge.

Thorpe raises the question whether process thought can illumine the mystery of life and consciousness as evolutionary emergents. Dobzhansky stresses that with human self-consciousness we are dealing with something that is radically new. Both oppose the reductionistic interpretation of these emergent novelties as merely the unfolding of pregiven necessities. Both encourage us to view what has happened with wonder.

Birch, without opposing the sense of wonder, holds that evolutionary development must be seen as embodying a fundamental continuity. This means that what is manifest in higher stages must be continuous with what is present at earlier stages. This does not mean that something very like self-conscious human purpose is to be found in amoebae, but it does mean that there must be some continuity between an amoeba’s response to its environment and the response of higher organisms (including human ones) to theirs. Waddington shows how his revisions of Neo-Darwinian Theory clarify the nature of this continuity by focusing on the interactive relating of phenotype and environment.

Both positions reject mechanism or reductionism as an ultimate truth. Both affirm emergence. One stresses the radical mystery of particular emergents. Birch’s Whiteheadian view stresses the continuity underlying all emergence. The difference in emphasis need not amount to systematic opposition, but it can easily be hardened into it. This hardening occurs to the extent that a particular gap, such as that between self-conscious human experience and that of animals, is asserted to be fundamentally different from all the other gaps to be found in reality. Such judgments seem to demand an ontological dualism of the human and the natural that is incompatible with Whiteheadian process philosophy. Short of this extreme, so long as the gap is seen as one gap among others in a process of emergence, the extent and importance of the gap is a matter of factual investigation. That is, as long as ontological dualism is avoided, the extent of the difference between human activity, subjectivity, and purpose and those of other animals is a subject for detailed investigation to which process philosophy is entirely open.

If those who fear the stress on continuity wish to make a case against it, they need to define more precisely where that gap occurs which they regard as inexplicable in terms of continuities. Thorpe’s paper is instructive in this regard in that it repeatedly witnesses to the variety of places where significant emergence is found. Although he stresses gaps, and focuses on two, he also testifies to continuities and refers to many gaps. In this respect his paper is highly congenial to the process perspective.

If process thought is to help, it must clarify both what the continuities are and what novelties are introduced through emergence. Whitehead’s suggestion is that all entities whatsoever are understood better as organisms interacting with their environments (composed of other organisms) than as self-enclosed entities passively shaped by external forces. All organisms both take account of their environments and act upon their environments. The taking account involves an element of receptive subjectivity. The action involves aim at some immediate attainment and at altering the situation in some way. Hence subjectivity and purpose in rudimentary and unconscious forms are characteristics of nature generally. The evolutionary process is one in which new forms of order make possible more complex organisms in which both receptivity and action are enlarged in scope.

Whitehead’s theory of consciousness illustrates the way in which he conceives of fundamental emergents, or threshold crossings, in a process that also has a basic continuity. As Birch notes, what is required physiologically for consciousness to emerge is a specialization of cells leading to a central nervous system with sense organs oriented to messages from the external world. Whitehead adds that what this form of bodily organization makes possible is the concentration of complex and novel information in one portion of the body, namely, within the brain. Where complex features of the environment are thus internalized and these internalizations are brought into intense interactions, a series of events becomes possible that integrates selected aspects of this material at a new level. These events Whitehead calls ‘the final percipient occasions’ or ‘the dominant occasions.’ It is these occasions of which we have immediate knowledge; or more accurately, the experiences we speak of as ours (both conscious and unconscious) are these occasions. The basic structure that makes these dominant occasions possible emerged with the development of the central nervous system in animals, and where this structure is present, it is reasonable to posit, as Thorpe does, that consciousness is present to some degree.

Consciousness is an aspect of feelings belonging to the dominant occasions in animals with nervous systems of some order of complexity. In all probability consciousness is lacking in all other feelings. Philosophical explanation calls for a more precise statement of the feature of feeling that allows for the emergence of consciousness. Whitehead’s answer to this question is technically developed in terms of ‘propositional feelings’ and ‘intellectual feelings.’ I shall offer a non-technical account that may prove suggestive.

Whitehead believes that consciousness is a feature of feelings which contrast what is felt with what might have been felt. We are not conscious of feelings that are constant unless by an unusual imaginative leap we are able to consider the possibility of their absence. This is why metaphysics is so difficult. It consists in an account of what is always and everywhere necessarily occurrent, whereas all our ordinary attention is directed to what differentiates one situation from another. Similarly, if our visual experience were completely homogeneous in terms of color, we would not be conscious of that color. As it is, if we were on some occasion exposed only to a particular shade of red, we would still be conscious of it, because we would compare what we saw with other colors we had seen in the past. Whitehead thinks that even low grades of conscious feeling require this contrast of what is felt with what is not felt, or better, of what is with what might be.

The note of possibility is thus indispensable to consciousness. Only subjects capable of bringing contrasting possibilities of some sort to bear upon present perception are conscious. That requires that there be not only a concentration of information of the sort the nervous system offers in the brain but also memory. The qualities given in past experience must be contrasted with those given in the present. The introduction of memory brings us one step further into the analysis, but we will have to back up a bit to grasp what is distinctive of memory.

The ordinary way in which nature achieves order through time is by means of repetition or re-enactment. The characteristics of one event are inherited by its successor which in turn transmits them very little changed. A route of such occasions is in Whitehead’s language an ‘enduring object,’ and the ordinary physical objects of the world are built up of such enduring objects. These provide for order and predictability, but the occasions in non-living enduring objects cannot achieve much value or intensity. They have to trivialize almost all of the potential offered by the past in order to maintain intact one route of dominant inheritance.

There are, however, occasions that, instead of almost totally repeating past characteristics, achieve significant novelty. They respond to their data in such a way that they can incorporate more complex elements and still achieve the unity needed for actuality of any sort. These are living occasions. The novelty they incorporate provides for this inclusion of more variegated elements, but it also tends to disrupt the continuities and orderliness that are equally needed for further achievements.

In the dominant occasions, Whitehead believes, novelty and order reach a new synthesis. For, they are living occasions and yet they can be ordered to a greater or lesser degree into enduring objects. Hence, each of these occasions receives data not only from the events transpiring outside the body through the nervous system but also from preceding dominant occasions. A succession of these occasions emerges with a definite pattern of relatedness. However, unlike ordinary enduring objects the succession is not primarily a matter of repetition of qualities in one occasion after another through long stretches of time. Instead, the successor also remembers or prehends what was novel and creative in its predecessor, and what is novel and creative in its own feelings is transmitted to its successor.

It seems likely that only occasions of considerable complexity and vitality could profit from the novelties in their antecedents in this way. Once this becomes possible then a rich storehouse of memories is available to bring into comparison with present perceptions. (E.g., the scent of a predator is noticed by its contrast with preceding olfactory experience in which that scent was lacking. The animal is conscious of the new scent.) The point being stressed here is that conscious experience is a radically new emergent in the evolutionary process, and required and still requires an extremely complex, even awe-inspiring set of conditions; and yet it emerged and still emerges out of entities which are not totally different in kind. Lower grade events or ‘occasions’ constituting the life of a cell illustrate the same characteristics as conscious events or occasions, but in a radically lesser degree.

This account of the physiological-psychological grounds of consciousness deals only with one of the many astounding stages of evolutionary development. Beyond it is the human self-consciousness to which Dobzhansky especially calls our attention. But, in Whiteheadian perspective, the explanatory description of each stage prepares the way for the understanding of the subsequent stage without in any way showing that the subsequent stage is necessitated by its antecedents. The wonder remains, but the novel emergence at each level is seen as made possible by and as continuous with the many earlier stages of emergence. (For a speculative account of a variety of emergent stages in human pre-history and history among which the rise of modern self-consciousness is one, see my The Structure of Christian Existence, Westminster Press, 1967.)

One of the great gaps often noted between the human species and other animals is that human purpose is a factor in shaping events on the planet, whereas pre-human evolution is interpreted without reference to purpose. This seems to justify a dualism that is antithetical to process thought. The duality can be somewhat reduced by considering how blind are many of the processes that shape human history, but it is not the intention of process philosophy to deny human purposes an important role. The question is instead whether evolutionary theory has been correct in excluding animal purpose altogether from the explanation of biological evolution generally. Is it not rather the case, as Teilhard pointed out, that "what we call evolution develops only in virtue of a certain internal preference for survival?" (Science and Christ, Harper, 1968, p.212.)

Waddington’s paper illustrates how a Whiteheadian vision of organisms interacting with environments takes account of a purposive element throughout the evolutionary process. This is not, of course, a purpose for the process as a whole or even for any long-range goals at all. The ability to act in terms of far-reaching goals appears flickeringly among human beings and, so far as we know, nowhere else. But animals act intelligently in their quest for food and, in doing so, modify their environments. Evolutionary theory needs to take account of the interaction between short-term purposive behavior on the part of animals and the survival value of particular characteristics.

Whitehead in this respect as in others provides a rigorous ontological grounding at the microcosmic level for the macrocosmic phenomena studied by biologists. This will appear more fully in the papers by Griffin and Overman in Part Four. However, it can be noted briefly and less technically here.

In the past, when purpose has been introduced as a category into evolutionary thinking, it has been attributed to animals as complex organisms enduring through time. Even in the understanding of human ethical behavior this kind of view has proved unsatisfactory. To describe any complex event or pattern of behavior as guided by a single purpose is always to abstract radically from the concrete course of events. The novelist more accurately shows us through a detailed account of the succession of occurrences how the result came about in a way that no one had purposed. But this does not mean that purposes were not factors in the events. As attention is focused on more and more limited sub-events, one finds it possible to see why, in just that situation, a person acted as he or she did. The action makes sense, that is, it conforms to the actual felt need of the moment. To explain it is to show how the agent concretely experienced the situation rather than to show what, from a more objective vantage point, the situation actually was. How the agent experienced the situation can be explained causally in terms of antecedent events. But the action is directly determined not by these antecedent events as such but by the aim, in the situation so perceived, to achieve something. Otherwise it is not an action at all.

There are many factors present in the perception of a situation by a human being that cannot be attributed even to the higher animals. Also, a larger part of animal behavior may rightly be interpreted as reflex action. But despite these important differences, the situation is often similar. Animals act in any moment in terms of how they perceive the situation. This perception characterizes the final percipient or dominant occasion. Why they perceive the situation as they do can be explained largely in terms of efficient causes or antecedent events. But the action that responds to that perceived situation is not determined by those conditions but by the purpose or aim to which the perception gives rise. Through the bodily action precipitated by the purposes of the momentary dominant occasions within the organism, the actual situation is changed as well as the perceived situation for subsequent dominant occasions. Thus purposive animal behavior alters the situation to which future animal behavior must be adapted. This also alters, as Waddington shows, the evolutionary selection of phenotypes and, indirectly, the genetic factors that prove most adaptive. Hence, the many purposes of individual events, if not some encompassing purpose, do constitute a factor in evolutionary development.

RESPONSE TO COBB’S COMMENTS

By W. H. Thorpe

I feel that the comments by Cobb are lucid and helpful. But there are one or two points concerning which it seems needful to argue further here.

The first of these concerns Cobb’s question "whether evolutionary theory has been correct in excluding animal purpose altogether from the explanation of biological evolution generally." To this I would reply that for a long time now many biologists have readily accepted the possibility, if not the virtual certainty, that purposes, in the form of choices made between alternative situations, may indeed have played an important part in canalising in certain directions the selective forces acting on the stock in question.

I myself discussed this in 1951 in the first chapter of my book, Learning and Instinct in Animals, and in a number of writings since. More recently Sir Alister Hardy has put forward such ideas very cogently (1965). Much of Waddington’s writing also impinges very significantly on this topic.

But it seems to me essential to be as precise as possible as to the evidence of purpose.

While the evolutionary biologist might agree that no purpose can be discerned in the physical universe prior to the stage at which evolution in the biological sense commenced (that is to say, the stage at which there appear entities which are born, reproduce and die, and in so doing are subject to natural selection), yet he might argue that evolution by natural selection automatically provides the ‘purpose.’ That is to say, he might argue that natural selection inevitably injects something into the cosmos which appears to us as purpose. In other words, once you have a selective mechanism which ensures that forms which produce more offspring and tend to last longer become more numerous, then you have the directiveness which is characteristic of biological and ultimately of man-made mechanisms. Thus it is meaningless to ask the question: What is a physical system such as a nebula, an atom or a solar system for? On the contrary, it is always meaningful to ask of a mechanism, whether a biological mechanism or a man-made mechanism, "What is this for?" The evolutionary biologist might cite the view of Bertalanffy (1952) that organisms are open systems which display equifinality. By this is meant that organisms are systems which (exchanging materials with the environment) attain a steady state, which is then independent of the initial conditions. "The directiveness which is so characteristic of life processes that it was considered the very essence of life, explicable only in vitalistic terms, is a necessary result of the peculiar system-rate of living organisms, namely that they are open systems." And today, twenty years later, we can be more precise and say that living organisms accumulate, store, and process information. They are thus not merely internally programmed but, having internal self-representation (as in the DNA of the nucleus), are self-programming. (For recent further discussion see Thorpe 1974, Chapter 1.)
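Bertalanffy's equifinality admits a simple numerical illustration (a minimal sketch, not part of the original argument; the equation, parameter values, and function name are illustrative assumptions): an open system obeying dx/dt = inflow − k·x relaxes to the same steady state, inflow/k, from widely different initial conditions, which is precisely the independence from initial conditions described above.

```python
# Minimal sketch of Bertalanffy's 'equifinality': an open system
# dx/dt = inflow - k*x approaches the same steady state (inflow/k)
# regardless of where it starts. Parameters here are illustrative.

def relax(x0, inflow=2.0, k=0.5, dt=0.01, steps=4000):
    """Euler-integrate dx/dt = inflow - k*x starting from x0."""
    x = x0
    for _ in range(steps):
        x += (inflow - k * x) * dt
    return x

for x0 in (0.0, 1.0, 10.0):
    # All three trajectories end up essentially at inflow/k = 4.0.
    print(x0, round(relax(x0), 3))
```

A closed (conservative) system, by contrast, retains the imprint of its initial state; it is the continual exchange with the environment, modelled here by the inflow and decay terms, that produces the characteristic "directiveness" toward a fixed end state.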

From the philosophical point of view, the central problem of ethology is the relation between purposiveness (‘purpose’ here has the usual meaning -- a striving after a future goal retained as some kind of an image or idea) and directiveness. All biologists agree that the behavior of organisms as a whole is directive, in the sense that in the course of evolution some at least of it has been modified by selection so as to lead with greater or less certainty towards states which favour the survival and reproduction of the individual. All machines are also directive in the sense that their parts have been designed or selected so as to behave in a particular way whenever activated by an external source of power. But not even the most elaborate machine, such as a computer, is purposive. So for the ethologist the question is, "How much, if any, of the animal’s behavior is purposive and what is the relation of this behavior to the rest?"

In human perception, as H. H. Price (1932) has shown, the very idea of a material object is dependent upon an element of anticipation. He says, "every perceptual act anticipates its own confirmation by subsequent acts." A. N. Whitehead (1929) considers the act of perception as the establishment by the subject of its causal relation with its own external world at a particular moment. Whitehead argues that every vital event, in fact, involves a process of the type which, when we are distinguishing between mental and material, we describe as mental -- the act of perception. A very strong case is made by W. E. Agar (1943) for the theory that a living organism is essentially something which perceives. Therefore some element of anticipation and memory, in other words, some essential ability to deal with events in time as in space is, by definition, to be expected throughout the world of living things.

All this, so far as it goes, fits in well with modern Whiteheadian conceptions; as I should be the first to agree. But I still feel doubtful as to the general value of Whiteheadian theory as a guide for the research biologist, except insofar as it encourages him to doubt the reliability of Lloyd Morgan’s ‘canon’ as the sole guide to research at the present day. This is, admittedly, a very important matter and my own desire to investigate the ‘higher’ and more complex aspects of animal behaviour, and not to rest content with Lloyd Morgan’s injunction, may well have been due to my early reading of Whitehead. Lloyd Morgan’s insistence on never adopting a complex theory or formulation for a given behaviour when a ‘simpler’ (usually a more ‘mechanical’ or more physiological one) would suffice was a most valuable warning at a time when, following Romanes and other naturalists of the period, strongly anthropocentric attitudes were so absurdly rampant. Nowadays the study of perceptual synthesis, of memory, of ideation, of insightful problem-solving and of the complexities of motivation in animals, has reached a point at which the exact opposite of Morgan’s strategy often seems more promising.

For myself I nevertheless find the discontinuities in nature so great and so obvious that I stick to the dualist position as an essential attitude of mind -- I am, so to speak, a pragmatic dualist. But I can say with certainty that if I ever become a monist, it will be as a monist of the Whiteheadian type!

I will end by quoting a characteristic remark by a very great zoologist. D’Arcy Thompson, in the introduction to his great work, On Growth and Form, has much to say on this and kindred subjects, which biologists, psychologists and philosophers would do well to read and to re-read. One sentence runs: "Still, all the while like warp and woof, mechanism and teleology are interwoven together, and we must not cleave to the one nor despise the other; for their union is rooted in the very nature of totality."


REFERENCES

Agar, W. E. 1943. The Theory of the Living Organism. Melbourne University Press.

Bertalanffy, Ludwig von. 1952. Problems of Life. London: Watts; New York: Wiley.

Hardy, Sir Alister. 1965. The Living Stream: Evolution and Man. New York: Harper and Row.

Price, H. H. 1932. Perception. London: Methuen and Company, Ltd.

Thompson, D’Arcy W. 1942. On Growth and Form. Cambridge University Press.

Thorpe, W. H. 1951, 2nd ed. 1963. Learning and Instinct in Animals. London: Methuen; Cambridge: Harvard University Press.

Thorpe, W. H. 1974. Animal Nature and Human Nature. New York: Doubleday (Anchor); London: Methuen.

Whitehead, A. N. 1929. Process and Reality. New York: Macmillan.

Chapter 5: The Process Theory of Evolution and Notes on The Evolution Of Mind by C. H. Waddington

C. H. Waddington taught and did research at the Department of Genetics in the University of Edinburgh. He died in 1975.



The Theory of Evolution has undergone rapid changes during this century. Some of these changes seem to me to be drastic enough to qualify as alterations of paradigm in the sense of Kuhn. Certainly the change at the beginning of the century was of that nature. Previous to that time any theories of heredity which existed -- there was nothing very highly developed -- were in terms of statistical aggregates. For instance, Pearson and Galton had some sort of theories based on the resemblances and differences between collections of individuals standing in different relations to one another, as sibs, cousins and so on. The introduction of Mendelism changed the emphasis entirely from a consideration of populations in statistical terms to a consideration of the offspring resulting from individual crosses between identified individual parents. It was by experiments of this kind that genes were identified and the process of gene mutation discovered. Evolution was not of major interest to most of these biologists, but insofar as they had a theory of it, it was a theory in terms of mutations of individual genes, carried by individual organisms and submitted to natural selection. It was not until some two decades later that people started seriously to consider evolution, when of course it became clear that they had to think in terms of populations of individuals. However, the first workers in this field, such as Haldane and Fisher from the theoretical point of view, and biologists such as Timofeef-Ressovsky, Dubinin and others, in practical field investigations, were still thinking mainly in terms of individual genes. This was the phase of evolutionary thinking about which the term neo-Darwinist was first used.
However, this first phase fairly rapidly was superseded by a second, in which Sewall Wright and Theodosius Dobzhansky were the two key figures, both of them insisting on the importance of thinking of populations of many genes, as well as populations of many individuals. This can properly be called neo-Darwinism of the second and fully developed kind.

A characteristic of all this neo-Darwinist thinking is that it effectively did not pay any attention to the phenotype. In its mathematics, the selection coefficients are attached to genes or genotypes. Now, selection of course does not act directly on genes or genotypes, but on phenotypes, and in the formation of these phenotypes environmental factors have an influence as well as genetic ones. I have for many years been trying to introduce a new paradigm which takes the phenotype seriously. Charles Birch seems to think this can also be lumped in together with neo-Darwinism, and speaks of it as part of the orthodox modern theory of evolution. I should like to see it accepted as orthodox, but I do not believe it is as yet. Very few other authors, with the exception perhaps of Schmalhausen and a few of his students, have done anything more than pay the merest lip-service to the idea that selection operates on phenotypes. The conventional view surely still is that ‘acquired characters,’ since they are not inherited by individuals, have nothing to do with evolution. Anyone who suggests anything else is dismissed by many leading biologists, such as Monod and Luria, as a Lamarckist if not a Lysenkoist.

This rejection persists because people persist in thinking in terms of individuals rather than of populations. If an individual acquires a character during its lifetime, that does not increase the probability that its offspring will exhibit the character; but if the development of some members of a population is affected by the environment in ways which improve their chance of leaving offspring, this will obviously increase their contribution to later generations, that is to say, their natural selective value; and the frequency of that character in later generations will be increased, not by any physiological or genetical change, but by the operation of selection. In fact we can say that natural selection will favour organisms which acquire useful characteristics.

Now, if one combines this simple fact with a well-known but rather more unexpected one, namely that developmental processes are difficult to alter, one finishes up with a very powerful evolutionary mechanism. Say we have a population of animals which has to meet some new challenge offered to it by an altered environment; there may be some individuals in the population whose development is changed by the environment in a way which makes them better able to deal with the challenge -- they show a capacity for adaptive modification. They will, therefore, be favoured by natural selection. After this selection has gone on for some considerable number of generations, the new pathways of development will have gradually acquired a more pronounced chreodic character, which can itself be difficult to modify. In fact, should the environment now revert to what it was before the whole process started, the organism may well go on developing in the way in which it adapted to the changing environment. This process, which I have called genetic assimilation, gives exactly the same end-result as the theory proposed by Lamarck, at one time espoused by Darwin but rejected by modern biology, of the direct inheritance of acquired characters.
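This two-step mechanism -- selection of adaptive responders, followed by increasing fixity of the favoured pathway -- can be caricatured in a few lines of code. The model below is my own illustrative sketch, not Waddington's: the threshold, environmental ‘boost,’ and selection figures are all arbitrary, and a single number stands in for a whole developmental pathway.

```python
import random

def simulate_assimilation(generations=200, pop=400, seed=1):
    """Toy model of genetic assimilation; every number is illustrative.

    Each individual carries a heritable liability g. The character
    develops when g plus an environmental boost crosses a threshold.
    Selection keeps the strongest responders, so g creeps upward until
    the character develops even after the environment reverts."""
    rng = random.Random(seed)
    threshold, env_boost = 1.0, 0.6
    genotypes = [rng.gauss(0.0, 0.2) for _ in range(pop)]
    for _ in range(generations):
        # Rank by phenotype (genotype + environmental effect) and keep
        # the top quarter as parents -- crude truncation selection.
        ranked = sorted(genotypes, key=lambda g: g + env_boost, reverse=True)
        parents = ranked[: pop // 4]
        # Offspring inherit a parent's liability plus a little noise.
        genotypes = [rng.choice(parents) + rng.gauss(0.0, 0.05)
                     for _ in range(pop)]
    # Environment reverted (no boost): fraction developing it anyway.
    return sum(g > threshold for g in genotypes) / pop
```

With the environmental boost removed at the end, nearly the whole population develops the character unaided -- the same end-result the direct inheritance of acquired characters would give, reached by selection alone.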

The fact that, by a slightly more sophisticated development of modern biology, we have come across an evolutionary mechanism which can produce the same effect is, I think, not without importance in relation to mind. It shows how a behavioural pattern, which in an earlier evolutionary stage emerged only in the actual presence of a certain environmental situation, might, if it was selected as useful over many generations, become habituated to a chreodic developmental pathway which would operate even in the absence of that environmental situation. It is because of the existence of this mechanism that I am not alarmed by the suggestions, of people like Chomsky, that man has an ‘innate’ capacity for the use of language. If at an early stage in his evolution it was useful for an individual to be able to adapt to a language-using community, i.e., to learn language as fast as possible, selection for this capacity might well have brought about a genetic assimilation of at least the bases for what had originally been only a learned adaptive response.

Another great oversimplification of the neo-Darwinist statement of evolution is its implication that the thing being selected (the genotype according to the conventional statement, the phenotype as I suggest) finds itself inevitably subjected to certain selective pressures arising from ‘its environment.’ In fact most higher organisms select their environment before they allow the environment to select them. Release a hare and a rabbit in the middle of a field; the rabbit will run off to the hedge and live its life there, while the hare will be content to live its life in the open field. Even plants, in a more rudimentary way, make some sort of selection of the circumstances in which they will develop. If a seed falls on stony ground in the desert, it simply refuses to germinate until the next shower of rain comes along and gives it an environment at least somewhat appropriate to its needs. However, there can be no doubt that this reciprocal relationship of mutual feedback, between an organism which selects an environment and an environment which then selects the most efficient organisms, assumes greater importance as we go higher up the evolutionary scale.

In particular, I think it must be of crucial importance in the evolution of behavioural patterns and of anything which we might call a mind. For instance, at some point in their evolutionary history, the ancestors of horses began to eat primarily the grasses of open plains, and not for instance the leaves of shrubs. They then had to deal with the possibility of being attacked by carnivores, and they came to deal with this threat by running away rather than by standing on their hind legs and trying to fight off the attack with their front feet, as giraffes do, for instance. One might somewhat figuratively say that they had ‘chosen’ to inhabit one type of environment rather than the other, and to adopt one type of strategy against predators rather than another. But, of course, that mode of expression should not be taken to imply a conscious process of choice. However, once evolution had started to go in those directions, this defined the character of the natural selection that would be exerted, and evolutionary changes went on in the same direction for a very long period. The mind of the horse has evolved into that of a plains-dwelling fleet-footed animal, which runs away from its enemies. The mind of the buffalo, on the other hand, is that of a plains-dweller which faces its enemies and charges them.

Such types of animal minds, evolved in relation to a reciprocal interaction between the selection of environment by animal and of animal by environment, are what we refer to rather crudely as instincts. An instinct is a pattern of behaviour which is to a major extent dependent on the hereditary constitution of the animal. It is a mistake, however, to think that it is in all cases wholly dependent on the genetic constitution and that the environment plays no role in shaping the behaviour. I will mention only one example which illustrates two ways in which the environment is important in the development of instinct. Weaver birds build elaborate nests consisting of a completely enclosed nest chamber, approached through a tubular entrance. Each species of weaver birds builds nests of a different shape. I do not know why different species should adopt differently shaped homes, but the fact that they do shows that there is a very strong hereditary element in their behaviour. However, birds build a better finished, and more competently constructed, nest in their second year than they do in their first. There is, therefore, an element of learning involved. Consider the problem of a bird approaching a half-finished nest. It has got to decide just how to weave the piece of straw in its beak in amongst the other pieces of straw. It has been found that there are certain kinds of weaving stitches which it can do, but it never, for instance, ties a proper knot. However, it has always to discover some way of adapting the particular types of weaving process at its command to the particular circumstances which confront it. This involves highly adaptive behaviour -- much more adaptive to the environment than one might imagine if one simply wrote the instincts down as hereditary.

We may say that instinctive behaviour is behaviour related to a rather well-defined goal, but often demanding a more flexible adaptive type of behaviour, including the possibility of learning from experience, in deciding exactly how that goal shall be reached. I myself should not refuse to use the word mind in connection with organisms which showed this type of behaviour. The main point I should like to emphasize is that, in such cases, the goal towards which the instinct drives has certainly not been decided by any conscious choice of the organism, but by this subtle evolutionary process of natural selection within a framework which has been set by the previously existing instinctive behaviour.

But if an animal behaves in accordance with one definite unalterable goal, how much of a mind would we be inclined to attribute to it? Surely, we would not think it was being very clever. In fact, we might be tempted to say it was indulging in ‘mindless repetition.’ We would be much more tempted to think the animal had a worthwhile mind if it had at least two goals, and followed one or the other in appropriate circumstances.

Thus the evolution of the mind must involve not only the formation of a goal, but also the development of alternative goals, and the ability to pick the appropriate goal under particular circumstances.

The problem of mind -- of being intelligent -- at this level is not only to find new ways of attaining already accepted goals; it also puts a premium on the still greater flexibility of discovering new goals.

I will tell the tale of the evolutionary origin of the birds -- or rather, one of the more plausible tales, because the experts have not yet quite decided exactly how it did happen. But one of the ways it may have happened concerns a group of little reptiles, rather like lizards, which had larger hind legs than forelegs, and which normally ran about on these back legs. Suppose they started using their forelegs to work up speed when they were running away from a nasty bigger reptile who was trying to catch them. And suppose the scales on the arms grew longer, into something a bit like feathers, to help them get the benefit of beating the air effectively to push them along. And natural selection pushed this development further until, one day, some of them found themselves taking off and becoming airborne. It must have been very disconcerting; they probably ran the risk of crashing in considerable disorder, and getting gobbled up. But a really clever little lizard, full of mind, must have said "Hey, we’ve got something here," and set about finding how to fly. To attain this brand new goal, he may have had to change quite a lot of his previous routines; for instance, beating his arm-wings in unison instead of one after the other in time with his legs. In order that such an evolution could be possible, his mind had to be able to do two things. It had to be able to reorganize itself around a new sub-goal, to fly, within its old main goal, to escape; and it had to be able to re-arrange its detailed activities so as to achieve this new sub-goal, to change the timing of its arm movements, for example.

Finally, one might ask the question, out of what kind of stuff is mind constructed? Recently, even those who accept physico-chemical entities as a basis of all scientific knowledge have realized that something more may be involved in them than the properties of mass, energy, etc., attributed to them in classical theory. This further component might be referred to as ‘specificity’ of spatio-temporal configuration. In the last twenty years or so, mathematicians and engineers have attempted to replace the rather undefined term ‘specificity,’ which had been much used by biologists earlier, with a more precisely defined notion of ‘information.’ Unfortunately, in order to achieve a precise definition capable of being utilized in a mathematical logical system, they have ‘purified’ the notion until it has become almost useless in connection with biology, or indeed in almost all contexts except that of messages -- which was the main business of the Bell Telephone Laboratories in which the originator of the theory, Claude Shannon, was employed. ‘Information,’ as it emerged into the world of mathematics, is a measure of the degree of selection which has been employed in choosing some particular configuration out of a closed universe of possible configurations. It is concerned only with the specificity within a particular universe of possible specificities. For instance, the amount of ‘information’ contained in the letter A is less if it is chosen out of the English alphabet of 26 characters than if it is chosen out of the Russian alphabet with 29. Moreover, the amount of ‘information,’ in this sense, has nothing whatever to do with bringing about any action outside the closed universe; that is to say, it has nothing to do with ‘meaning,’ in any sense of that term. The information content of a message written in English words is just the specificity of the string of letters in which the words are spelt. Consider the two messages:

MEET HIGH MARKET TWELVE TEN

MEAT HIGH MARKET TWELVE TON

The differences in ‘information’ are simply that the third letter from the beginning is an E in one and an A in the other, and the penultimate letter is E in one and O in the second. ‘Information’ Theory has nothing whatever to say about the fact that the first is obviously about an appointment to meet at the corner of High Street and Market Street, and the second is a message from a wholesaler that the stocks are going off and had better be got rid of as quickly as possible.
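This purely selective sense of ‘information’ can be made concrete in a short sketch. The function name and the assumption of equiprobable symbols are mine; the alphabet sizes and the two messages are those given in the text.

```python
import math

def selection_information(alphabet_size):
    """Bits needed to single out one symbol from a closed alphabet,
    assuming all symbols are equally likely (the simplest case)."""
    return math.log2(alphabet_size)

# The letter A carries less 'information' chosen from the 26-letter
# English alphabet than from the larger alphabet.
bits_english = selection_information(26)   # about 4.70 bits
bits_russian = selection_information(29)   # about 4.86 bits

# 'Information' theory sees only the positional differences between
# the two messages, nothing of their meaning.
m1 = "MEET HIGH MARKET TWELVE TEN"
m2 = "MEAT HIGH MARKET TWELVE TON"
differing = [i for i, (a, b) in enumerate(zip(m1, m2)) if a != b]
# differing picks out exactly two positions: the third letter
# and the penultimate one.
```

Nothing in the computation distinguishes an appointment from a warning about spoiling meat; only the positions of the differing letters register.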

This limitation in the meaning of ‘information’ made it possible to develop a mathematical theory which is very useful in connection with transmission of messages along channels, but effectively ruined it as a word which is useful to apply in wider contexts. Rather unfortunately, the mathematical theory assigned, to the measure of ‘quantity of information,’ a formula which was identical in algebraic form with one of the most famous formulae of thermo-dynamics, namely that for entropy. This at first led Shannon to identify the amount of information given out by a source with its entropy. Later Warren Weaver developed an alternative interpretation, that the quantity of information contained in a message is the negative of its entropy. It was Weaver’s rather than Shannon’s interpretation which became fashionable, and the new word ‘negentropy’ was invented to mean ‘quantity of information or negative entropy.’
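For readers who want the formula itself: the quantity in question is Shannon's entropy of a source, H = -Σ p_i log₂ p_i. The sketch below is my own, applied naively to single-letter frequencies of one of the messages quoted earlier; the sign flip in the last line follows the Weaver convention described above.

```python
import math
from collections import Counter

def message_entropy(message):
    """Shannon's H = -sum(p_i * log2(p_i)) over symbol frequencies --
    the formula whose algebraic form matches thermodynamic entropy."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h = message_entropy("MEET HIGH MARKET TWELVE TEN")
# Weaver's convention, as described above: information as the
# negative of the entropy.
negentropy = -h
```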

The relevance of all this is that there is no doubt that reactions in living systems are very much concerned with the specificity rather than the mass or energy of the components. It is the specific arrangement of nucleotides along the chain of DNA which determines what that gene will do; it is the specific shape in three dimensions of a protein molecule which determines what sort of enzyme activity it will exhibit, and there are many other examples. For a time it became fashionable to discuss this sort of specificity in terms of negentropy, and some of the most penetrating minds, when they turned from physics to biology, were deceived for a time. Thus Schroedinger, in his elegant essay, What is Life? in 1944, indulged in aphorisms such as ‘life feeds on negentropy.’ However, he soon came to realize that this is an inadequate way of looking at the situation, and he withdrew or at least greatly qualified the remark in the later editions of his book.

The main point is that the specificity with which biology is so deeply concerned is not a static specificity, with no meaning outside itself. It is rather the possibility of bringing about, or tending to bring about, a certain type of activity in appropriate things which react with it. It is, in fact, a specificity of instruction, the imparting of one particular program, or algorithm. Several authors, including, for instance, H. C. Longuet-Higgins, insist that language has basically to do with programs or instructions, rather than with imparting descriptions from which nothing follows.

Of course, the word ‘information,’ as it is used in ordinary speech, often has some implication that the information will be useful as a guide to action. But it is pretty ambiguous in this context. In fact, during World War II, there was a useful distinction made in the slang of the RAF, which distinguished the ‘info,’ a lot of boring rigmarole about useless facts, from the ‘gen,’ the real stuff you needed to know to tell you how to operate. When we say that biological systems work by means of the programs or instructions incorporated in their components, this is a long-winded way of saying that it’s the gen, not the info, that matters for them. It is not negentropy they feed on, but it might have made some sort of sense to call it gentropy, if I may coin an unnecessary word.

It is not only biological systems that feed on gen. There are some physico-chemical systems, which no one would dream of calling living, which very clearly do so too (possibly they all do, but I will not pursue this point here). Consider the minerals making up that, at first sight, boring material, clay. They have been discussed in some detail from this point of view by Cairns-Smith in his book, The Life Puzzle. Clay minerals consist of crystals in which atoms of silicon, oxygen and a number of metals, such as aluminium, iron and various rarer ones, are arranged in a three-dimensional lattice. The lattice is such that at any given time in the growth of the crystal its boundary is a flat two-dimensional plane, with a particular arrangement of these atoms at certain points on it. Now, the forces at work are not terribly choosy about which particular atom goes into which place. At one particular point on the surface there might be an atom of aluminium or alternatively there might be an atom of iron, or some other substance. "Ha!" the information theorists will say, "This surface can encode a great deal of ‘information’." So it can, but the point is that this is not mere info, it is gen. If there is iron instead of aluminium at point X, and the crystal is in a solution which allows it to grow by the deposition of a new layer of atoms on top of the old one, it is much more likely that another iron atom will take this place in the lattice of the next layer. The presence of iron at X is an instruction for building the next layer.
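Cairns-Smith's point -- that the surface layer is an instruction, not a static description -- can be sketched as a toy copying process. Everything here, from the fidelity figure to the two-atom alphabet, is my own simplification.

```python
import random

def grow_next_layer(layer, fidelity=0.95, atoms=("Al", "Fe"), rng=random):
    """Each site in the new layer tends to copy the atom beneath it,
    so the old surface acts as an instruction for building the next
    one. The fidelity figure is purely illustrative."""
    new_layer = []
    for atom in layer:
        if rng.random() < fidelity:
            new_layer.append(atom)               # instruction obeyed
        else:
            new_layer.append(rng.choice(atoms))  # occasional substitution
    return new_layer

# Grow ten layers from a seed pattern; the pattern largely propagates,
# with the rare substitution then inherited by later layers.
crystal = [["Al", "Fe", "Al", "Al", "Fe"]]
for _ in range(10):
    crystal.append(grow_next_layer(crystal[-1]))
```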

Whatever we imagine the first living systems to have been like, they must have been even more deeply involved in a traffic of instructions. Any type of hereditary material, be it DNA or anything else, which can be transmitted from one ancestral system to two or more daughter systems, must in effect contain instructions for its own copying. Moreover, in all the living things as they are on this earth, the copying is carried out by mechanisms, such as enzymes, which operate by means of instructions built into them. Finally, systems which we consider worthy candidates to be granted the name ‘living’ differ from things like clay minerals in that they contain instructions, not only for copying, but for the elaboration of structures which can actively operate on surrounding materials. These new embodiments are what geneticists speak of as the phenotype. The crucial role of instruction-generated phenotypes as a fundamental aspect of living systems has been a dominant theme in recent discussions of the theory of general biology (see the four volumes entitled Towards a Theoretical Biology, edited by C. H. Waddington, Edinburgh University Press).

The early stages in the evolution of life, therefore, involve not only physico-chemical mass, energy, atoms and so on, but also specific instructions. We find the firmest evidence of mind when we look at the other end of evolution, as in our ‘occasions of experience,’ and we are again, of course, fundamentally involved in a traffic of instructions. A knower does not merely sit down before the known and observe it without consent or response. On the contrary, he brings to it certain predispositions, or interests, and observes certain characteristics more than others. The content he finds in the occasion demands a response. As Popper has put it, the ‘prior knowledge’ with which he comes to the occasion is such that what he receives from it is not mere information but instructions or challenges.

In the light of this discussion, the evolution of mind appears as a transition from the instructional traffic involved in the very simplest living things, or even in the pre-biotic systems such as clays, to the much more complex traffic of instructions involved in our own occasions of experience. We can see two ends of the evolutionary range in similar terms. We have evaded the dilemma of considering the beginning of the evolutionary process as depending on nothing but atoms, forces and physicochemical factors, and the other end as involving something of a totally different character we call ‘mind.’ One recent author who has advanced a similar view is Stephen Black. In his book, The Nature of Life, he also draws attention to the importance of instructional traffic in all the processes of life (unfortunately he has not escaped from the fashionable convention of speaking of information when what he really means is instructions). His next step, however, is to expand the use of the word ‘mind’ to cover the whole range of situations involving instructional traffic from the very simplest to the most complex. This is hardly satisfactory, since, as we have seen, the simplest such situations occur in things like clay minerals, and it is hardly illuminating to speak of them having minds. When God fashioned us out of clay, he may have picked the right material to start from, but there was still a lot to do. I have briefly discussed earlier in this paper the nature of the evolutionary processes which have led from the simpler situations to the more complex ones.

Note:

The editors asked me to provide some account of points made during discussions about evolution and mind. Since pressure of other work has prevented my writing a special essay on this, I have put together the gist of what I said by taking extracts from my contributions to the Gifford Lectures at Edinburgh University in 1971/2 and 1972/3 (The Nature of Mind and The Development of Mind by A. J. P. Kenny, H. C. Longuet-Higgins, J. R. Lucas and C. H. Waddington; Edinburgh University Press, 1972 and 1973).

Chapter 4: Emergence in Evolution: (Response to Birch and Dobzhansky) by Ann Plamondon

Ann Plamondon teaches philosophy at Loyola University in New Orleans.

Charles Birch has explained Whitehead’s remark that

a thoroughgoing evolutionary philosophy is inconsistent with materialism. The aboriginal stuff, or material, from which a materialistic philosophy starts is incapable of evolution. . . (Whitehead 1925, p. 151)

as expressing the inconsistency of high-level orders such as self-determining organisms (possessing life and mentality) emerging from low-level orders of mechanically determined particles. I think that Birch’s setting of the problem is essentially correct and that his solution in terms of attributing subjectivity to all entities, even particles, does show a necessary condition for emergence in evolution. This comment, therefore, should be viewed as an addition to, rather than as a criticism of, Birch’s paper. The suggestion that I wish to add is that another metaphysical presupposition is necessary for emergence in evolution -- that of internal relations. It seems to me that understanding the role of this principle in emergence is extremely important for two reasons. First, to approach a more complete listing of the metaphysical presuppositions for such emergence. Second, to clarify an apparent disagreement between Birch and Dobzhansky with respect to the nature of emergence.

I

Whitehead maintained the necessity of a doctrine of internal relations for evolution in the continuation of the passage quoted by Birch:

This material is in itself the ultimate substance. Evolution, on the materialistic theory, is reduced to the role of being another word for the description of the changes of the external relations between portions of matter. There is nothing to evolve, because one set of external relations is as good as any other set of external relations. There can merely be change, purposeless and unprogressive. But the whole point of the modern doctrine is the evolution of the complex organisms from antecedent states of less complex organisms. The doctrine thus cries aloud for a conception of organism as fundamental for nature. It also requires an underlying activity -- a substantial activity -- expressing itself in achievements of organism (Whitehead 1925, pp. 151-152).

The full passage shows that Whitehead is basing his claim about the inconsistency of materialism and evolution on the grounds that materialism presupposes a doctrine of external relations and that this doctrine is inadequate to the development of more from less complex organisms. Consider the relationship of materialism and external relatedness. Materialism entails that what a thing (bit of material) is does not depend on its relationships to other things (bits of material); the relationships of a thing are not constitutive of it. (This is the doctrine of ‘simple location’; Whitehead 1925, pp. 69-70). The application of this doctrine to the formation of higher levels of order out of lower levels results in an aggregate view of the higher order. This means that when higher levels are formed out of lower levels of order there is no modification of the lower level to a pattern of the higher level. There is no modification of this kind because such modification requires that the relationships of the lower orders be constitutive of them. Materialism rules out the possibility of such internal relatedness. Yet increase in complexity depends upon such a modification. This is because there can be no real increase in complexity unless there is a new order brought about at the higher level. There can be no new order if what the lower orders are is independent of the relationships into which they enter. The formation of the higher order by their relationships does not bring about a new and independent order at all. The ‘higher order’ is, in a sense, a misnomer. It is an aggregate, and it cannot be said to be of greater complexity than its constituents.

The point I am attempting to make is that a necessary condition for evolution at all is an increase in complexity which cannot be accounted for on a materialistic view. This is an addition to the claim made by Birch (above, p. 14). I am claiming that the difficulty with most neo-Darwinian discussions of evolution is not merely that its mechanisms are inadequate to account for evolution’s ‘qualitative side’ but that these mechanisms will be inadequate so long as they are attached (ad hoc) to a materialistic philosophy. They will be inadequate because the meaning of evolution and the meaning of materialism are incompatible. That is, the difficulty to which I am referring is that the neo-Darwinian theory of evolution, on the whole, has not disassociated itself from materialism.

II

The addition of the doctrine of internal relations as a necessary condition for evolution can clarify three arguments used by Birch to support his claim that low levels of order, such as particles, must have a subjective as well as a mechanical aspect. Consider the following restatements of these arguments in general terms.

1. If an explanation is to be given of actualized subjectivity in higher orders, then subjectivity must be potentially in the lower level constituents.

2. When higher levels of order exhibit properties not belonging to their lower-level constituents, the correct inference is not that something has been added to the lower-level constituents but, rather, that they exhibit different properties when they organize the higher-level order. We know more about the lower-level orders in the sense that we know more about their possibilities for modification when they are situated in different higher orders.

3. ‘Taking account’ belongs to all orders which exhibit such modification. Since this is the principal criterion for subjectivity in highly developed organisms, there is no warrant for refusing to attribute some sense of subjectivity to lower organisms, including the constituent particles.

It seems to me that when the arguments are expressed in this way they provide the core of an answer to Dobzhansky’s question as to the process involved in the realization of potentiality in evolutionary development. At the same time, they avoid the ‘snippet’ fallacy which Dobzhansky seems to accuse Birch of committing. They avoid this fallacy because emergence in evolution is put in the context of acting and not of containing.

These arguments refer to the modification of lower levels of order to the pattern of the higher level which they organize. Because there is an internal relatedness of the higher-level order and the constituent orders, there can be an emergence of new properties. The emergence arises in the act of modification. There is no question of ‘snippets.’ It is not the case with respect to a particular property of the higher-level order that it must have been contained in some sense in the lower-level constituent(s). Rather the property comes about in the act of relating in that particular situation.

References

Whitehead, A. N. 1925. Science and the Modern World. New York: Macmillan.

Chapter 3: From Potentiality to Realization in Evolution by Theodosius Dobzhansky

Theodosius Dobzhansky, after teaching for many years at Columbia University, finished his career by teaching in the Department of Genetics at the University of California at Davis. He died in 1975.

The universe is the product of the evolutionary process. All that was, is, and will be has evolved, is evolving, and will evolve. The inorganic and human evolutions are parts of a single process. Ultimately all evolution is one. It is reasonable to assume that the past evolution has brought about the present, and the present will lay the foundation for the realization of future evolution. If so, the potentialities of the future must have been present in the past. The Big Bang, or whatever it was that launched the development of the Cosmos, contained the potentiality of life, and hence of the biological evolution. The primordial life had the potentiality of evolving mankind, as well as every one of the several million existing and extinct species. Mankind, the human species, is unique in several ways. It has rationality, self-awareness, and death awareness. These and other unique properties of the human species must have been among the potentialities of the primordial life and the primordial cosmic substratum.

The statement that the potentiality of man was contained in the primeval living monad and in the primordial cosmic stuff may seem inordinately audacious. More careful consideration shows that it is really trite. It means no more and no less than that the process of evolution has in fact generated life and engendered man; these are patently true but trivial affirmations! It does not necessarily follow from these affirmations that all matter or all energy have in them some bits of life or protolife, or that the primordial amoeba or the primordial virus possessed some rudiments of human consciousness or some embryonic minds. To have a potentiality to become something does not mean possession of a snippet of that something. Between potentiality and realization there intervenes a process of development or evolution. It is worthwhile to consider at this point some biological illustrations. Animals evolved quite different kinds of organs of respiration -- lungs, gills, tracheae, etc. The ancestral unicellular and primitive multicellular organisms respire through the entire body surface. It is gratuitous to ascribe to them proto-lungs, proto-gills, and proto-tracheae. Mammals and birds arose from reptiles, reptiles from amphibians, amphibians from fish. Yet there is no trace of communication by learned symbolic languages among reptiles, of hair or feathers among amphibians, of the auditory apparatus of land vertebrates, or of legs or wings, among the fish. A zoologist can, to be sure, identify the body parts in the ancestral groups that gave rise to new organs and functions in the derived classes. However, it scarcely makes sense to say that certain bones of a fish skull are incipient ears, or that two pairs of fins in fish have concealed in them the five-fingered appendages of the higher vertebrates. Biological and human evolution are creative processes. This means that they at least occasionally engender novelties. How do novelties arise?

Preformation, Epigenesis, and Creativity

In classical biological terminology, a development may be preformistic or epigenetic. Preformation postulates that the germ has in it a miniature copy of the new organism, or at least of its main components. Preformistic development is essentially growth. A human sex cell was imagined to contain a miniature homunculus, which increases in size until it becomes an embryo, an infant, and eventually an adult woman or man. Evolution may conceivably also be preformed. Etymologically, evolution means the unfolding or unrolling of something that had been present in the ancestral forms or substances. Organisms that evolved were latent but somehow prefigured in ancestral organisms. Evolution was, then, no more than gradual removal of masks and camouflages; a homunculus was hidden in the ancestral amoeba, and then it gradually became open to view.

Epigenesis means the development of something that did not exist previously. Epigenetic evolution is a creative process. To my taste, preformation is less interesting, and I am tempted to say less inspiring, than epigenesis. Being interesting and inspiring is however not a criterion of validity of a scientific or philosophical theory. The theories of preformation and epigenesis must be examined further. First of all, epigenesis does not mean creation ex nihilo. A fertilized human egg cell does not contain a homunculus, but neither is it a structureless drop of viscous liquid. It contains, in addition to nutrient materials, a developmental program encoded in the DNA of the chromosomes. The outcome of the development is, in a given environment, predetermined by this program.

Did the primordial life contain a program of evolutionary developments? Some philosophers and some biologists thought that it did. This led to evolutionary theories called orthogenesis, nomogenesis, finalism, etc. These theories, now mostly abandoned, postulated that the ancestral organisms were programmed to evolve into everything into which they did evolve. But this fails to perceive the basic difference between the individual and the evolutionary developments (ontogeny vs. phylogeny). The ontogeny is so programmed that it either yields an individual of a certain species or nothing. Even so, the programming is not absolutely rigid; in different environments the development proceeds in more or less subtle or even clearly diverse ways. Thus, human development depends, to use somewhat antiquated words, both on nature and on nurture. Evolution is not programmed in the same way as is ontogeny; in fact it lacks a program. This does not mean that the phylogeny is wholly unconstrained or wholly at the mercy of the environments. A mouse is unlikely to evolve into a species of elephants, or an elephant into a mouse. Their organizations are so radically distinct that they could hardly be reconstructed in such ways, even if this were advantageous in some environments. On the other hand, an environmental challenge may be answered by an adaptive modification. ‘May’ rather than ‘must,’ because this depends on many factors, such as the availability of genetic variance on which natural selection can work. Biological and human evolutions come neither from within the organism nor from the environment. They involve creative syntheses of internal and external causes.

Determinism or Freedom?

How much determinism or indeterminism is there in evolution? Was evolution fated from the beginning to produce mankind and every other species? Does evolution sometimes involve origination of radical novelties? Much confusion has entangled these problems, and I can hardly hope to disentangle them all here. The physicist and cosmologist Laplace believed that to an all-knowing intelligence "nothing would be uncertain, the future as well as the past would be present to its eyes." The Laplacean ‘hard’ determinism is now out of fashion even in physics. The universe is one, and it has evolved only once. Evolution is a unique event, or rather a unique concatenation of events. Since evolution is not acausal, the meaning of Laplacean determinism is at most that what happened was bound to happen. Even this is questioned by process philosophers. In any case, it does not follow that if evolution were to start again it would go exactly as it did before.

The problem of evolutionary determinism is often brought up in relation to the hypothetical extraterrestrial life on hypothetical planets in other solar systems. The problem is not meaningless, but inferences that one may put forward are not at present verifiable or falsifiable. The crucial consideration is that if the hypothetical planets actually exist, none of them can be at any single moment identical in the states of every component with the earth as it was at any point of its history. The Laplacean determinism is therefore beside the point, and the problem is shifted back from astronomy and chemistry to evolutionary biology. The question to be asked is this: is the evolutionary process at all likely to be repeated even in its most general features on planets with similar, though not identical, environments? Those who ventured to speculate about these matters came to diverse conclusions. A majority, composed mainly of cosmologists, physicists, chemists, and a few biologists, surmised that the extraterrestrial evolutions should proceed as the earthly one did, including the production of ‘humanoids,’ i.e., of rational beings. Projects of establishing radio communication with these ‘humanoids’ in other solar systems, and even other galaxies, are discussed in all seriousness. Such projects fit the needs for romance and fancy, felt by many millions of people who are bored with everyday drudgery.

A minority of skeptics, most of them biologists, see themselves obliged to deflate the romantic bubble. Assume for the sake of argument that extraterrestrial life exists, and that it is based on proteins and DNA like the life on earth with which we are familiar. Even so, that extraterrestrial life would evolve in some ways quite dissimilar to the earthly one. The probability of repetition of terrestrial evolution is zero. The same holds for the possibility that, if most life on earth were destroyed, the evolution would start anew from some few primitive survivors. That evolution would be most unlikely to give rise to new man-like beings. I want to make it perfectly clear that the un-repeatability of evolution does not mean that evolution is acausal. Nor is evolution, as sometimes alleged, due to ‘pure chance.’ Evolution, at least on the biological and the human levels, is neither rigidly predestined nor completely indeterminate. Viewed in the perspective of time, evolution is a creative process. It has so multiple a causation that its outcomes are unlikely to be repetitious. Each evolutionary event is conditioned by the whole preceding history of the species, by the environment in which it occurs, and possibly, in higher organisms with developed nervous systems, by the behavioral reactions of these organisms.

Emergence of Novelty

We have postulated that the potentiality of every evolutionary event was present in the primordial life and the primordial cosmic stuff. The problem then turns out to be what is involved in the realization of potentialities. According to the preformation model, evolution is mostly growth or unwrapping. The primordial life carried rudiments of every basic structure and function that appeared later. It may have had protopsychism, and protovoluntarism, and protogood, and protoevil. The metaphysics of panpsychism or panvitalism is attractive perhaps for the same reasons that make all preformistic notions attractive to many. Everything is in existence from eternity, albeit only in hidden states, which need only germinate, sprout, and grow. Old-fashioned vitalists supposed that the origin of life involved the addition of a vital force, which came from some unspecified place, or perhaps from God. Panvitalism avoids this problem by postulating that life was also preformed in lifeless matter. Panvitalism and panpsychism make it unnecessary to assume that a vital force need be added from somewhere. It was invisibly present everywhere before there was life.

Some of the process philosophers have, to my surprise, rejected the identification of panvitalism and panpsychism with preformation doctrines. Yet to a biologist, preformation is a perfectly respectable biological and evolutionary view, even though it is at present a minority view among embryologists and evolutionists. If you postulate that life and mind were brand new principles which began to appear at some time and were not at all present earlier, you have an epigenetic evolutionary view; if, by contrast, you find the rudiments present in all nature universally, this is a preformistic view. It must be admitted that epigenetic models lead to difficulties, because they postulate the emergence of qualities, such as life and mind, in evolving systems which did not possess them at all. Origination of novelty is harder to envisage than mere growth.

Epigenesis does not assume anything arising ex nihilo. My body is composed of atoms of the same chemical elements which are found in inorganic matter. But in my body these atoms are components of many kinds of molecules which are formed chiefly or only in organisms. Moreover, these molecules are not mixed uniformly in a solution -- they are arranged in unbelievably complex patterns known as cells. And the cells, in turn, are ordered in an even more complex pattern, which is my body. Other, rather similar but not identical patterns are individuals of the species Homo sapiens; a great multitude of less similar patterns are representatives of other animal and plant species. Evolution is the emergence of new patterns, particularly on the cellular and organismic levels. Living beings as we observe them now are patterns of inorganic and organic constituents. These patterns emerged and were gradually perfected during at least three billion years of biological evolution on earth. We should never forget about these billions of years of evolution. A sudden appearance of life from no life, and of mind from no mind, would be, in the words of Sewall Wright, ‘sheer magic.’ The billions of years of evolution have made this ‘magic’ an everyday occurrence. Indeed, the kindling of new life in the process of reproduction of organisms would be awe-inspiring, if it were not so commonplace that it is taken for granted.

Molecular constituents of all organisms are far more similar than the organisms themselves. It is remarkable that the same four kinds of nucleotides compose the DNAs of all organisms. Equally remarkable is that the same twenty kinds of amino acids make up most proteins, in organisms all the way from bacteria and viruses to man. Evolution was the emergence of patterns more often than the invention of new chemical components. Life and mind did not arise ex nihilo. They appeared in the process of evolution as novel patterns, and patterns of patterns, of organic functions. Evolution involved what one may refer to as emergence or transcendence. These words nettle many scientific puritans. The dictionary definition of ‘transcendence’ is, however, simple: "going beyond ordinary limits, surpassing, exceeding." There is no doubt that this happened in evolution -- cosmic evolution transcended itself in producing life, and biological evolution did so when mind emerged.

Realized and Unrealized Evolutionary Potentialities

At the present state of our knowledge, it seems most probable that all life on earth was monophyletic at its origin, i.e., derived from a single kind of primordial life. If this is true, the primeval life had potentialities of originating every one of the existing and fossil species of organisms. It seems to me that this makes the preformationist model unlikely. Far too many things would have to be preformed! As already pointed out above, organic evolution is not what the etymology of the word ‘evolution’ suggests, i.e., not the unfolding of what was hidden there to begin with. Evolution has involved multiple branching and divergence of countless evolutionary lines. The old idea of the ‘great chain of being’ implied that all organisms can be ordered in a single sequence, from primitive to complex. This idea was important in the history of biology, since it suggested that of evolution. But as far as I know, the ‘great chain’ idea has no adherents at present. Instead of a single chain, evolution proceeded along innumerable branching lines, most of them ending blindly in extinction. Starting from a single original source, the evolutionary lines have branched and rebranched, and this branching has led to increasing structural and functional complexity. Evolutionary progress, no matter how the concept of progress may be defined (and no generally accepted definition has yet appeared), has undoubtedly occurred in some lines, but in other lines there has been stasis or even partial regression.

Potentialities of all biological evolution were present in the primordial life. This must now be supplemented by the assertion that, in addition to all the potentialities that became realized, the living world had, and doubtless still has, countless unrealized potentialities. The foundations on which this assertion rests are really very simple. So great is the efficiency of the Mendelian mechanism of gene recombination that only a minuscule fraction of the potentially possible gene combinations can ever be realized. This was pointed out by Sewall Wright as early as 1932. Supposing that a species has only 1000 genes, each in ten different allelic forms (both figures are overly conservative estimates), 10^1000 gene combinations are potentially possible. The number of subatomic particles in the universe is estimated by physicists to be of the order of only 10^78. Even if most of the possible gene combinations are poorly viable, or inviable, a stupendous majority of the genetic potentialities of the living world have never appeared, and never will be realized and tried out by natural selection.
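Wright's combinatorial point can be verified with a few lines of arithmetic. The sketch below (in Python, purely for illustration; the figures of 1000 gene loci and ten alleles per locus come from the text, and the independence of allele choices across loci is the assumption behind the multiplication) confirms that the number of possible gene combinations dwarfs the estimated 10^78 subatomic particles in the universe.

```python
# Sewall Wright's 1932 combinatorial argument, using the figures in the text:
# a species with only 1000 gene loci, each occurring in 10 allelic forms.
loci = 1000
alleles_per_locus = 10

# Allele choices at different loci combine independently, so the number of
# distinct gene combinations is 10 multiplied by itself 1000 times: 10^1000.
combinations = alleles_per_locus ** loci

# Physicists' rough estimate of subatomic particles in the universe: ~10^78.
particles_in_universe = 10 ** 78

print(len(str(combinations)) - 1)                  # 1000: combinations == 10^1000
print(combinations > particles_in_universe ** 12)  # True even against (10^78)^12
```

Even if all but one combination in every 10^78 were inviable, the surviving fraction would still be unimaginably larger than anything natural selection could ever try out.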

There is no way of telling what sorts of organisms evolution could have produced but did not in fact produce. There is plenty of evidence that the availability of an opportunity for a certain way of life does not by any means guarantee that a species exploiting that opportunity will evolve. This is interesting to philosophers who wish to discover the degree of determinism in evolution. Biologists have, since pre-evolutionary days, been fascinated by instances of structural parallelisms in not closely related animals and plants that exploit similar environments in similar ways. Whales and dolphins resemble fishes in body shape, though not in internal anatomy and physiology. Cacti in the deserts of the New World are mimicked by euphorbias in African deserts, although the two are botanically not closely related. Marsupials have evolved in Australia several forms which occupy ecological niches held on other continents by placental mammals -- wolf-like, squirrel-like, mole-like, woodchuck-like, etc.

Biologists have paid much less attention to the equally significant but opposite phenomena -- the absence of evolutionary parallelisms where they could, by analogy, be expected. Thus, there are no horse-like, deer-like, or antelope-like marsupials in Australia. The large herbivores in Australia are instead kangaroos, which are obviously quite different from horses or antelopes. And yet in South America there developed in Miocene times horse-like and camel-like animals; these belonged to the extinct order of mammals, Litopterna, and were not closely related to the real horses and camels. One of the most widespread and ecologically most obviously successful groups of ants in the American tropics is the fungus-growing Atta. These agriculturalists of the insect world feed exclusively on certain kinds of fungi which they cultivate in subterranean ‘gardens’ on especially collected pieces of leaves and other plant parts. Yet such agriculturalist ants are wholly missing in the Old World tropics. Their ecological success and diversity in the New World virtually ensures that they could flourish in the Old World as well if they evolved or were introduced there. The absence in the Old World of hummingbirds is a further example of lack of evolutionary parallelism where it could well be expected.

The Uniqueness of Mind

The possession of human mind makes our species a unique product of an evolutionary transcendence. The capacities for abstract reasoning, symbolic language, self- and death-awareness set mankind apart from the rest of the biological world. Was a man-like ‘rational’ species predetermined to appear in the course of evolution? Some philosophers and biologists thought so; in fact, the so-called finalists contended that organic evolution as a whole was designed to bring man into existence. Or was the origin of man a matter of chance alone, a haphazard outcome of the operation of the evolutionary roulette? This view also has its exponents, among whom perhaps the most recent and distinguished is Jacques Monod. His statement is crystal clear: "man knows at last that he is alone in the universe’s unfeeling immensity, out of which he emerged only by chance." I believe that the emergence of mankind, and for that matter of any other form of life, was neither foreordained nor due to random chance. Mankind is a masterpiece of creative evolution. Like the creativity of a human artist, evolutionary creativity is a synthesis of environmental challenges with the available biological (or intellectual) means to respond to these challenges.

Julian Huxley defined evolution as a process which generates, among other things, " . . . more complex organizations, higher levels of awareness, and increasingly conscious mental activity." Teilhard de Chardin postulated the so-called law of complexity-consciousness, according to which mind must inevitably emerge when a certain level of structural and functional complexity is reached. As a definition of evolution, that given by Huxley is certainly invalid, since increasing complexity, awareness, and mental activity by no means occur in all, or even in a majority, of evolutionary lineages. In many lineages the opposite has occurred, and self-awareness and ‘mental activity’ appeared in only a single species among the two or more million that now exist. We need not take a stand here on the problem whether some rudiments of mind, or self-awareness, or conscious mental activity, are present in animals other than man. Most of the observations bearing on this problem come from introspection rather than from controlled experiments. As a result, competent students of the issue hold quite different opinions, none of which is demonstrably right or wrong. An evolutionist is not surprised to find, in less complex organisms, component parts or precursors of organs and functions that are fully developed in their more complex relatives. Human mental abilities must have emerged in evolution from raw materials that were present on prehuman levels. Anyway, the human psyche is unique in the living world. This uniqueness does not force us to return to the Cartesian body-soul dualism. It does however illustrate that evolution can produce radical novelties.

There is no reason to think that, given some millions or tens of millions of years to evolve further, other animal species will evolve humanoid minds. Nor is it likely that if mankind were to become extinct it would be replaced by another ‘rational’ or ‘humanoid’ species. The fact that mind has emerged only once in the whole known course of evolution does not, in my opinion, bear out the view that rudiments of mind, or some kind of protominds, are omnipresent or even widespread in the living world. One can see that certain conditions are necessary, but evidently not sufficient, for the appearance of a psyche capable of self-awareness. A highly developed nervous system and a capacious brain appear to be indispensable for the emergence of anything like human mind. Jellyfishes, ants, termites, and even birds have not evolved nervous systems that could sustain humanoid performance. As stated above, there is no assurance, and not even much likelihood, that given some more millions of years to evolve, any of them would reach a level of brain development at which the emergence of mind would be a possibility. This may seem, particularly to non-biologists, excessive skepticism; at least a brief explanation of my reasons for this stand seems in order.

Any biologist, at any rate any not exclusively laboratory biologist, knows that the organisms that inhabit a given geographic area exploit its resources in many different ways. Yet all of them possess adaptedness to their environments and their ways of life, for otherwise they would have died out. Darwin already had to rebut the objection to his theory that the coexistence, in our time, of high and low, primitive and advanced organisms contradicts the doctrine of evolution. If, for example, mammals are more advanced than amoebae, and flowering plants more than bacteria, why then are amoebae, bacteria, and a host of other ‘primitive’ organisms still with us? Why have they not evolved to more advanced grades? The answer is that bacteria and amoebae exploit different environments or sub-environments, or exploit them in ways different from, for example, insects or birds or mammals. There is no ‘law’ that would make all organisms evolve just for the sake of evolving. Evolution propelled by natural selection is sometimes progressive, but not always and not necessarily so. Evolution is thoroughly opportunistic. Bacteria and amoebae seem to be doing as well in their ecological niches as insects and vertebrates in theirs.

Conclusion

Life appeared in the universe some billions of years after its origin in the hypothetical ‘Big Bang.’ Furthermore, it appeared, as far as anybody knows for certain, on just one of the myriad of celestial bodies, the earth. Before that event, the universe was lifeless, and most of it is still lifeless. Some three billion years after the origin of life on earth, there appeared man. During these billions of years life existed without man, and could continue to exist without him, in some ways even better than with him, since man is a pitiless destroyer of many animal and plant species. Yet man did arise and develop a completely novel and hitherto unprecedented way of life.

Mankind adapts its environments to its genes more often than it changes its genes to fit its environments. Rationality, or mind, or symbolic communication, or self-awareness -- call the evolutionary uniqueness of man by whatever name you prefer -- has made him by far the most successful biological species. His arrogance makes him sometimes call himself the Lord of Creation. The origin of man was neither predestined nor was it an evolutionary accident. Mankind’s novel and unique psychic capabilities came about as a result of a long travail of evolutionary creation. The successful outcome of this travail was not guaranteed. There were two species of Australopithecus living in the late Pliocene and early Pleistocene -- A. africanus and A. robustus. A. africanus was apparently our ancestor; it did evolve the biologically unique human qualities, and its descendants gradually became human. A. robustus did not so evolve, and eventually became extinct. Both species must have descended from some common, but as yet unknown, ancestral species. This ancestor must have had potentialities of becoming humanized. The potentialities became realized in one species derived from it, but not in the other.

Life at its origin was a radical novelty in the formerly lifeless world. Human mind was another radical novelty. Man, a species endowed with mind, or consciousness, or self-awareness -- call this unique property by whatever name you choose -- arose from ancestors not endowed with this property. To some philosophers the origin of such novelties is as unbelievable as magic. I can offer two considerations which will make this ‘magic’ perhaps less magical. In the first place, the evolutionary transcendence from the non-living to the living did not require anything like old-fashioned ‘vital force’ suddenly implanted by the Creator. Nor was the transcendence from the non-human due to implantation of a ‘soul.’ Both transcendences were basically like other evolutionary transformations, albeit more radical ones.

In the second place, the transcendences should not be imagined to have been sudden. They took probably millions of years. The transitions from no life to life, and from no mind to mind, were gradual. Our scientific knowledge is, of course, quite insufficient to give anything like satisfactory accounts of these transitions. Biologists as basically different in their philosophical and biological views as W. H. Thorpe and Jacques Monod agree that the origin of life is a difficult, and thus far intractable and unsolved, problem. I concur. However, probably thousands of biologists and biochemists all over the world are now working on this problem. Their working hypothesis is that life arose epigenetically in a lifeless world. Assuming that life always existed is a simplification, but not a helpful simplification.

The origins of life and mind are indeed miraculous. Do not forget, however, that many other biological phenomena also strike us as wondrous and awesome. Consider the origin and development of mind in a human child. A miracle indeed! But no more miraculous than the origin of mind in human evolution. A newborn infant has a potentiality of developing mind, and self-awareness, but this potentiality can be realized only by way of a slow and gradual process of maturation. As pointed out above, a potentiality of mind must have been present in all ancestors of the human species, down to the primordial life. The analogy between the evolutionary origin and the maturation of mind in a growing child must not, to be sure, be pushed too far. Ontogenetic and phylogenetic potentialities are fundamentally different. Ontogeny is a product of phylogeny, not vice versa, as some people wrongly assumed. The alternative to realization of many ontogenetic potentialities is death; a child either grows up or dies. Not so with phylogenetic potentialities. In the first place, these potentialities are innumerable. Secondly, a great majority of the potentialities are never realized. Novelty may emerge or not emerge. This is not due to some intrinsic biological indeterminacy, but rather to an overwhelming complexity of very numerous interacting causal chains. You may see here a precursor of freedom on the human level.

Chapter 2: Can Evolution be Accounted for Solely in Terms of Mechanical Causation? by L. Charles Birch

L. Charles Birch is Challis Professor of Biology at the University of Sydney.

What has happened in evolution? The happenings we know a lot about, thanks to the evolutionary biology particularly of the last four decades, are the roles of mutation, of recombination of genes in sexual reproduction resulting in a great diversity of gene arrangements, and of natural selection. These are the main mechanisms of the so-called Neo-Darwinian theory of evolution. There are of course many mechanisms that are involved in these concepts, such, for example, as the genetic assimilation established by Waddington (1957), but they may all be regarded as aspects, albeit subtle ones, of the three main mechanisms mentioned. The sum total of them presents a mechanistic world picture of the evolutionary process. It has been eminently successful in explaining transformation phenomena, although less successful in prediction. A brief but cogent account of the modern view is given by Waddington (1969).

What these mechanisms help us to understand is the way in which organisms are transformed in time, genetically, anatomically, physiologically and behaviorally in adaptation to environment. At that level the theory seems to be remarkably successful. One of the great achievements of modern evolutionary theory has been the quantification of many of these processes. The theoretical model of Neo-Darwinism is a quantitative model. It rests on well-attested concepts and has great explanatory value.

The terms ‘mechanism’ and ‘mechanistic world picture’ may be taken to refer to the conception according to which the universe is seen as a machine or contrivance, and all that is in it as smaller machines or contrivances which, once set in motion, perform as they do by virtue of their construction (Dijksterhuis 1961, p. 495). Having originated in the physical sciences, the mechanical analogy was taken over by the biological sciences as they became established as sciences in their own right. As a result it is true to say now, as Whitehead (1926, p. 128) said decades ago, that "It is orthodox to hold that there is nothing in biology but what is physical mechanism under somewhat complex circumstances . . . the appeal to mechanism on behalf of biology was in its origin an appeal to the well-attested self-consistent physical concepts as expressing the basis of all natural phenomena. But at present there is no such system of concepts." To this latter claim I shall return later. The main point of Whitehead’s remark is that biology has an orthodoxy; it is mechanism based on physics.

There are, however, outstanding problems in evolution that remain unaccounted for when we come to look at evolution in a comprehensive way. In particular, there is a qualitative side to evolution which escapes any interpretation that is concerned solely with mechanical causes. Evolutionary biologists, with some notable exceptions, such as Dobzhansky (1967), Wright (1969), Waddington (1961), Rensch (1961), and Thorpe (1965), scarcely seem to recognize that this is a problem requiring interpretation. Where the problem is recognized, it is usually in relation to two ‘events’ in evolutionary history: the emergence of life and the emergence of mind. ‘Livingness’ and mentality I take to have qualitative as well as quantitative aspects.

The Concept of Emergence

Dobzhansky (1967, p. 32) has said that "the origin of life and the origin of man were evolutionary crises, turning points, actualizations of novel forms of being. These radical innovations can be described as emergences, or transcendences, in the evolutionary process." In the sentence that immediately follows this quotation, which is about the human mind, Dobzhansky says that evolution is a source of novelty, of forms of being that did not occur at all in the ancestral state. It is this new aspect which leads him to use the words ‘emergence’ or ‘transcendence.’ C. Lloyd Morgan, in his book Emergent Evolution (1923), wrote of the emergence of life and mind as ‘miracles’ in the sense of something that is not understood. He, with other ‘emergent evolutionists’ such as J. C. Smuts, believed that these emergent properties could not be understood in terms of the laws of physics and chemistry, but that some new laws come into existence with the new qualities. Morgan considered there to be different laws at the levels of the inorganic, the organic, and the human (Lloyd Morgan 1923, p. 43). By contrast, Dobzhansky clearly considers that the emergence of life and mind are to be understood according to the same evolutionary principles as are applied to anatomical or physiological characters.

The term ‘emergence,’ however, explains nothing by itself. It is not the solution to a problem. As Leclerc (this book) says, "rather the term emergence signifies a problem requiring solution."

The qualities that ‘emerge’ in evolution are not, of course, just livingness and mentality. To discuss the issue as if they were is to discuss evolution as though it were punctuated with discontinuities. Modern biology has demonstrated the continuity of the evolutionary process in the sense that what has evolved constitutes a continuum without any sharp dividing lines, even between non-living and living. While the words ‘life’ and ‘mind’ refer to aspects of such great significance in the whole process that we might wish to attach special terms such as transcendence or emergence to them, we must recognize that the qualitative side of evolution, like the material side, is a continuum. The manifestation of life in an amoeba is different from that in a bacterium, and so on. I am not implying that the word ‘life’ refers simply to the qualitative, but it does include the qualitative when we recognize responsiveness and perception as evidence that living organisms are subjects for themselves and in themselves as well as objects that we observe. The problem that calls for explanation is the qualitative so far as it is evident in the whole gamut of evolutionary history.

The tradition of science since Descartes has been to start with the physicist’s atoms and molecules and to attempt to account for the whole complexity of the universe, including life and mind, in this way. That is the aim of molecular biology. It is the faith of mechanical determinism. The trouble is that it does not work, because there is nothing ultimate about the physicist’s atoms and molecules. It incorporates two attitudes which are radically inconsistent. As Whitehead (1926, p. 94) remarked, "A scientific realism, based on mechanism, is conjoined with an unwavering belief in the world of men and of higher animals as being composed of self-determining organisms." How you get the one from the other is the problem (Overman 1967, p. 166). I think it is because of this problem that Whitehead (1926, p. 157) further remarked, "a thoroughgoing evolutionary philosophy is inconsistent with materialism. The aboriginal stuff, or material, from which a materialistic philosophy starts is incapable of evolution." If we are to understand why Whitehead thought this way, we need first to examine the alternative he proposed.

Whitehead reversed the situation of the mechanists (Waddington 1961, p. 19). We do not start with knowing all about atoms and molecules and then seek to understand the phenomena of biology. It is from observed phenomena in biology that we have to start, with "occasions of experience" (Whitehead 1933, p. 196). It is from these that we work back to construct models of simpler entities. But these models take account of the phenomena observed at the more complex levels. To use an example of Waddington’s (1961, p. 20), sodium chloride molecules exhibit properties which we cannot observe by studying sodium and chlorine atoms in isolation. When the compound sodium chloride is formed, it is not that something entirely new is added to the sodium and chlorine atoms; rather, we now know something more about the nature of sodium and chlorine atoms than we did before. Similarly with the phenomena of life. When certain arrangements of atoms of carbon, nitrogen, hydrogen, oxygen and so on exhibit the properties which we recognize by the name enzymes, or when other combinations are able to conduct electrical impulses as in nerve cells, it is not that something new has been added to these atoms. We have discovered something about the nature of these atoms that we did not know before. We discover that when atoms are organized in particular ways they reveal aspects of their nature not revealed in isolation. Atoms and molecules organized in brains reveal the potentiality of atoms, organized in particular ways, to give rise to entities with subjective experience, to which we give the name mind or consciousness. Atoms that can give rise to brains that think must be different from hypothetical atoms that could under no circumstances have done this.

Evolution of the Subjective

This is the crux of our problem. Evolution has given rise to ourselves who know the existence of a subjective aspect of life in our own lives. The subjective aspect of life for us is a central fact of our existence. The conduct of human affairs is entirely determined by our recognition of foresight determining purposes which we pursue consciously. This conscious pursuit of purpose issues in conduct determined by the purpose. There is of course a whole set of mechanisms, physiological and biochemical, associated with these subjective experiences, but these are not that experience per se. A feeling is a feeling, period. Consciousness is sui generis an aspect of existence. What then is the origin of the subjective we know in our own lives?

Where in evolution did the subjective start if it started anywhere? We cannot of course be sure that anyone besides ourselves has a subjective side to life. We cannot have another’s experience. But most of us do not deny that other human beings must be like ourselves in this respect. Many of us too will be inclined to attribute feelings of pleasure and pain to the animals we know well, such as our domestic cats and dogs. We may be less willing to attribute a subjective side to organisms lower down the evolutionary scale. But that is an arbitrary decision based perhaps on their lack of a complex nervous system. It seems to me less arbitrary and more logical to go along with Jennings (quoted by Agar 1943, p. 153), who wrote after years of study on the behavior of amoebae: "I am thoroughly convinced, after long study of the behavior of this organism, that if Amoeba were a large animal, so as to come within the every day experience of human beings, its behavior would at once call forth the attribution to it of states of pleasure and pain, of hunger, desire, and the like, on precisely the same basis as we attribute these things to the dog."

It is of course commonplace that perception does not reveal to us the intrinsic nature of things, but only the way in which they act, and are acted upon by other things. The object as constructed by us in perception differs from the real object in the same way as another person’s sensory perception of myself differs from the real me. I am an experiencing subject and no one else can experience my experiencing. The position is the same in our perception of living organisms other than man and in our perception of non-living objects.

Although we cannot know another organism’s subjective experience, it is reasonable to posit that it varies with the sort of nervous system and sensory system the organism has. A person deprived of eyes, or of the region of the brain involved in vision, must have a subjective experience that differs from that of a person equipped with these organs. Animals that are color blind have a different subjective experience from those that respond to color. Furthermore, there is no logical reason to restrict the subjective to organisms with a nervous system. Plenty of organisms without nervous systems have sense organs. Even an individual cell in our bodies is a responsive entity whose behavior can be studied in tissue culture. It has no nervous system, though it has other means of communication within the cell. It ‘takes account of’ other entities in its environment, including other cells. It may move toward some things and away from others. These are the sorts of criteria we use to infer subjectivity in organisms like ourselves. But why limit subjectivity to just those complex organisms? If we do, then we imply that subjectivity ‘emerged’ out of objectivity, that there was a stage in the evolutionary sequence when from zero subjectivity there came subjectivity, or that a combination of non-mental things could produce mind. This is to imply a discontinuity in the evolution of subjectivity: from no subjectivity came subjectivity.

This argument is applied to evolution even by those who do not apply it to their own existence, which is a contradiction. We have an insider’s view of our own ‘taking account of,’ and even then only of its conscious aspects. If we admit the existence of this aspect of our existence, then we admit that our lives cannot be reduced to the status of objective entities alone. Is it not then inconsistent to suppose that, in the evolutionary process, from objectivity alone there evolved subjectivity? Since in the rest of the world besides ourselves processes of ‘taking account of’ are going on, be it in electrons or atoms or cells, then it is logical to suppose that this subject-object relationship involves subjectivity for these other entities. This is Whitehead’s proposition that you have either got to have subjectivity everywhere or nowhere. Since it is obviously in us, then it must be everywhere. Just as the discovery that sodium chloride has properties not exhibited by sodium and chlorine in isolation tells us something about the nature of sodium and chlorine which we could not otherwise know, so too the existence of subjectivity in combinations of atoms that make human brains tells us something about the nature of those atoms that make those brains. To say that is not to imply some sort of preformation theory in which the experience of brains is wrapped up in atoms before there were any brains. It is to say that in those atoms there is the germ of subjectivity and not just zero subjectivity. The germ provides the possibility or potentiality of what can exist at more complex levels of organization. The evolving architecture of matter provides the necessary processes for the evolution of subjectivity, which is an aspect of the qualitative side of the process.

If we accept this argument so far, then does it not follow that there is an activity in evolving entities (their subjective aspect) which is not described in terms of mutation and selection and all other mechanical causes of the evolutionary process? Speculation on the nature of this activity is the subject of much of process thought. It is the point at which the concept of final causation becomes relevant.

A Role for Final Causation

Final causation is a potent causal agent in our conscious lives. What we do and are is very much dependent upon our imagined future or goals. Anticipation of the future influences our present existence and activity. It is the causal agent in so-called ‘cultural evolution’ which made the difference between cave man and modern man, and will make the difference between modern men and men of the future. Is there anything at all analogous to this in pre-human evolution? That is the critical question. It is a question that raises unwelcome spectres in the minds of most biologists. It resurrects for them concepts now discarded by science as a result, above all, of the work of Charles Darwin -- concepts of design according to a preordained plan of a designer, of primitive transformist notions of fishes willing to become Amphibia, or of Bernard Shaw’s man willing to become superman. But none of these concepts corresponds to the sense in which Whitehead and other process thinkers see the role of final causation in the world.

Final causation is concerned with one aspect of the subjective side of entities, be they humans or electrons. It is that aspect which Whitehead (1934, p. 134) called ‘subjective aim.’ Subjective aim is analogous to purpose at the human level. It is the anticipation of the future and in this sense the influence of the future on the present. It is the nisus towards completion of an event. It is the element of creativity in every event, for it involves the selection of possibilities. Each occasion of experience is in touch with possibilities from which it selects a goal regarding its ‘self-fulfillment.’ Unless there were some selection of possibilities there would be no reason for including the conception of subjective aim. It would be enough to describe an event as an effect of preceding events which automatically become the cause of other events. The selection of possibilities is the element of freedom which may, indeed, be infinitesimally small in the electronic event but substantial in a conscious mental event in human experience.

What we call ‘things’ are really not things but events. Reality is process. An entity as it is to itself is a subject, but then as it appears to another subject it is an object.

We may hardly quibble at the existence of subjective aim at the human level, but why postulate its existence in non-human entities? There seem to me to be two reasons. Firstly, it is postulated in response to the question of how it is possible to get freedom, responsiveness and purpose at the human level out of a determined, non-responsive and feelingless world of physical particles. No one has shown how that is possible. If science or the process of knowing cannot deal with purpose, then so much the worse for science and knowing. If final causation is to have a place anywhere, we must be sympathetic to a philosophy which finds it in principle everywhere, as we find mechanical causation everywhere. Secondly -- and here we enter upon a difficult argument -- subjective aim is postulated because, without it, no entity could exist. All entities are processes. "There is no thing in the universe" (Bohm 1969, p.42). When you come to analyze the nature of these processes, it is seen that they include this sort of relation to an immediate future state. As Whitehead has said somewhere, "the present is the fringe of memory tinged with anticipation." It is not within my competence to elaborate this argument in the way in which Whitehead has pursued it. Suffice it to say that the attempt to penetrate this difficult area of thought on the fuzzy boundaries of knowledge is more commendable than either erecting impenetrable walls around knowledge or supposing that knowledge has clear-cut boundaries.

Although subjective aim bears no resemblance to the role given to final causation in much pre-scientific thinking, and although it was science itself which eliminated that role from the pre-scientific world picture, we nevertheless need to be circumspect lest we throw the baby out with the bath water. Indeed, modern physics seems much more aware of this danger than modern biology. It has moved away from the mechanistic view of its fundamental particles and is finding the need for concepts that go beyond mechanism. The order of the physical world is not as simply mechanistic as biologists tend to believe. Bohm (1969, p. 18) has said that our physical theories are at present in a state of flux that may lead to radical changes in them, such that current fundamental ideas, based on measure and metric, may have to be replaced by new ideas, based on order. These new ideas, he suggests, will have to involve the notion of order in a way that is more fundamental than that in which order now exists in the theories of physics.

The conclusions of a reductionist mechanistic biology are dependent upon the assumption that the ultimate particles are exclusively mechanical in their properties. "Therefore," says Bohm (1969, p. 29), "the question of whether the basic laws of physics are in fact mechanical or not is of the utmost potential significance in biology." I have argued that a comprehensive evaluation of the phenomena of life leads to notions of the physical particles that are not exclusively mechanical. Now we find the physicist arguing on his own grounds that if physics comes to such a view of its subject matter this will be of the utmost significance for biology. "It does seem odd therefore," says Bohm (1969, p. 34), "that just when physics is thus moving away from mechanism, biology and psychology are moving closer to it. If this trend continues, it may well be that scientists will be regarding living and intelligent beings as mechanical, while they suppose that inanimate matter is too complex and subtle to fit into the limited categories of mechanism." But, as Bohm points out, such a position cannot stand up to critical analysis, for the molecules studied by biologists in living organisms are constituted of electrons, protons and other such particles, from which it must follow that they too are capable of behaving in ways that cannot be described in terms of mechanical concepts. There is in this view a thoroughgoing unification of nature from electronic-type events to events in the mind of man.

There are then arguments both from biology and from physics that lend credence to a view of the ultimate particles as having a subjective aspect as well as a mechanical aspect. It is within this subjective aspect that we have looked for final causation.

So far I have referred to the role of subjective aim as necessary to the constitution of the entity as it is in itself and, secondly, as being an aspect of the subjective side of existence, be it of an electron or a man. But the universe and organisms that are in it are more than a multitude of entities with subjective aims. In what way, if any, does the concept of subjective aim help us to understand the organization of organisms as we find them today and their evolution in time? Two questions call for comment.

1. How is it that subjective aims of a multitude of cells in a living organism relate to the subjective aim of the whole organism? The same question can of course be asked concerning the organization of fundamental particles into atoms and of atoms into molecules. It is by virtue of their physical properties that electrons and other particles combine in different ways to produce atoms, and so it is with atoms that find themselves in juxtaposition and then combine to produce molecules. The atom does not have a subjective aim to become anything that it is not. But when it combines with another atom by virtue of its physical properties, a new entity is formed and this new entity has its appropriate subjective aim. There is no sense in which atoms aim to become molecules or molecules to become cells. By their physical nature there is a great variety of possible architecture of arrangement of the fundamental building blocks. Of the many possible sorts of arrangements, no doubt some are too unstable to survive. Survival depends upon suitable environment. The concept of natural selection is appropriate both at the inorganic and organic levels.

At the cellular level, cells are subjects and the multicellular organism to which they belong is also a subject. The cell and the multi-cellular organism act causally as units and so as subjects have subjective aim. Tissues are nexus of cells; i.e., a tissue is a nexus of subjects but hardly a subject as such. What makes a nexus of cells into a subject in the higher multicellular organism is the centrally coordinating activity of the central nervous system or other centrally coordinating systems. It is such physiological mechanisms which turn nexus of cells into subjects as a whole. Natural selection determines that only those collections of animal cells that are organized into subjects survive. The plant is less coordinated as a subject than an animal and is more like a democracy of cells in which no particular group of cells has a central control. Nevertheless, there is coordination of function in plants despite the absence of a coordinating center.

2. Is there any sense in which the subjective aims of entities that exist at one stage of evolutionary history are directed toward some later stage of evolutionary history? For example, we might ask the question in this form: Once the cell had come into existence, was it destined to evolve into men? Or, even more boldly, one could ask whether there is any sense in which the universe and all that is in it move toward "that far-off divine event to which the whole creation moves," to use Tennyson’s phrase. The implication of these questions is that there is a terminus and that it is determined. There is, however, nothing to suggest that either concept applies to this universe (Whitehead 1934, p. 169). There is no single stream in evolution leading to Homo sapiens. There are two factors which give the question some metaphorical meaning. First, the possibilities of the future, though no doubt infinitely great, are yet limited. Second, these possibilities or potentialities exist within the foundation of the universe. That something is possible for this universe says something about the nature of this universe. But there is no implication that what is possible is inevitable. A third factor relevant to these questions might be subjective aim directed toward distant events. However, the concept of subjective aim in subjects below the human level is that of an anticipatory relationship to the immediate future, not to the distant future. It is possible that a wasp has a subjective aim to stock the nest with food before it lays its eggs. A consequence of this behavior is the survival of the species. But it would be ridiculous to postulate that the subjective aim of the wasp was the survival of the species. Natural selection sees to it that those wasps with appropriate subjective aims survive to reproduce.

What then is the role of subjective aim in evolution? Is it to be regarded as another force in addition to mutation and selection? The answer must surely be no. Mutation and selection are mechanical causes that tell us something of the way in which new organisms come into existence. The theory of subjective aim tells us that, unless subjective aim existed in entities, no organisms could come into existence at all. Organisms are entities that have subjective aim. The theory is saying that the only sort of universe in which the evolution of organisms can occur is one in which the entities have subjective aim, and that there is an evolution of subjective aim alongside physical evolution. That is, subjective aim becomes more complex. It becomes conscious in man and, in so far as man can make long-range plans which he can execute, his conscious subjective aims or purposes can control the future direction of evolution. A second point the theory of subjective aim makes in relation to evolution is that the potentialities of the future are an aspect of existence that should be acknowledged as such, though a potential entity is a different sort of entity from one that is concretely realized. The potentialities of this universe are a property of this universe and make it a different sort of universe from some hypothetical universe without these potentialities. It seems to me, therefore, that the metaphysical background of process thought is far more germane to the evolutionary picture provided by biology than is the mechanistic philosophy, implicit or explicit, that so often accompanies evolutionary theory. I leave it as an open question whether this perspective suggests new hypotheses that might be tested, and whether it implies any change in the way in which biologists do biology and formulate theories.

 

REFERENCES

Agar, W. E. 1943. A Contribution to the Theory of the Living Organism. Melbourne University Press.

Bohm, D. 1969. "Some Remarks on the Notion of Order." In Waddington, C. H. (ed.). Towards a Theoretical Biology. 2. Sketches, pp. 18-40. Edinburgh University Press.

Dijksterhuis, E. J. 1961. The Mechanisation of the World Picture. Oxford: Clarendon Press.

Dobzhansky, Th. 1967. The Biology of Ultimate Concern. New York: The New American Library.

Morgan, C. Lloyd. 1923. Emergent Evolution. London: Henry Holt and Co.

Overman, R. H. 1967. Evolution and the Christian Doctrine of Creation. Philadelphia: Westminster Press.

Rensch, B. 1971. Biophilosophy. New York: Columbia University Press.

Thorpe, W. H. 1965. Science, Man and Morals. London: Methuen.

Waddington, C. H. 1957. The Strategy of the Genes. London: Allen and Unwin.

Waddington, C. H. 1961. The Nature of Life. London: Allen and Unwin.

Waddington, C. H. 1969. "The Theory of Evolution Today." In Koestler, A. and Smythies, J. R. (eds.), Beyond Reductionism, pp. 357-374. London: Hutchinson.

Whitehead, A. N. 1926. Science and the Modern World. London: Macmillan.

Whitehead, A. N. 1933. Adventures of Ideas. London: Pelican Books.

Whitehead, A. N. 1934. Process and Reality. London: Macmillan.

Wright, S. 1964. "Biology and the Philosophy of Science." In Reese, W. L. and Freeman E. (eds.), Process and Divinity, pp. 101-125. La Salle, Ill.: Open Court.

Chapter 1: The Frontiers of Biology — Does Process Thought Help? by W. H. Thorpe and Response by Bernhard Rensch.

W. H. Thorpe is Director of the Sub-Department of Animal Behavior, Department of Zoology, at the University of Cambridge.

At the very start of my work in biological research I was much inspired and stimulated by the writings of Whitehead. I expressed this in the first chapter of my book (Thorpe 1951, 1963). He seemed to me one of the very few philosophers who showed a real understanding of biology, its nature and its problems. I still regard him as one of the most profound minds of his time, and I still find refreshment and stimulation in his writings on an extraordinary variety of topics.

But over the years I have found him less of a support in biological research than I had at one time hoped and expected. So I will try to single out one or two recent biological developments to illustrate this and at the same time mention some which seem more compatible with the Whiteheadian view.

The two main frontiers of biological thought at present are (1) the living/non-living frontier (and included in this the tremendous problem of the origin of the ‘primitive’ cell) and (2) the mind/life frontier. Both these frontiers seem today as impassable as ever they did; and indeed the first seems to have been rendered (contrary to popular belief) even more impregnable, as a result of the unraveling of the genetic code, than it was before.

The First Frontier

Ever since Victorian times it has been the changes in physics and in astronomy which have in fact seemed so appalling and disconcerting to many thoughtful persons. Many of our most cherished beliefs have gone by the board. Atoms were thought to be permanent, unchanging elements of nature. Now, far from remaining unaltered, they appear to be created, destroyed, and transmuted. What do remain enduring are certain abstract attributes of particles, of which the electric charge and the wave aspects of elementary physical particles are the most familiar. Edmund Whittaker (1949) has described what he calls postulates of impotence, but which Bronowski (1969) has cleverly entitled the laws of the impossible, the breakdown of any one of which would be particularly disturbing. Thus a great part of mechanics can be derived from the single assertion that perpetual motion is impossible. In special relativity it is impossible to detect one’s motion if it is steady, even by measuring the speed of light. In general relativity it is impossible to tell a gravitational field from a field set up by one’s own motion. In quantum physics there are several laws of the impossible which are not quite equivalent: the principle of uncertainty is one; another is that it is impossible to identify the same electron in successive observations. At bottom all the quantum principles assert that there are no devices by which we can wholly control what state of a system we will observe next. Bronowski translates that into the statement, "It is impossible to ensure that we shall copy a specified object perfectly."

One of the most striking differences between physics and biology arises in just this context. I think one can say that in biology there are no genuinely biological postulates of impotence except that spontaneous generation is impossible. Any other postulates of impotence which may appear to be part of biology are in the end, I think, reducible to physics, and it is from that discipline that they really come.

But there is another side to all this. There are assumptions which we cannot do without, even though all seems to be dissolving. One of these is that there is a real world which we in some measure apprehend by our senses: that is to say, that knowledge is possible. And (as Bronowski [1969] points out) in the field of science this means that it is rational. But this is not to imply that nature is therefore necessarily all machine-like. This idea of a great machine is one of the great misconceptions of our age, haunting the biologist now as it haunted the thinkers of the nineteenth century, when Tennyson wrote, "The stars," she whispers, "blindly run." But let us come back to biology, and particularly to the ideas of modern biology as affecting man’s views of nature and his own place in it.

In 1944 Professor Schroedinger wrote a little book entitled What is Life? This treatise of less than a hundred small pages has perhaps had more influence on recent thinking on this topic, among both physicists and biologists, than almost any other recent study. Schroedinger points out that when a piece of matter is said to be alive it is because it goes on ‘doing something’ -- moving, exchanging material with its environment, and so on. Moreover, it goes on doing this for a much longer period than we would expect an inanimate piece of matter to ‘keep going’ under similar circumstances. A system that is not alive, if isolated or placed in a uniform environment, usually ceases all motion very quickly as a result of various kinds of friction. Temperature becomes uniform by heat conduction and after that the whole system fades away into a dead, inert ‘lump of matter.’ A permanent state has been reached in which no macroscopically observable events occur, a state which the physicist speaks of as thermodynamical equilibrium or ‘maximum entropy.’ During a continued stretch of existence, it is by avoiding rapid decay into the inert state of equilibrium that an organism appears so enigmatic; so much so that from the earliest stages of human thought some special non-physical or supernatural force was claimed to be operative in the organism.

Pantin, in discussing such statements, points out that almost everything that Schroedinger has said about life could at least in some measure be said about a thunderstorm. A thunderstorm goes on doing something, moving, exchanging material with the environment, and so forth; and that for a much longer period than we would expect of an inanimate system of comparable size and complexity. It is by avoiding the rapid decay into an inert system of equilibrium that a thunderstorm appears so extraordinary. But the parallels between living organisms and thunderstorms, and indeed some other meteorological phenomena, are remarkable. It is true that thunderstorms arise by spontaneous generation, and since they are incapable of sexual reproduction, natural selection can only act upon them by selecting individuals and not by acting upon the whole species. Like living organisms, they require matter and energy for their maintenance. This is supplied by the situation of a cold air-stream overlying warm, moist air. This situation is unstable and at a number of places vertical up-currents occur. Once these have developed they are maintained, at least for a while, through the liberation of heat consequent upon the formation of rain as the warm damp air rises. Each up-current ‘feeds’ upon the warm and damp air in its neighborhood and is thus in competition with and can suppress its neighbors. A storm is in fact parasitic on the increase of entropy which would result from the mixing of warm moist and cold air to form a uniform mass. Moreover, the storm itself has a well-defined anatomy of what can almost be called functional parts.

But although certain non-living systems, of which the thunderstorm is such a striking example, do show what we can call ‘organismal characters,’ this property is nowhere found in so high a degree as it is in living organisms. Woodger (1960) pointed to the importance of the fact that living things have parts which stand in a relation of existential dependence to one another, e.g., limbs, digestive organs, circulatory systems and brains. And even in a single cell we find organelles, micro-organs so to speak, all of which seem to constitute some essential part of the cell’s machinery. So we can ask of the structures in a living organism, just as we can ask of the structures in a man-made machine, "What is this for?" We can often give fairly exact and plausible answers. It has been argued, I think convincingly, that we cannot sensibly ask that kind of question of natural non-living systems. It is surely nonsense to ask of a solar system or its parts, or of a nebula or an atomic structure, or of the parts of a mineral, "What is this for?" Any answer which we think we can give is an answer of an entirely different kind from that which we can give in the case of a man-made machine or the parts of a living organism. Another distinction, of course, concerns reproduction. If we compare this in living and non-living systems we find that in non-living systems (e.g., thunderstorms or vortex rings), new examples are generated but the new ones do not exactly reduplicate the old. In the reproduction of living organisms, however, reproduction is essentially reduplication of all the essential features of the design (Pantin, p. 75). It is the fact that the organization of living creatures, whether great or small, is determined by a molecular and therefore precisely repeatable template that makes biological reproduction possible.

So we can say: (a) What organisms do is different from what happens to stones. (b) The parts of organisms are functional and are inter-related one with another to form a system which is working in a particular way or appears to be designed for a particular direction of activity. In other words the system is directive, or if we like to use the word in a very wide and loose sense, ‘purposive.’ (c) The material substances of organisms, on the one hand, and inorganic materials, on the other, are in general very different. And there is still another difference which seems to me of great importance, and that is (d) that organisms absorb and store information, change their behavior as a result of that information, and all but the very lowest forms of animals (and perhaps these too) have special organs for detecting, sorting and organizing this information -- namely the sense organs and specialized parts of the central nervous system. I shall return to this very important aspect later.

First we must make it clear, as of course Michael Polanyi has done, that we adhere to the basic assumption that all local structural or physiological organizations and events inside the living being occur according to a local biochemical determinism. That is to say that there is no firm evidence whatever against, and an immense amount of evidence for, the view that the ‘ordinary’ laws of physics and chemistry hold within the organism just as they do within a man-made machine. The problem is how to explain the stability and reproduction of even the simplest organism in space and time in terms of the organization of the structure itself.

It is a claim of molecular biologists, a claim with which we can in general agree, that they have made very large steps towards reducing the problem of the organization of the living being (including even the problem of its hereditary processes) to physical laws. Some indeed would claim to have accomplished the whole task already. We shall come back to the question of the hereditary organization later. Here we can say that what the molecular biologists have done is to develop a model of the cell which behaves very much like a classical man-made machine, or an automaton, but one in which the ‘secret of heredity’ is found in the normal chemistry of nucleic acids and enzymes. The implication of this is that parts functioning like a machine can be described as a machine even though these parts may be single molecules; and machines are understood in terms of elementary physical laws. This is an attractive analogy and is indeed one which we have all been using for a long time. As has been explained above, we repeatedly and successfully ask the question, "What is this for?" when considering the different structures in living organisms -- quite as successfully and legitimately as we can ask this of a piston, a lever, or an electric circuit in any machine designed by man.

The Nature of the Organization Shown by Living Beings

But we can easily be trapped by this useful analogy into losing sight of two basic aspects of living beings which are clearly evident to the physicist but, curiously enough, overlooked by the biologist. It is, of course, no satisfactory answer to respond to the question, "How does a man-made machine or living machine work?" by saying that it obeys the laws of physics and chemistry. As Pattee (1971) points out, if we ask, "What is the secret of a computing machine?" no physicist would consider it in any sense an answer to say, what he already knows perfectly well, that the computer obeys all the laws of mechanics and electricity. If there is any problem in the organization of a computer, it is the unlikely constraints which, so to speak, harness these laws to perform highly specific and directive functions which have of course been built into the machine by the expertise of the designer. So of course the real problem of life is not that all the structures and molecules in the cell appear to comply with the known laws of physics and chemistry. The real mystery is the origin of the highly improbable constraints which harness these laws to fulfil particular functions. This is in fact the problem of hierarchical control. And any claim that life has been reduced to physics and chemistry must in these days, if it is to carry conviction, be accompanied by an account of the dynamics and statistics and the operating reliability of enzymes ultimately in terms of present-day groundwork of physics, namely quantum mechanical concepts. So we have two questions, "How does it work?" and "How does it arise?" The second question has in fact two facets: (a) how does it arise in the development of the individual organism during the process of growth from the moment of fertilization of the egg; and (b) how does the egg itself come to get that way -- that is to say, how can we conceive of evolution as having ‘designed’ the cell?

The Idea of Hierarchy

It is the necessary concept of hierarchy in biology which pinpoints the problem. And the problem is one of hierarchical interfaces. In common language, a hierarchy is an organization of individuals with levels of authority -- usually with one level subordinate to the next one above and ruling over the next one below. For an admirable account of this, see Koestler and Smythies (1969). So any general theory of biology (which must include the concept of hierarchy) must thereby explain the origin, operation, reliability and persistence of these constraints which harness matter to perform coherent functions according to a hierarchical plan. Pattee (1970, 1971) says:

It is the central problem of the origin of life, when aggregations of matter obeying only elementary physical laws first began to constrain individual molecules to a functional, collective behavior. It is the central problem of development where collections of cells control the growth or genetic expression of individual cells. It is the central problem of biological evolution in which groups of cells form larger and larger organizations by generating hierarchical constraints on subgroups. It is the central problem of the brain where there appears to be an unlimited possibility for new hierarchical levels of description. These are all problems of hierarchical organization. Theoretical biology must face this problem as fundamental, since hierarchical control is the essential and distinguishing characteristic of life (1970, p. 120).

He goes on to point out that a simpler set of descriptions at each level will not suffice. Biology must include a theory of the levels themselves.

I have said above that even the simplest biological mechanism is to a superlative degree more complex than the most complex of humanly constructed machines. It is perhaps instructive to consider this complexity as it appears when we look at the human body and brain. Professor Paul Weiss (1969) has put this very dramatically by pointing out that the average cell in our bodies contains about 10^5 macromolecules. The brain alone contains 10^10 cells, hence about 10^15 macromolecules. To get these figures themselves into perspective, it is worth remembering that the age of the galaxy in which our solar system resides is estimated at 10^15 sec! This is to say each of us has in our brains about as many macromolecules as there have been seconds since our part of the cosmos began to assume its present form.
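Weiss's orders of magnitude, as quoted above, multiply out directly; the figures below are simply those cited in the text, not independent estimates:

```python
# Weiss's figures as quoted in the text (orders of magnitude only).
macromolecules_per_cell = 10**5    # average cell in the body
cells_in_brain = 10**10            # cells in the brain alone
macromolecules_in_brain = macromolecules_per_cell * cells_in_brain

galactic_age_seconds = 10**15      # the age estimate quoted in the text

print(macromolecules_in_brain)                          # → 1000000000000000, i.e. 10^15
print(macromolecules_in_brain == galactic_age_seconds)  # → True
```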

This is just another way of putting the problem that Schroedinger poses in his book, What is Life? The problem is mainly that of the contrast between the degree of potential freedom, on the one hand, and, on the other hand, the perseverance and the essentially invariant pattern of the functions of such systems. (By ‘degrees of freedom’ we mean simply the number of variables necessary to describe or predict what is going on. Thus there is a potential freedom amongst trillions of molecules making up the brain, or for that matter the whole body.)

Consider this for our nervous system, and following this our thoughts, our ideas, our memories. Schroedinger was forced to the conclusion that, as he put it, "I. . . that is to say, every conscious mind that has ever said or felt I. . . am the person, if any, who controls ‘the motion of the atoms’ according to the laws of nature." This puts the problem of the boundary conditions, which have to be maintained all the time in both simple and complex examples of biological mechanisms, as it appeared to one of the most able physicists of his time who had given particular thought to these problems. Polanyi, as we have seen, assumes that all molecules work according to natural laws, but concludes that, since no one has accounted for hierarchical organization by these laws, there must be principles of organization which will in due course be found not to be reducible to the laws of physics and chemistry. Many others would be rather more cautious. Thus the physicist Pattee (1970) expresses himself as neither satisfied with the claim that physics explains how life works nor the claim that physics cannot explain how life arose. In his view (i) the concept of autonomous hierarchy involves collections of elements which are responsible for producing their own rules as contrasted with collections which are designed by an external authority to have hierarchical behavior. He then (ii) assumes, of course, that they are part of the physical world and that all the elements obey the laws of physics. He limits his definition of hierarchical control (iii) to those rules or constraints which arise within such a collection of elements but which affect individual elements of the collection. Finally, and perhaps most important, he points out (iv) that collective restraints which affect individual elements always appear to produce some integrated function of the collection. 
In common language this is to say that such hierarchical constraints produce specific actions or are ‘designed for’ some purpose.
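Pattee's fourth point -- that constraints imposed by a collection on its own elements yield an integrated function of the whole -- can be caricatured in a few lines of code. The model below (majority rule over a set of two-state elements) is an invented toy, not anything Pattee proposes: each update reads a collective property (the current majority) and imposes it on one element, and the integrated function that emerges is consensus.

```python
import random

# Toy model of a hierarchical constraint: a collective property (the majority
# state of the whole collection) is repeatedly imposed on individual elements.
# Invented illustration only, not a model taken from Pattee.
random.seed(1)  # fixed seed so the run is reproducible
elements = [random.choice([-1, 1]) for _ in range(100)]

while abs(sum(elements)) < len(elements):       # until every element agrees
    majority = 1 if sum(elements) >= 0 else -1  # the collective property
    i = random.randrange(len(elements))
    elements[i] = majority                      # constraint on one element

print(len(set(elements)))  # → 1: the collection has settled into a single state
```

Note that the 'law' governing each element never changes; what does the work is the constraint read off from the collection as a whole and fed back down to its parts.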

It is in considering the third of the above four statements, in relation to classical mechanics, that the difficulties are seen to be at their greatest. Classical physics appears to provide no way in which an explanation can be reached because it requires a ‘collection’ of particles which constrains individual particles in a manner not deducible from their individual behavior. However, it has been pointed out that in quantum mechanics the concept of the particle is changed and the fundamental idea of a continuous wave description of motion produces the stationary state or local time-independent collection of atoms and molecules. So it seems to be not impossible that hierarchical structures could be reducible to quantum mechanics, although, as we shall see later, the whole scheme of quantum mechanics is now in such confusion that, to the outsider, it seems far from clear to what extent it will be able to help. But even if structural hierarchies can be explained ultimately in this way, there is still something missing when we come to biological systems. Complexities of physical structure seldom if ever, by themselves, provide any feature which seriously suggests to biologists that such structures are in any sense alive. As has been said above, what organisms do is different from what happens to stones. The piece missing in the hierarchies of the non-biological world is, once again, function. What is so exceptional about enzymes and what creates their hierarchical significance is the simplicity of their collective function which results from their very detailed complexity. This is the core of what is meant by integrated behavior.

Self-programming

We are generally content with the view that a physical system, at least a macrophysical system, may appear completely deterministic. But the attempt to reduce living systems to such a description -- formal reductionism -- fails in part because the number of possible combinations or classifications is generally immensely larger than the number of degrees of freedom. And then, as we have seen, living systems are self-programming: the particles of which they are composed form an internal simplification, or self-representation, and these systems of self-representation which assume control of the whole seem utterly baffling in many cases because they appear to originate spontaneously. This concept of living organisms being uniquely different from non-living systems in having an internal self-representation raises a point of profound importance. It is difficult to know where in the animal kingdom one has the need to postulate ‘self-consciousness,’ ‘self-awareness’ or, to use Eccles’ phrase, ‘the experiencing self.’ We come to the conclusion that as we proceed from man downwards through the animal series, the lower we go the less useful (as predictive of animal behavior and as leading to an understanding of animal nature) the concept becomes, until with the lowest animals and with the plants its usefulness becomes vanishingly small. But if it be true that all living organisms have internal self-representation, does this not amount to saying that the seeds of self-consciousness are present in all living creatures -- from the virus and bacterium upwards?

Another theoretical physicist, Walter M. Elsasser (1966), has approached some of these problems in an original manner by considering the number of internal configurations in which a complex system may exist in theory. Astronomers assume the existing finite total of atomic nuclei is of the order 10^80 but, as we have seen, the lifetime of our galaxy is assumed to be no more than 10^18 sec. Elsasser argues that the number of distinguishable events which can occur in a finite universe is correspondingly limited. In considering these systems of increasing complexity we must soon reach a point where the number of internal configurations in which the system may exist will vastly exceed the number of actual examples of any one given class that can possibly be collected in our universe. It follows that, if the discrepancy between the number of possible states and the number of possible samples is large enough, we can assert without fear of contradiction that no two members of a class, e.g., no two members of an animal or plant species, not even two bacteria, can ever be in the same internal state.
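The arithmetic behind Elsasser's argument is easy to check. Even a drastically simplified system of n two-state components has 2^n internal configurations, and that count overtakes the text's figure of about 10^80 atomic nuclei in the universe once n passes a few hundred -- far below the 10^15 macromolecules of a single brain:

```python
# How many two-state components suffice for the number of internal
# configurations (2**n) to exceed the quoted total of atomic nuclei (10^80)?
UNIVERSE_NUCLEI = 10**80  # the figure quoted in the text

n = 1
while 2**n <= UNIVERSE_NUCLEI:
    n += 1
print(n)  # → 266: fewer than 300 binary components already out-configure the universe
```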

This leads Elsasser to suggest another characteristic of living organisms as distinct from non-living. He says that in physics the classes of things, e.g., atoms, protons, electrons, etc., are very homogeneous. It is a fundamental assumption that all the helium atoms in the universe are identical; though when we come to larger aggregations, for a class to be fully homogeneous the objects would have to be not only chemically equivalent but also in the same quantum state. That is to say, for complete homogeneity all the members of a class have to be at the absolute zero point of the temperature scale so that their molecules are in the ground state. But the point is that in principle we do have, and can work with, the ideal of homogeneous classes in physics. And all fundamental questions of theory may be evaluated in terms of these. This can never be the case in biology, even in principle, as the number of individuals in any class in existence at one time is far too small to allow statistical prediction to have any physical significance. The resulting conclusion is that while physics is a science dealing with essentially homogeneous systems and classes, biology is a science of inhomogeneous systems and classes. In physical terms one may say that an organism must be a system that is endlessly engaged in producing, regenerating, or increasing inhomogeneity, and thereby the phenomenon of individuality, at all levels of its functioning.

Polanyi seems so convinced of the impossibility of the physical explanation of these biological constraints that he often appears to be speaking as a vitalist. That is to say, he is coming near to returning to the original idea of an indwelling vital principle guiding the organism in some manner completely independent of its physical nature. Elsasser does not go as far as this, and he suggests that there is room for (and we must assume the existence of) separate laws -- biotonic laws, as he calls them -- which are compatible with the quantum laws but not deducible in principle from them. Two other physicists have considered this matter carefully, E. H. Kerner (in Waddington 1970), and D. Bohm (in Bastin 1971). Bohm indeed appears to find not only room for, but, even within physics itself, a necessity for ‘hidden variables,’ which the usual scheme of quantum theory has ruled out as a matter of principle. Kerner, considering this, hesitates as yet to espouse either biotonic law or the incompleteness of quantal law, for he feels that no clear set of observations seems thus far to compel either. And we must not forget that a quantum-mechanical calculation even on one particular bacterial cell would be incorrect for every other cell, even of the same species -- a point clearly made by Elsasser in his conclusions about the heterogeneity of the material with which the biologist has to deal. Finally one must here bring in again the most important biological discovery of recent years, and this is the discovery that the processes of life are directed by programs -- which, besides manifesting activity, also in some extraordinary way produce their own programs. Professor Longuet-Higgins (in Waddington 1970) sums this up from the biological point of view by showing that it results in the biological concept of the program being something different from the purely physical idea of the program. 
And we can now point to an actual program tape in the heart of the cell, namely the DNA molecule. Even more remarkable is that programmed activity which we find in living nature will not merely determine the way in which the organism reacts to its environment; it actually controls the structure of the organism and its replication, including the replication of the programs themselves. This is what we mean by saying once again (a statement that can hardly be reiterated too often) that life is not merely programmed activity but self-programmed activity.
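The notion of a program that reproduces its own program has a minimal computational analogue in the 'quine': a program whose output is an exact copy of its own source text. It is offered here only as a loose analogy for the self-replicating DNA 'tape,' not as a model of anything biological.

```python
# A quine: the two code lines below print an exact copy of themselves.
# Loose analogy only: a program among whose products is the program itself.
s = 's = %r\nprint(s %% s)'
print(s % s)
```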

Monod (1970, 1971) has suggested that the combination of processes which must have occurred to produce life from inanimate matter is so extremely improbable that its occurrence may indeed have been a unique event (an event of zero probability). Monod also rightly points out that the uniqueness of the genetic code could be the result of natural selection. But even if we assume this, the extraordinary problem remains that the genetic code is without any biological function unless and until it is translated, that is, unless it leads to the synthesis of the proteins whose structure is laid down by the code. Now Monod shows that the machinery by which the cell (or at least the non-primitive cell, which is the only one we know) translates the code "consists of at least fifty macromolecular components which are themselves coded in DNA." Hence the code cannot be translated except by using certain products of its translation. As Sir Karl Popper comments (1972, 1974), "this constitutes a really baffling circle: a vicious circle, it seems, for any attempt to form a model or a theory, of the genesis of the genetic code." In fact this undreamed-of breakthrough of molecular biology, far from solving the problem of the origin of life, has made it, in Sir Karl Popper’s opinion, a greater riddle than it was before. Thus we may be faced with the possibility that the origin of life, like the origin of the universe, becomes an impenetrable barrier to science and a block which resists all attempts to reduce biology to chemistry and physics.

The Second Frontier

We come now to the question: How far does the existence of conscious awareness, as we ourselves experience it, constitute a new domain over and above that established by the phenomena of the lower ranges of the biological world?

But first we must consider for a moment what exactly we imply by the term, for it has many overtones of meaning. We can at least say that it involves three basic components: First, an inward awareness or sensibility -- what might be described as ‘having an internal perception.’ Second, an awareness of self, of one’s own existence. Third, the idea of consciousness includes that of unity, implying, in some rather vague sense, the fusion of the totality of the impressions, thoughts and feelings, which make up a person’s conscious being, into a single whole. As Lashley put it, the process of awareness implies a belief in an internal perceiving agent, an ‘I’ or ‘self’ which does the perceiving. This in its turn implies that the agent selects and unifies elements into a unique field of consciousness. It follows, again, that this perceiving self (a) transcends time and space, bringing into immediate relation events remote from one another in these dimensions, and (b) makes possible in man the creation of aesthetic and ethical values held to be absolute.

There are four main attitudes of mind open to those who consider these problems. They are as follows: (I) We may accept the Cartesian dichotomy as essentially valid, which of course commits us to Dualism. This may be of two types or intensities: (a) one that allows a two-way causal interaction between mental and physical events, which we may call the strong form of dualism; or (b) the weak form (epiphenomenalism) which allows mental events to be effects but never causes. (II) We may accept Berkeley’s position and regard mental entities as real and the idea of material entities as at best a convenient abstraction. (III) We may acknowledge material entities as real but dismiss the idea of mental entities as an abstraction. Finally (IV) we may assert that certain events are at one and the same time both mental and material -- the mental, so to speak, being the interior view of that which has a physical exterior. This is usually known as the ‘Double Aspect Theory’ or ‘Identity Hypothesis.’

Of the above I, for the moment, rule out (II) as being regarded, by most scientists and many philosophers, as verging on the absurd. Similarly (III) can be eliminated as clearly false, since it negates the whole of experience (though quite a number of physiologists and a vast number of scientists who have not thought deeply on the matter are attracted to it as being superficially convenient and ‘tidy’). So we are left with (I) and (IV), both of which involve us in some form of ‘dualist’ commitment. In fact a great many biologists and physicists of great reputation -- Paul Weiss, Polanyi, Elsasser, Eccles and Sherrington, to mention only a few -- are presumably dualists, of one type and degree or another. And this leads us to the views of philosophers and to the central and ever-present problem of reductionism.

Amongst philosophers and logicians, particularly amongst those who have given special attention to scientific problems, many names could be mentioned, including that great thinker, L. T. Hobhouse, whom I like to mention first because I owe so much to his writings. When Hobhouse speaks of what he calls "the correlation of governing principles" -- a concept which involves the recognition of abstract moral law and eternal values which are good in themselves -- he has surely passed far beyond the possibility of any form of scientific reductionism. Again the views of Sir Karl Popper and the logician William Kneale are, in some sense at least, unashamedly dualist. The former indeed sees no future at all for philosophical reductionism. To these I might have added the name of Professor Stephan Koerner, and finally that bright comet in the present-day firmament which, according to certain observers, comes trailing dazzling clouds of uncertain composition, namely Dr. Noam Chomsky.

Whitehead came to a position of what could be called ‘panpsychism.’ Philosophically this is of course eminently respectable and indeed most attractive. As a biologist I have long been immensely impressed by and beholden to Whitehead’s philosophy of organism (Process and Reality), in that it seems to me that he is the first great philosopher who really took trouble to comprehend the biological developments of his time. My trouble with panpsychism, as advanced by Whitehead and, for instance, Charles Hartshorne, is that I see no conceivable scientific possibility of investigating its significance. It is easy enough to assume some sort of psychic element in the ultimate physical particles; indeed Eddington himself toyed with that idea. It may be that, as Carl von Weizsaecker (1968) has boldly suggested, since the concept of a particle itself is just the description of a connection which exists between phenomena, there may, if we are prepared to jump into strict metaphysical language, be no reason why what we call ‘matter’ should not in fact be ‘spirit.’ This I think amounts to saying that not only physical theories but biological theories portray not nature itself but our knowledge of nature. Again the trouble here is that I see no conceivable scientific possibility of confirmation.

Nor does the combination of physical units, in so far as modern physics reveals them, suggest to us how, or by what laws, psychic units could similarly combine and so produce what we recognize as the mental. Moreover there is a lack of parallelism between the laws of the combination of the physical units and those governing the development of mind. We can indeed assume with panpsychism that the mental, spiritual, artistic and ethical values which we experience really are in some sense one with the electrons and other primary components of which the world is made. But yet it does not appear to be so. Consequently a great leap of faith is required to believe it -- a leap without, so it seems to me, any scientific evidence. Yet reductionism requires a much greater faith. In the former case we are required to believe something which is eminently sensible but which cannot be scientifically confirmed; in the latter we are required to believe in a source of value added to or injected into a natural process as complexity develops, which we are unable to understand -- either this, or we have to regard values as pure epiphenomena.

One might choose many different examples to illustrate the basic problem of reductionism and its refutation. However, I think that, rather than coming at once to biology, one secures a better perspective by starting with physics. First, we must say that, as a working hypothesis, reductionism is the major basic tool found to be in use among the great majority of active experimental scientists. And with good reason: for when and where it is successful, it achieves the most impressive of all scientific advances. In fact as a working tool it is indispensable, and all of us use it all the time. But many scientists go on from there to accept it without question, not merely as a tool, but as a philosophy. That is, they assume that all the activities of our minds and bodies, all the changes and complexities shown in the study of animate or inanimate matter, are controlled by the same set of fundamental laws.

To the ordinary working scientist there is an obvious course of action, perhaps one should call it a temptation. Having first assumed that there is a basic set of fundamental laws, the temptation is to proceed from there to what seems an obvious corollary, that everything obeys the same fundamental laws. Then the only scientists who are studying anything really fundamental are those who are working on these laws. A physicist colleague of mine to whom I am much indebted (Anderson 1972) has pointed out, in a discussion of the topic "More is Different," that if this were so, then the only scientists who would certainly be regarded as carrying out ‘fundamental’ work would be some astrophysicists, some elementary particle physicists, some logicians and other mathematicians, and a few more. This reductionist point of view, which seeks knowledge by analysis, almost inevitably leads its proponents to assume, quite unwarrantably, that all that is then required is to work out the consequences of these laws by the prosecution of what is called ‘extensive science,’ whereupon all truth will be revealed! But there is a tremendous fallacy here. For even the apparent success of the reductionist hypothesis in certain areas does not by any means imply the practicability of a ‘constructionist’ one -- to reduce everything to simple fundamental laws does not imply the ability to start from those laws and reconstruct the universe. In fact, "the more the elementary particle physicists tell us about the nature of the fundamental laws, the less relevance they seem to have to the very real problems of the rest of science, much less of society."

Actually it is a mistake to be too analytical in one’s approach and to assume that all new and fundamental laws come from logical analysis. They do not. Take the arguments for the building of a thousand billion electron volt accelerator. We often hear it argued that, in short, intensive research goes for the fundamental laws, extensive research for the explanation of phenomena in terms of known fundamental laws. It is often assumed to follow that, once new fundamental laws are discovered, a large and ever increasing activity begins in order to apply the discoveries to hitherto unexplained phenomena. Thus the frontiers of science extend all along a long line from the newest and most modern intensive research, over the extensive research recently spawned by the intensive research of yesterday, to the broad and well developed web of extensive research activities based on intensive research of past decades. Hence, on this view, ordinary physicists are applied particle physicists, chemists are applied physicists, biologists are applied chemists, psychologists applied biologists, social scientists applied psychologists, etc. Anderson states, "I believe this is emphatically not true: I believe that at each level of organization, or of scale, types of behavior open up which are entirely new, and basically unpredictable from a concentration on the more and more detailed analysis of the entities which make up the objects of these higher level studies." True, to understand worms we need to understand cells and macromolecules, but not mesons and nucleons. And even the comprehension of cells and macromolecules can never tell us all the important things that need to be known about worms. At each level in fact there are fundamental problems requiring intensive research which cannot be solved by further microscopic analysis but need, as Anderson says, "some combination of inspiration, analysis and synthesis."
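Anderson's claim -- that fully known elementary laws can open up higher-level behavior which is not simply read off from those laws -- has a standard toy illustration (my example, not Anderson's): an elementary cellular automaton such as Wolfram's Rule 30, whose local update rule takes one line to state, yet whose global pattern is notoriously hard to predict except by running it.

```python
# Rule 30: each cell's next state is a fixed function of its 3-cell
# neighbourhood, yet the global pattern from one live cell is complex.
RULE = 30

def step(cells):
    n = len(cells)
    # Encode (left, centre, right) as a 3-bit index into the rule number.
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 31
cells[15] = 1  # a single live cell
for _ in range(15):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```

Knowing the 'fundamental law' (the rule table) perfectly does not spare us the simulation: the higher-level structure appears only when the system is actually run, which is the constructionist gap the text describes.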

Popper, in a recently published consideration of the problem of scientific reductionism, commences by asking three questions: (1) Can we reduce or hope to reduce biology to physics or to physics and chemistry? (2) Can we reduce to biology or hope to reduce to biology those subjective conscious experiences which we may ascribe to animals, and, if question (1) is answered in the affirmative, can we reduce them further to physics and chemistry? (3) Can we reduce, or hope to reduce, the consciousness of self and the creativeness of the human mind to animal experience, and thus, if questions (1) and (2) are answered in the affirmative, to physics and chemistry?

Before proceeding to answer these questions, Popper makes the following points: First, he suggests that scientists have to be reductionists in the sense that nothing is as great a success in science as successful reduction. Indeed it is perhaps the most successful form conceivable of all scientific explanations, since it results in the identification of the unknown with the known. Second, Popper suggests that scientists have to be reductionists in their methods, either naive or else more or less critical reductionists, and sometimes desperate critical reductionists, since, as he points out, hardly any major reduction in science has ever been completely successful. There is almost always an unresolved residue left by even the most successful attempts at reduction. Third, Popper contends that there do not seem to be any reasons in favor of philosophical reductionism. But nevertheless the working scientists should continue to attempt reductions for the reason that we can learn an immense amount even from unsuccessful attempts at reduction, and that problems left open in this way belong to the most valuable intellectual possessions of science. In other words, emphasis on our scientific failures can do us a lot of good.

Popper proceeds to discuss some of the classical examples of reductionism. Einstein wrote in 1920: "According to our present conceptions the elementary particles (that is, electrons and protons) are nothing else than condensations of the electromagnetic field. . . . our view of the universe presents two realities . . . namely, gravitational ether and electromagnetic field or -- as they might also be called -- space and matter." By using the term "nothing else" here, Einstein implied that this was an example of complete reduction -- as Popper remarks, "reduction in the grand style." Einstein was not the only one, and by 1932 almost all the leading physicists -- Eddington, Dirac, Einstein, Bohr, de Broglie, Schroedinger, Heisenberg, Born and Pauli -- accepted the reductionist view uncompromisingly. Popper gives a quotation from R. A. Millikan (1932) in which this physicist says that nothing more beautifully simplifying has ever happened in the history of science than the whole series of discoveries culminating about 1914, which finally brought about practically universal acceptance of the theory that the material world contains but two fundamental entities, namely, positive and negative electrons.

But, as Popper points out, this reductionist passage was written in the very nick of time, for it was in the same year that Chadwick announced his discovery of the neutron and Anderson (1933) first discovered the positron. Nevertheless, many of the greatest physicists, such as Eddington (1936), continued to believe that with the advent of quantum mechanics the electromagnetic theory of matter had entered its final state and that all matter consisted of electrons and protons. Popper (1972) points out that, though we still believe the repulsive forces to be electromagnetic and still hold Bohr’s theory of the periodic system of elements in a modified form, everything else in this beautiful reduction of the universe to an electromagnetic universe with two stable particles as building blocks has by now disintegrated. An immense number of important new facts has been learnt, but the simplicity of the reduction has disappeared. This refutation of the reductionist position started with the discovery of neutrons and positrons and has continued with the discovery of new elementary particles ever since. But particle theory is not even the main difficulty. "The real disruption is due to the discovery of new kinds of forces, especially of nuclear forces irreducible to electromagnetic and gravitational forces." So now we have at least four very different and still irreducible kinds of forces in physics: gravitation, weak decay interactions, electromagnetic forces and nuclear forces.

In discussing Pauling’s work (1959) on the nature of the chemical bond, Popper further asks: even supposing that we have a fully satisfactory theory of nuclear forces, of the periodic system of the elements and their isotopes, and especially of the stability and instability of the heavier nuclei, have we thereby a fully satisfactory reduction of chemistry to physics? The answer is ‘No.’ For an entirely new idea had to be brought in, an idea which is somewhat foreign to physical theory -- the idea of evolution, of the history of our universe, of cosmogeny. This is so because the present theory of the periodic system explains the heavier nuclei as being composed of lighter ones, ultimately of hydrogen nuclei (protons) and neutrons (which might in turn be regarded as a kind of composition of protons and electrons). The theory assumes that the heavier elements have properties which can only result from a very rare process in the universe which makes several hydrogen nuclei fuse into heavier ones. These heavier elements are at present regarded as products of super-novae explosions. The present estimate is that hydrogen forms some 75% of all matter by mass and helium about 25%, so that all the heavier nuclei appear to be extremely rare -- not more than 1 or 2% by mass. Hence the earth, and presumably the other planets, are made of extremely rare materials. The present most widely accepted theory of the origin of the universe -- that of the hot big bang -- claims that most of the helium is the product of the big bang itself and was formed within the very first minutes of the existence of the expanding universe, and that the background radiation which is now being studied so intensively provides some evidence of the date of this initial explosion. Moreover it is only under the circumstances of the intense gravitational contraction which leads to super-novae outbursts that the heavier elements have been formed.
Two things of great interest emerge from these considerations. First, under the conditions of the once supposed universally distributed primeval nebula, the action of the gravitational forces, and consequently the existence of the heavy elements, could never have been envisaged. The heavy elements can thus be regarded as genuine emergents in the strict sense. Second, in so far as chemistry has been reduced, it has not been reduced to physics but to cosmology (or, as Popper says, even to cosmogeny). And present views seem to imply that the possibility of ever reducing chemistry to physics is remote indeed.

Popper points out that nuclear forces are thus potentialities which become operative only under conditions that are extremely rare, namely tremendous temperatures and pressures. He goes on to suggest that this comes very close to a theory of essential properties which have the characteristics of predestination or pre-established harmony. At any rate "a solar system like ours depends, according to present theories, on their pre-existence." The same close approach to the idea of pre-established harmony applies to the production of the heavy elements by gravitational forces; and if this is the best that can be done, then any philosophy of pre-established harmony is an admission of the failure of the method of reducing one thing to another. "Thus the reduction of chemistry to physics is far from complete even if we admit unrealistically favorable assumptions. Rather, this reduction assumes a theory of cosmic evolution or cosmogeny and in addition two kinds of pre-established harmony in order to allow sleeping potentialities, or relative propensities of low probability built into the hydrogen atom, to become activated. Thus we are operating with emergent properties." In fact the so-called reduction of chemistry is to a physics that assumes evolution, cosmology, cosmogeny and the existence of emergent properties.

Karl Popper also develops the thesis that the idea of problem-solving is quite foreign to the subject matter of the non-biological sciences but seems to have emerged together with life. Even though there is something like natural selection at work prior to the origin of life, we cannot say that for atomic nuclei survival is a ‘problem’ in any sense of the term. Nor can we say that crystals have problems of growth or propagation or survival. But life, as Popper says, is faced with the problem of survival from the very beginning; indeed we can describe life, if we like, as problem-solving, and living organisms as the only problem-solving complexes in the universe. This, of course, does not mean that we have to suppose that all life has a consciousness of the problems that have to be solved. That is obvious nonsense. Popper agrees that there can be little doubt that many animals possess consciousness and can be, at times, even conscious of a problem. But, he says: "The emergence of consciousness in the animal kingdom is perhaps as great a mystery as the origin of life itself." He will, however, agree that there can be little doubt that consciousness in animals has some function and can be looked at as if it were a bodily organ. We have therefore to assume, "difficult as this may be, that it is a product of evolution, of natural selection." Of course, for the behaviourists, who tend to deny the existence of consciousness altogether (a position quite fashionable at present), there is no problem. But, as Popper says, "a theory of the non-existence of consciousness cannot be taken any more seriously than a theory of the non-existence of matter." Such theories, he says, solve the problem of the relationship between body and mind by a radical simplification: the denial either of body or of mind. But, as Popper says, "in my opinion it is too cheap."
In fact there seems to be no prospect whatsoever of reducing the human consciousness of self and the creativeness of the human mind to any other explanatory level. Here Jacques Monod (1971) would appear to agree with Popper, in that he calls the problem of the human central nervous system ‘the second frontier,’ comparing its difficulty with that of the ‘first frontier,’ the problem of the origin of life itself. Popper indeed believes that the reduction of chemistry to physics, of biology to chemistry, of animal conscious or subconscious experience to biology, and of consciousness itself and the creativeness of the human mind to animal experience, are all reductions whose complete success seems most unlikely, if not impossible.

So, after this very rambling discussion, I end with the query included in my title. Faced with this extraordinary impasse (or rather, not one but a whole series of them), does process thought help?

 

REFERENCES

Anderson, P. W. 1972. "More is Different: Broken Symmetry and the Nature of the Hierarchical Structure of Science." Science, 177, 393-396.

Ayala, F. J. and Dobzhansky, T. (eds.). 1974. Studies in the Philosophy of Biology, New York and London: Macmillan.

Bastin, T., (ed.). 1971. Quantum Theory and Beyond. London: Cambridge University Press.

Bronowski, J. 1969. Nature and Knowledge. London and Eugene: University of Oregon Lecture Publications.

Elsasser, W. M. 1966. Atom and Organism. Princeton University Press.

Hobhouse, L. T. 1927. Development and Purpose. Second edition. London: Macmillan.

Koestler, A., and Smythies, J. R. (eds.). 1969. Beyond Reductionism. London: Hutchinson; New York: Macmillan.

Lewis, J. (ed.). 1974. Beyond Chance and Necessity. London: Garnstone Press.

Monod, J. 1971. Chance and Necessity. London: Collins.

Pantin, C.F.A. 1968. The Relations Between the Sciences. London: Cambridge University Press.

Pattee, H. 1970. "The Problem of Biological Hierarchy." In Waddington, C. H. (ed.). 1970, pp. 117-136.

Pattee, H. 1971. "Can Life Explain Quantum Mechanics?" In Bastin, T. (ed.). 1971, pp. 307-320.

Popper, K. R. 1972. Objective Knowledge: An Evolutionary Approach. Oxford: Clarendon Press.

Popper, K. R. 1974. In Ayala, F. J. and Dobzhansky, T. H. (eds.). 1974.

Schroedinger, E. 1944. What is Life? London: Cambridge University Press.

Thorpe, W. H. 1951, 1963. Learning and Instinct in Animals. London: Methuen; Cambridge: Harvard University Press.

Thorpe, W. H. 1974. Animal Nature and Human Nature. New York: Doubleday (Anchor Press) and London: Methuen.

Waddington, C. H. (ed.). 1970. Towards a Theoretical Biology. 3 Volumes. Edinburgh University Press; Chicago: Aldine Publishing Company.

Weiss, P. 1969. "The Living System: Determinism Stratified." In Koestler, A. and Smythies, J. R. (eds.) 1969, pp. 3-42.

Weizsaecker, C. von. 1968. Theoria to Theory, 3, 7-18.

Whittaker, E. 1949. From Euclid to Eddington. London: Cambridge University Press.

 

RESPONSE TO THORPE’S PAPER

By Bernhard Rensch

Bernhard Rensch is at the Zoologisches Institut of the Westfaelische Wilhelms-Universitaet in Muenster, West Germany.

 

You mentioned that there seems to be no prospect of reducing the human consciousness of self to any other explanatory level. In my opinion we can do this by analysing the ontogenetic development of this concept in the young child. In my paper I point out that the concept of one’s own self develops gradually in early childhood. The child soon learns to distinguish his own body from the environment, because reciprocal sensations arise only when he touches a spot of his own body, and because strong feelings, particularly pain, arise within his own body. In this way the child begins to distinguish two kinds of psychic phenomena: those which indicate his own body and those which have to do with the environment. Later on the concept of one’s own self is enhanced by remembering personal experiences, knowing one’s own name, and so on. Finally, concepts of extramental ‘things’ also originate.

The basic experiences which I mentioned are also available to the higher animals. We can therefore assume that they have at least prestages of such self-consciousness. Social animals can thus learn to act according to their rank in the society. And apes surely have a concept of their own self. This can, for instance, be shown by experiments with a mirror. As soon as they begin to recognize themselves in a mirror, they begin to look into the mirror when they clean or touch a spot of color which the experimenter had put on their forehead, or when they clean their teeth or try to inspect their backside. (Gallup 1968; Lethmate and Duecker 1973 [references given at close of Rensch’s essay]).