
Mind in Nature: The Interface of Science and Philosophy by John B. Cobb, Jr. and David Ray Griffin


John B. Cobb, Jr. is Professor of Theology at the School of Theology at Claremont, Avery Professor of Religion at Claremont Graduate School, and Director of the Center for Process Studies. David Ray Griffin teaches Philosophy of Religion at the School of Theology at Claremont and Claremont Graduate School and is Executive Director of the Center for Process Studies. Published by University Press of America, 1977. This book was prepared for Religion Online by Ted and Winnie Brock.


Chapter 2: Three Counter-Strategies to Reductionism in Science by Francis Zucker


Francis J. Zucker is affiliated with the Max-Planck-Institut in Starnberg, West Germany, and is now with the Center for the Philosophy and History of Science at Boston University.

Reductionism and its Counter-Strategies

I will briefly describe three research programs in this essay that are motivated by opposition to physical reductionism.

All scientific analysis seeks to isolate elements which, in suitable combination, account for the appearances and events in some domain of nature; and this enterprise, carried to completion, necessarily lands one with the basic entities of physics. Thus physical reductionism is rightly called the methodological ‘superparadigm’ of modern science, and it seems odd indeed to look for research programs in opposition to it.

To be sure, the physical reductionism I have in mind is ontological, not methodological: it is the position that the universe ‘consists of’ the basic entities disclosed by physics, which are, in their diverse arrangements, all there ‘really is’ in the world -- all else, including the subject himself and his perceptions, being reduced to the derivative status of ‘epiphenomena,’ which H. Jonas defines as the powerless byplays of physical happenings that follow their own rules entirely (Jonas 1966, p. 88). But can methodological reductionism be so easily separated from this ontology? Do we not, as scientists, insist on passing beyond method to existence? If we acknowledge scientific truth to be the most certain we possess, then surely its objective correlatives can be no mere ‘thought economies’ (Mach) or ‘tools for manipulating nature’ (instrumentalism), but they must in some sense be ‘real.’ Once the physical constituents are granted an independent existence, though, they appear to usurp the whole: gathered together in systems of arbitrary complexity, but all conforming to their inherent laws of combination, they now become coextensive with the universe they so fully explain -- including, therefore, the reflecting subject, which is capable of juxtaposing itself to the rest of the world, and of enunciating the theory of the very entities that constitute both. Though this feat demands a greater faith in the miraculous than we may wish to muster -- and prompts some of us to dismiss such an ontology out of hand -- it yet appears no mere trivial matter to disentangle the two phases of reductionism and thus avoid throwing out the baby with the bath water.

The task would perhaps be simpler were it not bedeviled by the heritage of Cartesian dualism, which I think still serves most of us in the West as the point of departure in our ontological deliberations. When in the seventeenth century a reality split into an ‘extended’ and a ‘thinking’ substance (res extensa and res cogitans) replaced the hierarchically structured, divinely governed universe of Antiquity and the Middle Ages, it did provide modern science with an ontological frame more congenial to its rise. But this frame, which had to encompass so radically fragmented a universe, struck some minds almost immediately, and ever since, as being unequal to its task. Ontological unity was thereupon sought by opting for a monist solution: for idealism (the monism of the res cogitans), or materialism (that of the res extensa), or for the hoped-for middle ground of phenomenalism. Materialism, which pictured the basic entities in mechanical terms (with billiard ball-like atoms), represents physical reductionism in its classic form.1 Today’s physics, having turned progressively more abstract and rejecting thing-like models for its elementary particles, may puzzle the old-style materialist in its apparent break with the res extensa, but does not in itself challenge his monist claim.

If we reject the idealist and phenomenalist monisms for the reason cited (the reality claim of scientific analysis), we can only hope for an escape from ontological reductionism by outflanking dualism itself. Both Whitehead and his contemporary, Husserl, held that dualism could indeed be overcome philosophically, by seeking the fundamental unity in a domain of reality that lies anterior, in some sense, to the sharp subject-object split, justified though that split be in certain contexts. ‘Process’ is Whitehead’s label for that domain, the ‘pre-predicative’ and the ‘life world’ are Husserl’s. In their attempts at clarifying the two-faced reductionism issue, there seem to me to be important differences between the two philosophers,2 but these need not concern us here so long as we have license to believe that disentanglement is philosophically possible at all, that anti-reductionism can mean opposition to the misplacement of ontological unity, and not to the scientific enterprise as such.3

Philosophical consideration may thus help clear the air, but I believe that only in conjunction with the development of science itself can it lead to the construction of an ontology of nature, i.e., of a ‘body-social of scientific knowledge’ that is neither a hangover from the old hierarchical order nor captive to a monist pseudo-unity. If you have no clear vision of the perfect society, Marcuse says, register your protest against what hurts most in the old; the new will supposedly emerge in the sequel. While this attempt at emancipation through negation may not lead far with respect to the body-social, I will try it here in describing the three research programs in terms of the ‘No’ each of them says to one of the basic strands of the reductionism syndrome: to the dualism that spawned it, to the ‘nothing-but’ of its monism, and to the fragmenting sort of mathematical conceptualization it one-sidedly encourages.

The first strategy tries to explode the ontological claim of physical reductionism from within, so to speak. It takes a leaf from the tremendous changes within physics between the nineteenth and the twentieth century, and decides to continue riding the tiger. ‘Think physics to the end’ is C. F. von Weizsaecker’s way of putting it: since physics itself, as the development of quantum mechanics shows, overcame the fragmentation of physical reality into building stone-like basic entities, so it might also, as it approaches its final, ‘unitary’ form (von Weizsaecker’s term for the still unknown theory that subsumes the currently known theories plus the long searched-for elementary theory in one system), overcome the fragmentation of the whole of reality into distinct domains of the physical and the mental. Indeed, von Weizsaecker’s researches suggest that the axioms of unitary physics are precisely the formal expression of the preconditions of (conceptualizable) experience; that the subject as knower, in other words, is encountered at the very base of physics, as Kant had (in a slightly different manner) already surmised. Thus physics itself, for whose sake Descartes had so radically separated the ‘extended substance’ from total reality, may be able to subvert the ontological reductionism which was an offspring of that separation.

To explain the second strategy, let us recall the display of the reductionist order of the sciences along the ‘Comtean ladder,’ which arranges the forms of life and the corresponding specialized disciplines in an ascending series of rungs starting with physics at the base, followed by chemistry, biology, psychology, sociology and, depending on one’s inclination, history and religion. The ‘simples’ of each rung, i.e., the basic conceptions of that discipline, are ‘nothing but’ a complicated structure composed of the simples of the rung beneath it. If human behavior, for example, is conceived as made up of a network of pre-programmed responses (psychology), these can be reduced, say, to conditional reflexes (biology), which are in turn nothing but complicated physical chemistry. Anti-reductionism, in the common understanding, here argues for the irreducibility, in some sense, of the higher forms of reality to the lower -- of biology to physics, or of religion to sociology -- and this the second strategy attempts to do. To this end, it accents the relative autonomy of each rung by formulating the simples appropriate to it, i.e., by conceptually ‘assimilating’ (to use Bohm’s term [Bohm, 1974, p. 58]) each type of fact into its own sort of order. I therefore term this strategy, ‘Cultivate the simples appropriate to each order,’ and cite Waddington’s ‘epigenetic landscape’ along with his ‘chreods’ as an outstanding example of its fruits. This strategy tries to meet Whitehead’s demand for a natural philosophy that regards "the red glow of the sunset. . . as much part of nature as the molecules and electric waves by which men of science would explain the phenomenon" (Whitehead, 1970, p. 29). That the ontologically unprejudiced exhibition of the domains of ‘sunset colors’ and of electric waves, each in its own terms, does indeed allow us to account for the "coherence of things" (ibid.) 
without succumbing to the nothing-but of the waves, is one of the points I shall try to make below. The problem is to discover in what sense this coherence, which of course includes the methodologically reductionist relation, is nevertheless richer than it. I suggest that language is sufficiently powerful to express this enrichment, and that it is a form of knowledge which is being thus expressed, albeit not of scientific knowledge in the sense of Weizsaecker ("testable predictions on precisely formulated alternatives"); from this point of view, the first and second strategy appear to be complementary. It must be admitted, however, that the second is weaker than the first: opposition to physicalist monism dictates its employment, but it cannot by itself overthrow it. For the reductionist can always argue that, while it may be good heuristics to develop concepts peculiarly fitted to each of the rungs (if they are not already available in ordinary language), these will be necessarily anthropomorphic and their sole function in the scientific enterprise is to invite reduction.

The third strategy is rather speculative. Like the second, it seeks the simples in structuring any domain of knowledge, but tries to give this search a sharper edge by applying a lesson learned from physics. So far, simples have been taken as concepts that seem elementary in an intuitive way: a leaf, for example, is an intuitively obvious constituent in plant morphology. As one learns one’s way about in any field of inquiry, one gradually becomes aware of the ‘chreods’ that lie close to the level of immediate perception through the senses or the intellect. But perhaps our ordinary perceptive powers are not sufficiently acute to discover the deeper-lying chreods. Indeed we know from physics that, as we dig deeper, the basic notions become very abstract, i.e., non-intuitive. The first strategy is satisfied with pointing out that, in the end, at least the axioms of unitary physics become transparent. It is possible, however, that progress on the higher rungs, for example in biology, depends on an enrichment of our perceptive faculties, and since it is structures we wish to recognize in science, this means an enrichment of our mathematical imagination. This is precisely what David Bohm has been after in his work with ‘implicate’ (or ‘enfolded’) orders, which so far he has discussed only in relation to physics, but clearly means to apply in other fields as well, such as in perception theory and in cognitive psychology (Bohm 1973 and 1974). Bohm believes that if we learn to think in ‘non-local’ orders, i.e., in orders quite other than those based on the Cartesian res extensa, we will be able to understand the aspect of wholeness in quantum mechanics, which in a formal sense we already know to be there, intuitively as well. Implicate orders express "the intimate interconnection of different systems that are not in spatial contact" (Bohm and Hiley 1975), and will be needed wherever spatial (or temporal) wholeness is to be given a mathematical form. 
The example presented below under the title, ‘Is there a mathematics for wholes?’, is an attempt to apply a primitive instance of implicate order to plant morphology. The germ of this idea was already known to Whitehead, and to the mathematician F. Klein, both of whom speculated about its use in classical mechanics (Whitehead 1898). In adapting it to biology, George Adams (d.1963) introduced the notion of ‘formative forces,’ which correspond to the ‘formative causes’ mentioned by Bohm (Bohm 1973, p. 22). The true testing ground for the implicate-order strategy, it seems to me, may indeed be biology rather than physics, where abstract methods are so powerful as to perhaps make it dispensable: just as the old style building-block materialist was refuted not by philosophical polemic, but by the one authority in which he trusted, i.e., by physics itself, so the nothing-but reductionist in contemporary biology will modify his views should it be possible some day to provide him with a mathematical language that fills the currently existing gap between our formal knowledge of gene structure and combinations, and our intuitive apprehension of growth and shape. This language, both Adams and Bohm agree, will have to be that of an implicate topological or projective order, not the explicate metrical order of classical physics. What the precise relationship between the novel morphology and traditional biochemistry might be is a question I have not been able to resolve; Bohm’s and Adams’ expectations diverge on this point.

Think Physics to the End

Why should one expect physics to develop toward a unitary theory, and what could be the meaning of such a theory?

Physics develops in a sequence of ‘closed’ theories, to use Heisenberg’s term for a mathematical structure with associated physical semantics that cannot be improved upon by means of ‘small’ changes (as, for example, Newton’s law of gravitation cannot be improved by modifying the number two in the exponent of the distance term). When a ‘deeper’ closed theory is found (as, in the case of gravitation, general relativity), the older theory is not simply discredited, but its predictions are upheld within certain parameter ranges specified by the newer theory, which adds correct predictions of its own outside those ranges. Classical mechanics united terrestrial and celestial kinematics in a unified dynamics. During the past century, electromagnetic theory united electrostatics, magnetostatics, and network theory with optics in one stroke; special relativity combined classical mechanics with electromagnetic theory; general relativity combined the theory of gravitation with physical geometry and special relativity; and quantum mechanics united much of physics with, at least in principle, all of chemistry. It is at least as puzzling to think of an infinite progression of ever more general theories in physics as it is to postulate a final, unitary theory. (What is still missing is chiefly a theory of elementary particles, which Weizsaecker believes to be implicit in quantum mechanics itself, to become explicit once the correct symmetry group is applied.)

Weizsaecker conceives the unitary theory, which encompasses all others as special cases, to be the formal expression of the preconditions of experience (Weizsaecker 1971a and 1971b). This thesis was inspired by Kant, but goes well beyond Kant: only the regulatives of science, e.g., the principle of causality, were in Kant’s opinion a priori; the special laws would have to be formulated on the basis of special experience. If unitary physics spans all special laws, however, then all physics dispenses with special experience and depends solely on its preconditions. In principle, then, unitary physics ought to be deducible from a sufficiently detailed analysis of terms such as time, logic, observation, number. So long as this task appears too formidable, we must try to construct the theory by working from both ends: by axiomatizing the existing most general theory, i.e., quantum mechanics, in a manner that invites interpretation in terms of plausible preconditions, and by logically analyzing the preconditions that occur to one upon reflection so as to reconstruct the theory. In trying to close the gap, one finds additional preconditions of which one had previously been unaware, and, looking in the other direction, one tries out new axioms, for example concerning symmetry conditions suggested by the local analyses. The most cursory analysis of experience shows it to presuppose the structure of time: we learn from facts of the past to predict future events, and we test these when they are no longer future but present or past. Weizsaecker tries to develop a logic of temporal propositions that incorporates these features. The logic would be a probability theory based on a time-dependent axiomatics, which somehow must include the axiom of indeterminism (or its equivalent, the ‘superposition’ axiom).

Granted this much, Weizsaecker can reconstruct the Hilbert space structure of quantum mechanics for the case of the most elementary objects conceivable. An object of this sort is not a smallest something in physical space; it is, rather, the smallest unit of information, i.e., a single yes-no decision, termed an ‘ur,’ or a ‘simple alternative.’ Its Hilbert space is a two-dimensional complex vector space, and a simple mathematical development shows that complex bodies constituted by urs admit of a natural description in a three-dimensional real space.
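Weizsaecker's claim that complex bodies constituted by urs admit a natural description in three-dimensional real space can be made at least plausible with a standard piece of quantum mechanics (my illustration, not his): the Bloch-sphere construction, which maps a normalized vector in the two-dimensional complex Hilbert space of a single yes-no alternative onto a point of the unit sphere in ordinary three-dimensional space. A minimal sketch:

```python
# Illustrative only: the Bloch-sphere map from C^2 to R^3.
def bloch_vector(a, b):
    """Map a normalized state (a, b) of a single yes-no alternative
    (an 'ur') to a point on the unit sphere in three-dimensional
    real space, via the Pauli expectation values."""
    n = (abs(a) ** 2 + abs(b) ** 2) ** 0.5
    a, b = a / n, b / n
    x = 2 * (a.conjugate() * b).real   # <sigma_x>
    y = 2 * (a.conjugate() * b).imag   # <sigma_y>
    z = abs(a) ** 2 - abs(b) ** 2      # <sigma_z>
    return (x, y, z)

# The two answers of the simple alternative map to antipodal poles:
bloch_vector(1, 0)   # -> (0.0, 0.0, 1.0)
bloch_vector(0, 1)   # -> (0.0, 0.0, -1.0)
# Superpositions fill out the rest of the sphere:
bloch_vector(1, 1j)  # ~ (0, 1, 0)
```

The unit-sphere image of C^2 is of course far less than a derivation of physical space; it only exhibits the kind of connection between the two-dimensional complex and the three-dimensional real structure that Weizsaecker's program asserts.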

Thus Weizsaecker explains the structure of space from quantum mechanics, and quantum mechanics from the structure of time. Although the details of his project are far from being carried out, the basic scheme stands: the unity of physics, and therefore (since knowledge and the known are not to be separated) the unity of physical nature, reflect the unity of time. ‘Physical’ nature, to Weizsaecker, means nature objectified -- any part of nature, including, for example, a living cell, or even the human mind which, insofar as it is analyzable in terms of yes-no questions, is fully subject to the laws of unitary physics. Therein lies Weizsaecker’s ‘reductionism.’ It does not reduce mind to palpable 19th-century matter, nor even to a something situated in physical space; and it leaves open the possibility of other, non-objectivating modes of encountering mind, in which another ‘Thou’ is met. If matter is what obeys the laws of physics, then it is an aspect of all that exists in the universe, and rather than juxtaposing it to life or mind, it is but a mode of experiencing these. A reductionism that claims no more than this is a reductionism with all the poison drawn from it.

Cultivate the Simples Appropriate to Each Order

Ordinary language provides us with elementary concepts on all levels of nature. Deliberate scientific work begins with a search for elementary terms -- the ‘primitives’ or ‘simples’ -- even more peculiarly fitted to the analysis of the subject at hand, together with the search for patterns in which to view their interrelations. It has been a principle of good craftsmanship with all empiricists from Occam to Bridgman to tailor their terms and theories as closely to the phenomenally given as possible. Occam’s razor, Bridgman’s admonition to make the least down payment on future conceptualizations (Bridgman 1959, p. 10), are born of the same spirit of faithfulness unto the phenomena. (Mach, too, was of this persuasion.) Tackling color theory in this spirit, as I will now show, one immediately obtains the simples appropriate (in a clearly specifiable sense) to that field; these simples are not the electromagnetic frequencies high school physics tells us colors ‘really’ are. Examples of this sort, taken from well-established sciences, may serve as practicing ground for biology and the higher-rung domains in which the shaping of the appropriate simples is still an open problem. Here I will confine myself to the ‘lowest’ level of that discipline (optics) where it is closest to physics, and where the nature of its relative autonomy can therefore best be studied.

Let us recall, to begin with, the ‘color circle’ on which we usually represent the universe of possible hues and saturations. (A third elementary determinant of color, brightness, need not concern us in the present context; its representation would require a third dimension.) As we move around the circle clock-fashion starting, say, at red, we pass through the successively neighboring hues: orange, yellow, green, blue, violet, purple, and back to red. At the rim of the circle, the hues are at their most saturated (darkest); as we move inward along a spoke, the hue remains constant but becomes progressively less saturated, until we reach white at the center. Actually, this qualitative (mathematically speaking: topological) representation of the color world is two steps removed from physics, not one. The representation I want is obtained by noting that any color can be matched by superposing three standard (but arbitrarily chosen) ‘primary lights,’ and choosing as coordinates the relative contribution of each of these primaries. As a result, one now finds oneself in a so-called projective, rather than topological, plane, in which the color ‘circle’ is actually a noncircular closed curve whose exact shape need not concern us here, with the White center somewhere inside. The point to hold onto is that the mathematics on this particular rung of color science (which used to be referred to quite universally as that of ‘psycho-physics’) is neither quantitative (metric), as in physics, nor purely qualitative (topological), as in the description that satisfies our common-sense curiosity about the structure of the color world, but in between, namely, projective. I will now discuss the appositeness of two types of color primitives with respect to this rung.
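The passage from three primary contributions to projective coordinates can be made concrete in a few lines. The sketch below is my illustration (the ‘primaries’ are arbitrary, as the text allows); it shows why only ratios matter: scaling all three contributions together changes brightness but not the point in the color plane.

```python
def chromaticity(p1, p2, p3):
    """Projective coordinates of a color: the relative contribution of
    each of three (arbitrarily chosen) primary lights.  Only the ratios
    matter, so the three coordinates are scaled to sum to one."""
    total = p1 + p2 + p3
    if total == 0:
        raise ValueError("zero light carries no hue or saturation")
    return (p1 / total, p2 / total, p3 / total)

# Scaling all three primaries by a common factor (a brighter or dimmer
# light of the same color) leaves the point in the color plane unchanged:
chromaticity(1, 2, 3) == chromaticity(10, 20, 30)  # True
```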

All of us are familiar with Newton’s decomposition of sunlight (white) by means of a prism: he allowed a ray to enter through a small hole, and displayed a sequence of highly saturated colors from red through green to violet on a screen beyond the prism. It is a characteristic of these ‘spectral colors’ that they cannot be further resolved into constituent hues by passage through another prism; that they stand in a one-to-one correspondence with a particular angle of refraction (i.e., with a spatial property); and that, in the context of physical optics, each turns out to be quantifiable in terms of a spatial periodicity (the wavelength). It is important to note, however, that only the spectral colors correspond one-to-one to a wavelength; every other color corresponds to a set of possible spectral-color distributions, the so-called spectral ‘metamers.’ In other words, the Newtonian spectral wavelengths designate one-to-one every point on the rim of the color circle (or projective closed curve) between red and violet, but do not designate in a unique fashion the purple segment of the rim, nor any of the points inside the circle.

Almost two hundred years ago, the poet Goethe experimented with prismatic colors and hit upon an entirely different set of color simples (Goethe 1791). He held up a prism against a white wall, expecting to see the Newtonian spectrum displayed, and noticed instead that the wall remained white except where it was crossed by dark cracks, which the prism resolved into hues much brighter than the Newtonian ones. Goethe concluded that the minimum condition for the genesis of prismatic colors was a simple black-white border, and the colors he then saw, termed ‘edge colors’ today, he took as the true primitives for a theory of color, while he considered the Newtonian spectral colors to be compound, i.e., derivative. It is a simple matter today to prove that, by combining these primitives pairwise in two possible ways, every color point in the projective (or topological) plane is covered in strict one-to-one correspondence. The two combinations are ‘parallel’ and ‘series’: i.e., one edge-color filter is inserted in front of one light source, the other in front of a second, and the two beams are superposed on a screen (parallel combination), or the two edge-color filters are placed one behind the other in front of a single source (series combination). The Newtonian spectral colors are obtained through the series combination of complementary pairs of edge colors (pairs that add up to White when combined in parallel), and thus are in fact compound in terms of these simples, as Goethe had claimed. Conversely, the edge colors can be resolved into Newtonian spectra; there is no reason, other than ontological prejudice, for calling one primitive ‘more fundamental’ than the other.
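The parallel and series combinations can be modeled on coarsely discretized spectra. The transmission values below are hypothetical, chosen only so that the two edge filters are complementary (they sum to White): in parallel the pair reconstitutes White, while in series it isolates a narrow band, as the text says of the Newtonian spectral colors.

```python
def parallel(source, filt_a, filt_b):
    """Parallel combination: one filter in front of each of two equal
    sources, the two beams superposed on a screen (intensities add)."""
    return [s * fa + s * fb for s, fa, fb in zip(source, filt_a, filt_b)]

def series(source, filt_a, filt_b):
    """Series combination: both filters in front of a single source
    (transmissions multiply)."""
    return [s * fa * fb for s, fa, fb in zip(source, filt_a, filt_b)]

# Four coarse spectral bands; the filter values are hypothetical:
white  = [1.0, 1.0, 1.0, 1.0]
edge_1 = [1.0, 0.6, 0.1, 0.0]   # an 'edge color' passing one end
edge_2 = [0.0, 0.4, 0.9, 1.0]   # its complement: edge_1 + edge_2 = white

# In parallel the complementary pair reconstitutes White; in series it
# leaves only a narrow band in the middle of the spectrum.
```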

The Newtonian simples are quantifiable and specify color uniquely insofar as it links up with physics; the edge colors are projectively defined and specify the colors whenever we talk of the color universe as a whole -- as of course we do in any theory of color, not only on the psychophysical level, but on the next higher (topological) level as well. It turns out that there are still higher rungs in color theory (which Goethe tried to structure with his notion of the ‘Urphaenomen,’ the ‘archetypal phenomenon’), and that on each of these the edge colors retain their basic role.

How do the levels of color physics (wavelengths) and psycho-physics (projective color plane) cohere? The reductionist answer is that the electromagnetic input spectrum, processed by the neurophysiology of the cones in the retina, generates the color ‘signal space’ represented by the color plane. Can our second strategy add any further dimension to this coherence?

In most textbooks on color science, the following relation between the wavelengths λ and λc corresponding to complementary hues is mentioned, with the added remark that it seems to be ‘purely empirical,’ i.e., an accidental relation and not part of any theory:

(1) (a - λ)(λc - b) = c,

where a, b, c are numerical constants. I would like to suggest, however, that there is a way of viewing this relation such that it becomes perfectly meaningful, provided only we look in the ‘counter-reductionist’ direction, i.e., we try to understand the lower rung in terms of the higher, instead of explaining the higher in terms of the lower. The richer coherence we seek depends on that understanding.

Complementary hues, it can be shown, lie on a straight line through the White point in the projective plane. Their designation in terms of wavelengths, as in (1), is a foreign body in the color plane, since a projective (or topological) universe knows nothing of a metric; the wavelengths are simply imported from the level of physics, i.e., from interference measurements made in physical space. To parametrize hues in a manner intrinsic to the color plane (i.e., properly ‘assimilated’ to that order), we must proceed differently. The hues appearing angularly distributed about the White point, we introduce a projective (non-metric) angular measure, which is given by

(2) (d - h)(hc - e) = f,

where d, e, f are arbitrary constants, and h, hc are the hue-parameter values of a complementary pair. It is clear that, by setting the arbitrary constants in (2) equal to the particular values a,b,c in (1), the hue scale will be in the units of wavelength. In other words, the class of permissible metrics for hue, specified by the projective measure (2), includes the metric of physical space as a special case. It is decisive to realize that there is nothing matter-of-course about this state of affairs. What we are here being told is that the metric of physical space is a specialization of a projective measure not descriptive of phenomena in physical space (such as the projective measure relating the electromagnetic tensors in vacuum, which of course must include the metric of physical space as a special case), but of phenomena in a space of sense experience. In other (psycho-physical) spaces of sense experience (I have specifically examined only that of acoustic pitch and of optical brightness), the relation to the physical metric seems to be similarly that of a higher geometry (albeit affine, rather than projective) to one of its metric models. We are therefore able to understand that some of the physical quantities are born out of the more qualitative mathematical structures of sense experience as a straightforward quantification thereof. This manner of coherence between two adjacent rungs is not explicable in terms of the signal transformation scheme mentioned before; rather, an evolutionary principle of neurophysiological realizability seems to be involved that is yet to be clarified.

Is there a Mathematics for Wholes?

We normally think of a plane as made up of the totality of its points, but the converse is equally possible: a point can be thought of as the totality of planes passing through it, and in fact is thus conceived in the field of mathematics mentioned in the preceding section, viz., projective geometry. Figures and theorems of point-like and of plane-like character appear always in parallel in that geometry and with equal justification, however odd the latter at first may seem to our intuition, which is unused to structures whose parts are larger (in ordinary space) than the whole. We can even specialize projective geometry in a manner parallel to the specialization that gives us Euclidean geometry, to obtain what Whitehead termed ‘anti-space,’ and Adams ‘counter-space’ (Adams and Whicher 1960): a space characterized by an ‘absolute point center,’ corresponding to the ‘absolute plane at infinity’ that lifts Euclidean geometry out of the totality of geometries contained in projective space. What this geometry (and projective geometry in general) teaches us is that structure need not be internal, it can be external to the object as viewed in ordinary space. Thus we have the choice, in plane projective geometry, of constructing an ellipse (using a straightedge only) point by point, or else tangent by tangent; we can feel, as we work with the latter, how the periphery shapes the figure. (In three dimensions, it is tangent planes rather than lines; and in the differential geometric and topological generalizations, it is tangent surface elements rather than planes.) Whitehead was apparently the first to wonder why this plane-like geometry should not be applicable in nature, when its parallel, the point-like geometry, is so ubiquitous; he did begin noticing projective elements in the science of statics, and F. Klein’s student, E. Study, explored the ‘plane-wise’ representation of mechanical rotation, an idea further developed by G. Adams (in unpublished manuscripts).
It is also clear that electromagnetic theory can be understood in these terms; in fact, the Maxwell equations are a purely projective theory in vacuum, the metric entering only through the material properties, such as the permeability and the dielectric constant.
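The point-wise and tangent-wise constructions of an ellipse can be put side by side in a few lines (the semi-axes below are chosen arbitrarily): the tangent at parameter t is described by line coordinates, and the point of the ellipse at the same parameter satisfies the line's equation, so the envelope of the tangents traces the same curve the points do.

```python
import math

def ellipse_point(t, a=3.0, b=2.0):
    """Point-wise generation: the point of x^2/a^2 + y^2/b^2 = 1
    at parameter t."""
    return (a * math.cos(t), b * math.sin(t))

def ellipse_tangent(t, a=3.0, b=2.0):
    """Plane-wise (here: line-wise) generation: coordinates (u, v, w)
    of the tangent line u*x + v*y + w = 0 at the same parameter t."""
    return (math.cos(t) / a, math.sin(t) / b, -1.0)

# The point at t lies on the tangent at t:
t = 0.7
x, y = ellipse_point(t)
u, v, w = ellipse_tangent(t)
# u*x + v*y + w equals cos^2(t) + sin^2(t) - 1, i.e. zero up to rounding.
```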

The exploration of the projective and counter-space points of view in physics does promise to throw new light on old relations, but seems unlikely, unless I am mistaken, to produce new results. Perhaps physics can serve as a training ground for the novel mathematical imagination here required -- and this would be a worth-while effort, provided Adams is right in thinking that counter-space geometry will lead to new results in biology (Adams and Whicher 1960). His point is that morphological changes in living things, if we view them intuitively, do seem to proceed from, or in, the periphery -- think of the blastula or neurula in embryology, or of the unfolding of leaves in botany. Accordingly, he introduces force fields that act in the periphery rather than in a point as in physics, and conceives of every growing point (such as the tip of the shoot) as the absolute center of a counter-space. In this way Adams does seem to catch some growth processes in mathematical form, but it is not clear to me how the surface-like forces are supposed to act on the material they shape. Adams’ language here sounds vitalist; the forces are said to ‘cause’ morphological changes by interacting with matter at points where it is in a ‘chaotic state.’ Yet this is surely incorrect; at least on the microlevel matter is extremely highly organized. (Its correctness on the level of macro-shape throws no light on our problem.) I think Bohm (1973) is right in invoking the causa formalis in connection with implicate orders, ruling out, for the formative forces, an explanatory role of the causa efficiens type, which is reserved to physicalist explanation. Even in saying this, of course, we are merely posing a difficult problem, not solving it.4

If this approach is still too primitive, how can we further develop it in the direction of Bohm’s implicate orders? What the two have in common is that Bohm’s paradigm, the Hilbert space implicate order, is also a projective geometry, though one greatly enriched in comparison with the ordinary projective geometry we have so far considered. Two large steps separate the two. The first corresponds to the transition from ‘geometric optics’ (rays and phase fronts) to ‘physical optics’ (waves, interference). Whereas in the ordinary projective case, space and counter-space (explicate and implicate) are rigidly correlated, their association with a wave equation unfreezes them to some extent by introducing periodic or even nonperiodic motion. The form of this motion (explicate) is represented implicately by spectra, which are thus a further training ground on the road to the more basic implicate orders. Into every point of the intensity distribution across an aperture, for example, the entire spectrum is enfolded as a superposition of moving planar structures; and conversely, every spectral wave front sweeps over all points in space, thus contributing to the intensity everywhere. The second step then introduces operators to achieve the full Hilbert space structure. I cannot imagine that anything less will do for the ‘new morphology.’ Just as quantum mechanics is needed to, and does perfectly, fill the gap between the atomic structure of crystals and their beautiful shape in space, so too it will be needed to link the biochemistry of genetic code and cellular processes to the growth and shape of living things -- provided, that is, that such a link can be found at all. Quantum mechanics thus plays a central role in the first as well as the third strategy, but for different reasons: in the first, the analysis of its meaning takes the sting out of reductionism, in the third it becomes a guide in the development of our perceptual and conceptual appreciation of living forms.
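The enfolding relation between aperture and spectrum described above is, in modern terms, just the Fourier transform, and it can be exhibited numerically. The sketch below (an illustration of the mathematical point, not of anything in Zucker's text; the slit pattern and grid size are arbitrary choices) shows that the field value at any single point of the aperture is a superposition of all spectral components, while each spectral component in turn gathers contributions from every point:

```python
import numpy as np

# A one-dimensional "aperture" field: a rectangular slit pattern.
# This is the explicate order -- a form laid out in space.
n = 256
x = np.arange(n)
field = np.where((x > 96) & (x < 160), 1.0, 0.0)

# The spectrum is the implicate order: each Fourier coefficient
# sums contributions from ALL points of the aperture at once.
spectrum = np.fft.fft(field)

# Conversely, the field at any single point x0 is recovered only by
# superposing ALL spectral components (plane waves sweeping over
# the whole aperture) -- the inverse transform evaluated at x0.
x0 = 128
k = np.arange(n)
value_at_x0 = np.sum(spectrum * np.exp(2j * np.pi * k * x0 / n)) / n

# The point value is enfolded into, and unfolded from, the whole.
print(round(value_at_x0.real, 6))  # matches field[x0] = 1.0
```

Each coefficient of `spectrum` depends on the entire slit pattern, and every point of the pattern depends on the entire spectrum: this is the rigid explicate-implicate correlation of the first step, before operators enrich it to the full Hilbert space structure.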

I mention in conclusion the work of R. Thom (1970/71), which presents an enrichment of our morphological understanding perhaps more immediately applicable to the morphological sciences than quantum mechanics can be at the present moment; in fact, I suspect Thom’s method to be indicative of the bridge that needs to be constructed between the macro- and the micro-realm. The topological mathematics he employs lends itself to interpretation in terms of implicate orders, although Thom is not himself interested in making this explicit. It remains to be seen whether his approach (or the cohomology theory used by Bohm, or any other approach now known) is adequate to the mathematical innovation sought: to the direct description of growth, and of the metamorphosis of forms, in space and time.

 

NOTES

1) For an excellent discussion of ontological physical reductionism as a fission product of Cartesian dualism, see Jonas 1966, pp. 38-92.

2) Some points of similarity between Whitehead’s and Husserl’s philosophies are discussed by R. Wiehl in his thorough Introduction to Whitehead 1971.

3) This is also the position of D. Bohm.

4) The fact that George Adams was a disciple of Rudolf Steiner, who taught a ‘Western style’ of consciousness expansion called ‘Anthroposophy,’ and thought well of projective geometry, explains his constant references to the ‘cosmic’ and ‘spiritual’ significance of counter-space. Although I fear his hopes for the idea may be inflated, I think nevertheless that its suggestive wealth merits attention.

 

REFERENCES

Adams, G. and O. Whicher 1960. Die Pflanze im Raum und Gegenraum. Stuttgart: Verlag Freies Geistesleben. (An earlier, shorter version appeared in England in 1952, under the title The Plant between Sun and Earth.)

Bohm, D. 1973. "Fragmentation and Wholeness." Van Leer Jerusalem Foundation Series. Humanities Press.

Bohm, D. 1974. "Holography and a New Order in Physics." Technology and Society (Bath University Press, England) 8/2:58. A fuller version is given in Bohm, D., Foundations of Physics 1:359-381 (1971) and 3:139-168 (1973).

Bohm, D. and B. Hiley. 1975. "On the Intuitive Understanding of Non-Locality as Implied by Quantum Theory." Foundations of Physics 5:93-109.

Bridgman, P. W. 1959. The Way Things Are. Cambridge: Harvard University Press.

Goethe, J. W. 1791. "Beitraege zur Optik." In Kuhn, D., Matthaei, R., Schmid, G., Troll, W., and Wolf, K L. (eds.), Die Schriften zur Naturwissenschaft. Vol. 1 of Leopoldina (Halle) edition, Weimar, 1947. For an excellent discussion, see Goegelein, C., Zu Goethes Begriff von Wissenschaft. Munich: Carl Hanser Verlag, 1972.

Jonas, H. 1966. The Phenomenon of Life. New York: Harper & Row.

Thom, R. 1970/71. In Waddington, C. H. (ed.), Towards a Theoretical Biology. Vol. 3, pp. 89-116, and Vol. 4, pp. 68-82. Edinburgh University Press.

Weizsaecker, C. F. von. 1971a. In Bastin, T. (ed.), Quantum Theory and Beyond. Cambridge University Press, pp. 229-262.

Weizsaecker, C. F. von. 1971b. Die Einheit der Natur. Munich: Carl Hanser Verlag.

Whitehead, A. N. 1898. Universal Algebra. Cambridge University Press.

Whitehead, A. N. 1970. The Concept of Nature. Cambridge University Press.

Whitehead, A. N. 1971. Abenteuer der Ideen. With an Introduction by R. Wiehl. Frankfurt: Suhrkamp Verlag.
