21 September 2010

What makes for good theology?

One of my authoritative others, Robert Jenson, once defined theology thus:
Theology is the persistent asking and disciplined answering of the question: Given that the Christian community has in the past said and done such-and-such, what should it do now? The question may be divided: (1) What has the Christian community in fact said and done? and, (2) What should it say and do in the future? (Story and Promise, vii)
Of course, central to that proclamation and action is the Eternal Love who has created all things and is drawing all things back to Godself.

I have always believed that, if this is what theology is really about, whatever else it does it should fascinate us and challenge us. Recently I discovered that Augustine believed something very similar: ‘the eloquent should speak in such a way as to instruct, delight, and move their listeners’ (doc. Chr. 4.74). And building on that formula for eloquent oratory, he gave the following instruction to would-be preachers (and theologians):
The aim of our orator, then, when speaking of things that are just and holy and good – and he should not speak of anything else – the aim, as I say, that he pursues to the best of his ability when he speaks of these things is to be listened to with understanding, with pleasure, and with obedience. (doc. Chr. 4.87)

17 September 2010

Read well to write well

Some advice on writing from an unexpected source:
Eloquence is picked up more readily by those who read and listen to the words of the eloquent than by those who follow the rules of eloquence. (Augustine, doc. Chr. 4.8)
Of course he was referring to preaching and rhetoric (Augustine was one of the great rhetors of his day), but the advice applies equally to the written word.

Actually, the advice is probably redundant for any decent writer. Certainly all the professional writers (of fiction) that I know became what they are because of their love of books. They could no more stop reading than they could stop breathing. By the same token, anyone who has to be told to get their nose out of the guides to writing a bestseller and read some good books instead will probably never be a good writer (successful, perhaps, but never really good because they write for the wrong reasons).

16 September 2010

Christendom as the inversion of Christianity

I am currently editing a massive volume by Enrique Dussel. Here is a little something from it:
‘Christendom’ is the inversion of Christianity; it is the Christianity that has negotiated by the bureaucratic corruption of its institutions the state’s divine legitimation (in exchange for cushy jobs for the ecclesial bureaucrats). It is the ‘inversion’ of the Christianity of Jesus of Nazareth.

Hardly a new sentiment, but it seemed a particularly appropriate quotation with which to acknowledge the papal visit to the UK.

14 September 2010

Praise him in the typo

I was on retreat at Alnmouth Friary last month and was struck by a change in wording in the Eucharistic prayers. In their older forms the opening dialogue includes the lines:
Let us give thanks to the Lord our God.

It is right to give him thanks and praise.

In Common Worship the response now reads,
It is right to give thanks and praise.
A very different sentiment!

It would be nice to think that this was just a typo like the one in Friday morning prayers in the shorter form of Celebrating Common Praise. However, I fear that it was actually a cack-handed attempt to impose inclusive language on the text. Now I have nothing against inclusive language properly implemented. Indeed part of my day job is ensuring that inclusive language is properly implemented in the texts I edit. However, introducing inclusivity should not change the meaning of the text quite so much. So from now on, whenever I take part in Church of England Eucharists, you will hear me say:
It is right to give God thanks and praise.

11 September 2010

Some writing advice from Walter Benjamin

I am taking things easy while recovering from an operation. So here, in lieu of a thought of my own, is some writing advice from Walter Benjamin’s ‘Writer’s Technique in Thirteen Theses’:
VII. Never stop writing because you have run out of ideas. Literary honour requires that one break off only at an appointed moment (a mealtime, a meeting) or at the end of the work.

05 September 2010

The rediscovery of cosmology

5.25 Introduction: ‘Newtonian’ limits to Newtonian physics

From the perspective of Newtonian physics, reality could be exhaustively understood in terms of particles moving in well-defined ways under the influence of certain forces. Of course it was recognised from the outset that real life was more complex than that. Many everyday situations involved too many factors to be amenable to such straightforward treatment. In such cases physicists had to be satisfied with approximations. Nevertheless it was assumed that, in principle, these awkward cases could be treated exactly.[1]

Take, for example, the motion of planets round the Sun. Using his laws of motion, Newton was able to provide an exact solution to the two-body problem – the case of two physical bodies interacting gravitationally but isolated from other influences. However, Newton’s successors were unable to create exact solutions for larger ensembles of bodies (e.g. the Solar System or even just the Sun, Earth and Moon considered in isolation from all the rest – the three-body problem). Instead they had to adopt a method of approximations – beginning with the simple case, they asked how the presence of an additional element might perturb the orbits of the two bodies, then they calculated the effect of that change on the third body, then corrected the original calculations in the light of that, and so on to higher and higher degrees of accuracy.

It was not until the end of the nineteenth century that astronomers finally abandoned the search for an exact solution to the three-body problem. In 1889 a young mathematical physicist, Henri Poincaré, won a prize competition sponsored by King Oscar II of Sweden with an essay demonstrating the impossibility of such a solution.

5.26 Recognising chaos

Poincaré may justly be called the father of chaos theory. In addition to demonstrating that there were physical systems which could not be precisely analysed using Newtonian physics, he was among the first physicists to comment on the extreme sensitivity of many physical systems to small variations in initial conditions. Little notice was taken of his remarks when he made them in 1903 but, since then, physicists have become much more conscious of the extent to which such chaotic behaviour is to be found in the physical world. This new awareness of chaos and complexity is not so much a recent discovery as a gradually changing perception resulting from a range of factors.

The research that has resulted from Poincaré’s own work on perturbation theory is one of these factors. This has revealed the existence of chaotic behaviour in simple isolated systems. Take, for example, the motion of balls on a snooker table. It can be shown that their motion is so sensitive to external factors that in order to predict the position of the cue ball after a minute of motion (and collisions), one would have to take into account the gravitational attraction of electrons on the far side of the galaxy! Even something as apparently simple as the tossing of a coin or the motion of a water droplet on a convex surface is so sensitive to minute variations in the environment as to be unpredictable.
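The cumulative effect of such sensitivity is easy to demonstrate numerically. The sketch below (my illustration, not a calculation from the text) iterates the logistic map x ↦ 4x(1 − x), a standard textbook toy model of chaotic dynamics, from two starting points that differ only in the tenth decimal place:

```python
# Sensitive dependence on initial conditions in the logistic map
# x -> 4x(1 - x), a standard toy model of chaotic dynamics.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # perturb the tenth decimal place

# At first the trajectories are indistinguishable; the gap roughly
# doubles at every step, and within a few dozen iterations the two
# trajectories bear no relation to one another.
print(abs(a[1] - b[1]), abs(a[50] - b[50]))  # early vs late separation
```

Tossed coins and snooker breaks are continuous systems rather than iterated maps, but the mechanism is the same: errors grow roughly exponentially, so any finite precision in the initial data is exhausted after a short time.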

A second area of research that has encouraged physicists to take chaos more seriously is that of turbulent flow in fluids. Its relevance to engineering and meteorology ensured that this was a growth area in research. Unlike the simple situations described above, turbulence is not merely a matter of uncertainties in the system created by random motion at the molecular level. That aspect of fluid dynamics can be handled statistically. The real issue is the sudden emergence of random motion on a macroscopic scale – eddies and currents involving large collections of molecules. Such situations are bounded but unstable – in many such cases we are now able to generate equations that tell us the boundaries within which the motion will take place. Inside those boundaries, however, the particles involved are subject to irregular fluctuations quite independent of any external perturbation.

A third aspect in the development of chaos theory has been the availability of more and more powerful electronic computers. They have allowed physicists to extend classical perturbation theory to situations that previously were too complicated to calculate. As a result more and more situations have been revealed to be chaotic.

Finally, the widespread acceptance of quantum theory may also have played an important part in changing the attitudes of physicists to unpredictable situations. This is not to suggest that quantum theory is directly relevant to chaos at an everyday level.[2] However, the acceptance of quantum uncertainties may have made it easier for physicists to accept a degree of unpredictability about the physical world at other levels.

5.27 Coming to terms with chaos

This new awareness of complexity implies a profound change in the way in which many physical scientists view the world, a new perception of the relation between freedom and necessity. This can be summarised in the apparently paradoxical statement that chaos is deterministic. The situations described above are not completely anarchic. On the contrary, we are dealing with ensembles of bodies moving at the everyday level where Newtonian laws of motion still hold sway. The behaviour of these chaotic situations is generated by fixed rules that do not involve any elements of chance. Many of the physicists of chaos would insist that, in principle, the future is still completely determined by the past in these situations. In an accessible introduction to the subject Alan Cook advocates that the term ‘deterministic chaos’ should always be used (Cook, 1998:31-41). However, these are situations which are so sensitive to the initial conditions that, in spite of the determinism of the associated physical laws, it is impossible to predict future behaviour. Deterministic physics no longer has the power to impose a deterministic outcome. According to one of the classic papers on the subject, ‘There is order in chaos: underlying chaotic behaviour there are elegant geometric forms that create randomness in the same way as a card dealer shuffles a deck of cards or a blender mixes cake batter’ (Crutchfield et al., 1995:35).

At first glance this may seem entirely negative. The admission that chaos is far more widespread than previously realised appears to impose new fundamental limits on our ability to make predictions. If prediction and control are indeed fundamental to science then chaos is a serious matter. On the other hand, the deterministic element in these chaotic situations implies that many apparently random phenomena may be more predictable than had been thought. The exciting thing about chaos theory is the way in which, across many different sciences, researchers have been able to take a second look at apparently random information and, while not being able to predict exact outcomes, nevertheless explain the random behaviour in terms of simple laws. This is true of meteorology. It can also be applied to dripping taps or to many biological systems (e.g. the mathematical physics of a heart-beat).

5.28 Implications for the philosophy of science

As we have just hinted, the emergence of a science of chaos has profound implications for our understanding of what science is and can do.

One may disagree with the notion that the raison d’être of science is prediction and control. Nevertheless, prediction still retains a central place in the scientific method. How else are we to test our scientific models? The classical approach is to make predictions from the model and devise experiments to test those predictions. Here, however, we are faced with situations in which such predictions seem inherently impossible. What is required, in fact, is a more subtle approach to prediction. What we observe may well be random (or pseudo-random). However, the deterministic element in mathematical chaos implies that the random observations will be clustered into predictable patterns.
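One way to see what ‘predictable patterns in random observations’ can amount to is a toy example (mine, not the authors’): iterates of the chaotic logistic map x ↦ 4x(1 − x) are individually unpredictable, yet their long-run distribution is fixed and sharply non-uniform, so the statistics of the orbit are testable even when the orbit itself is not.

```python
# Long-run statistics of a chaotic orbit: individual iterates of the
# logistic map x -> 4x(1 - x) look random, but the orbit spends far
# more time near the edges of [0, 1] than near the middle, in a
# reproducible proportion.
def orbit_fractions(x0=0.123, steps=100_000):
    x = x0
    edge = middle = 0
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
        if x < 0.1:             # time spent in [0, 0.1)
            edge += 1
        elif 0.45 <= x < 0.55:  # time spent in a same-width middle bin
            middle += 1
    return edge / steps, middle / steps

edge_frac, middle_frac = orbit_fractions()
print(edge_frac, middle_frac)  # the edge bin wins by a wide margin
```

This is the shape of prediction available for chaotic systems: not the next observation, but the statistical envelope into which the observations must fall.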
A second important implication of chaos theory has to do with the continuing tendency to reductionism in the sciences (see 6.11-6.11.1). Chaos and complexity highlight the fact that only in the very simple systems that formed the backbone of classical physics is it true that the whole is merely the sum of the parts. Chaotic systems simply cannot be understood by breaking them down into their component parts and seeing how they fit together again.
Closely allied to this challenge to reductionism is a question about the possibility of completeness in physics. The reality of chaos undermines the hope that such completeness can be achieved by an increasingly detailed understanding of fundamental physical forces and constituents. It also provides a physical basis for the concept of emergence that is so important in philosophical and theological perspectives on the life and human sciences. The behaviour of chaotic systems suggests that interaction of components at one level can lead to complex global behaviour at another level – behaviour that is not predictable from a knowledge of the component parts. Indeed some chaos scientists suggest that ‘chaos provides a mechanism that allows for free will within a world governed by deterministic laws’ (Crutchfield et al., 1995:48).
However, we consider caution is needed at this point. Chaotic randomness is not complete randomness. True, we are unable to predict the detailed outcome of a chaotic scenario. However, the mathematics of chaos does permit us to predict the limits of the possible outcomes. This is randomness within constraints – deterministic constraints. In fact, chaos theory allows us to extend our physical understanding of the world into new areas specifically by applying deterministic covering laws to situations that were previously thought to be completely random. This could be taken as evidence that determinism really works.
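The point about predicting ‘the limits of the possible outcomes’ can be illustrated with a toy model (again my sketch, not the authors’): for the logistic map x ↦ r·x(1 − x) with r = 3.7 the orbit is chaotic, but elementary analysis of the rule itself predicts the band into which the orbit must settle.

```python
# Deterministic constraints on a chaotic orbit: the logistic map with
# r = 3.7 wanders unpredictably, but the attracting band is fixed by
# the rule itself. Its ceiling is f(1/2), the map's maximum value,
# and its floor is f(f(1/2)), the image of that maximum.
r = 3.7
def f(x):
    return r * x * (1.0 - x)

upper = f(0.5)    # 0.925  -- the largest value the map can produce
lower = f(upper)  # ~0.257 -- the image of that maximum

x = 0.2
for _ in range(100):      # discard the initial transient
    x = f(x)
band = []
for _ in range(10_000):
    x = f(x)
    band.append(x)

print(lower, upper)          # predicted bounds of the attractor
print(min(band), max(band))  # the observed orbit respects them
```

Individual positions within the band remain unpredictable; what the deterministic rule delivers is the envelope, which is exactly the sense in which chaotic randomness is randomness within constraints.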
On the other hand, the fact that the equations we use are deterministic does not necessarily mean that nature is deterministic. The equations are maps, not the reality. It could be that the apparent determinism is an artefact of the particular way in which we have chosen to map reality – in terms of mathematical physics. The idea that our equations are only approximations to the laws that govern the macroscopic world is an important part of Polkinghorne’s position on divine action (see 10.9(iv)(a) and Polkinghorne, 1998a:64-66).

5.29 Conclusion

We have seen that, though Newtonianism remains very influential, in a number of areas modern physics has broken with the Newtonian paradigm. It has given rise to questions of interpretation which relate directly to theology – in the areas of quantum theory and chaos theory (re determinism), the Big Bang origin and final fate of the universe, and the question of evidence for design. A number of these areas will be considered again in our discussion of divine action in Chapter 10.

[1] See 1.17 on Laplace and the determinism of the Newtonian universe.
[2] Except in that Heisenberg’s Uncertainty Principle [5.11 (ii)] sets a limit on the precision of our knowledge of any system.

04 September 2010

Modern cosmology and the rediscovery of purpose?

5.20 Some contemporary cosmological enigmas

Modern cosmology offers us mathematical models of the possible large-scale structure of the universe. Like any other mathematical model, the actual features depend on the numbers that we choose to put in the equations. Thus in 5.17 we noted that different values of the total mass of the universe will give rise to very different cosmological models. In general, the overall structure of many physical systems is strongly influenced by the numerical values of a relatively small number of universal constants (e.g. the gravitational constant). Since the 1970s physicists have become increasingly aware that the physical conditions that enable life to exist are very sensitive to the values of a number of these constants. If they had been only slightly different, life as we know it could not have evolved.

5.20.1 The chemical composition of the universe

As we noted in 5.16.1, the overall chemical composition of the universe was determined by physical conditions during the first seconds of the Big Bang. However, the elements on which life depends (such as carbon, nitrogen, oxygen, sulphur and iron) are the product of nuclear reactions within stars. In both situations the processes by which the chemical elements are formed are governed very precisely by the strengths of four fundamental physical interactions: gravitation, electromagnetism, and the weak and strong nuclear interactions.

If the relative strengths of these forces were different, the resultant universe would also be different. For example, increasing the strong nuclear interaction by 3 per cent relative to the electromagnetic interaction gives a cosmological model in which none of the known chemical elements could form. Conversely, decreasing it by 1 per cent gives a model in which carbon atoms would be unstable. Both scenarios would preclude carbon-based life. Other tiny variations in these forces might have given rise to a universe which was 100 per cent helium or one in which supernova explosions could not occur (since these explosions are thought to be the chief way in which the chemicals necessary for life are ejected from stars, this too would preclude the evolution of life).

5.20.2 ‘Anthropic’ features

The actual chemical composition of the universe is just one way in which the universe appears to have been finely tuned to permit the evolution of life. Until their explanation by inflationary cosmology (5.19), the horizon and fine-tuning problems were also seen in this light by many physicists, and there are still elements within inflation which look fine-tuned (Craig, 2003:158-61). The various unexplained factors that have been perceived as necessary for the emergence of life have come to be known as ‘anthropic’ features (or coincidences).

5.21 Possible responses to the ‘anthropic’ coincidences

There is no obvious physical reason why the parameters mentioned in 5.20.1 should have the observed values. However, very small changes in any of these key parameters would have resulted in a grossly different universe, one in which life as we know it would almost certainly be precluded. The set of life-permitting cosmological models is a vanishingly small subset of the set of all theoretically possible cosmological models. How should the scientist-theologian respond? Murphy and Ellis provide a list of possibilities (1996:49-59).

One response to these enigmas might be to adopt a hard-nosed empiricism and say, ‘So what? It is meaningless to speak of our existence as improbable after the event.’ However, few cosmologists seem prepared to ignore these cosmological coincidences in this fashion.

Another possible response would be to deny the contingency of physical laws and parameters. For example, some physicists speculate about possible developments in physics that would demonstrate that only this precise set of laws and parameters is possible. This is the approach that leads Drees to be cautious about drawing any inferences from the coincidences (1996:269-72).

An important further caution is provided by philosophers who question whether we can really apply the formal measure of probability to a range of possibilities about the universe of which we know so little (see Manson, 2000; McGrew et al., 2003). Nevertheless, however vaguely defined, the coincidences remain striking and seem to call for some explanation.

A third type of response is to invoke some form of anthropic ‘principle’.

5.22 The Weak Anthropic Principle

The approach which does the least violence to conventional modes of scientific thought is to invoke a Weak Anthropic Principle (WAP). Barrow and Tipler describe it thus:
The observed values of all physical and cosmological quantities are not equally probable but they take on values restricted by the requirement that there exist sites where carbon-based life can evolve and by the requirement that the Universe be old enough for it to have already done so. (Barrow and Tipler, 1986:16)
In other words, our existence as observers functions as a cosmological selection effect. There can be no observations without observers. Our observations must satisfy the conditions necessary for our existence.

However, the WAP does not take us very far towards an explanation of the observed coincidences. In conjunction with a conventional Big Bang cosmology, it still gives the impression that our existence is an accident of vanishingly small probability. Thus, in practice, it usually appears in conjunction with a cosmological model that suggests that there is a sense in which all possible universes actually exist. Three such strategies are to be found in the literature.

The first is to extend the closed Big Bang model to permit an endless series of expansions and contractions: the so-called cyclic Big Bang (see 5.18). Each passage through a singularity is supposed to randomise the physical parameters that give rise to the anthropic features. Advocates of this approach argue that in an infinite series of closed universes there will certainly be a subset whose physical features permit the evolution of life, and the function of the WAP is to remind us that only in such an atypical subclass of universes could life evolve. The main difficulty faced by this scenario is justifying the assumption that, while the singularity randomises the laws and constants of nature, it leaves the geometry of spacetime untouched. If, as seems reasonable, passage through a singularity also affects the geometry of the universe, we should expect an open Big Bang after a finite number of cycles, thus putting an end to any hope of an infinite sequence of universes.

A second approach would be to opt for Linde’s version of inflationary cosmology (see 5.19). In an infinite chaotic universe in which an infinite number of ‘bubble’ universes are created by the decay of the false vacuum, we should expect every possible stable state to appear an infinite number of times. Again the WAP is a reminder of the atypical nature of the universe in which we find ourselves.

The third and, currently, most popular strategy for relaxing the uniqueness of our universe is to adopt a many‑worlds interpretation of quantum mechanics [5.13 (iii)]. Again it is sufficient to invoke the WAP to ‘explain’ our atypical cosmos.

5.23 The Strong Anthropic Principle

For some cosmologists the WAP does not go far enough. Their response is to invoke the existence of rational carbon-based life forms as an explanation of the anthropic features of the universe. Barrow and Tipler formulate a general version of the Strong Anthropic Principle (SAP) thus: ‘The Universe must have those properties which allow life to develop within it at some stage in its history’ (Barrow and Tipler, 1986:21).

One version of the SAP is Wheeler’s Participatory Anthropic Principle (PAP). This asserts that the existence of the cosmos and the detailed course of its evolution are dependent on the existence of rational observers at some epoch. In his own words, ‘Observership is a prerequisite of genesis’ (Wheeler, 1977:7). It is essentially an extension of his own particular interpretation of quantum mechanics (see 5.13(i)). All properties, including the very existence of the universe, are brought about by the inter-subjective agreement of observers. Thus the universe may be likened to a self‑excited circuit. Past and future events are so coupled as to obviate any need for a first cause.

Against the PAP, Barrow and Tipler point out that the capacity of human observers to ‘create’ in this way is very limited indeed (Barrow and Tipler, 1986:470). Thus the PAP seems to entail the present or future existence of a community of beings with a ‘higher degree of consciousness’ than our own. They suggest that the process by which the appropriate kind of inter-subjective agreement is reached is the sequential coordination of separate sequences of observations, ‘until all sequences of observations by all observers of all intelligent species that have ever existed and ever will exist, of all events that have ever occurred and will ever occur are finally joined together by the Final Observation by the Ultimate Observer’ (Barrow & Tipler, 1986:471). The theistic implications of such a statement are obvious.[1] However, Barrow and Tipler avoid such a theistic conclusion by modifying the PAP into their Final Anthropic Principle (FAP). They state the FAP in the following terms: ‘Intelligent information-processing must come into existence in the Universe, and, once it comes into existence, it will never die out’ (Barrow and Tipler, 1986:23). In other words, intelligent life-forms have cosmological significance by virtue of their future capacity to understand and manipulate matter on a cosmic scale.

This belief leads them to develop a non-theistic ‘physical eschatology’. Tipler has amplified this further in his The Physics of Immortality (1995).[2] Humankind may not exist forever but human culture will persist, being preserved and developed by self-replicating intelligent machines. The transfer of our cultural software to alternative forms of hardware is one factor in encouraging the indefinite growth of the capacity to process information and to manipulate matter. They envisage the inevitable expansion of human culture to the point where it engulfs the entire cosmos. But let them have the final word:
if life evolves in all of the many universes in a quantum cosmology, and if life continues to exist in all of these universes, then all of these universes, which include all possible histories among them, will approach the Omega Point. At the instant the Omega Point is reached, life will have gained control of all matter and forces not only in a single universe, but in all universes whose existence is logically possible; life will have spread into all spatial regions in all universes which could logically exist, and will have stored an infinite amount of information, including all bits of knowledge which it is logically possible to know. And this is the end. (Barrow and Tipler, 1986:676f.)
And, in a footnote, they add, ‘A modern-day theologian might wish to say that the totality of life at the Omega Point is omnipotent, omnipresent, and omniscient!’ (Barrow and Tipler, 1986:682 note 123).

In spite of the metaphysical tone of much of their discussion, Barrow and Tipler stress that the FAP makes clear predictions about the kind of universe we can expect to observe. Most importantly, they argue that, in order for life literally to engulf the universe, the universe must be closed. It must eventually begin to collapse under its own gravitation toward a final singularity.

5.23.1 Is it science?

Implicit in Barrow and Tipler’s insistence on the predictive power of the Anthropic Principles is a claim that they be accorded scientific status. Predictive capacity is a keystone of Popper’s well-known Criterion of Falsifiability. But what sort of scientific status is being claimed?

The SAP claims that the statement, ‘Observers exist’, in some sense constitutes a scientific explanation of the anthropic features of the cosmos. Two ways of interpreting this are possible.

It may be a claim that rational observers are the efficient cause of the universe. However, this would imply that time reversal is a reality on a cosmic scale and that in a very strong sense intelligent observers have (will have?) created their own reality.

Alternatively, the SAP may be read as a denial of the sufficiency of efficient causes as scientific explanations of certain physical problems. This implication of the SAP has caused some scientists and philosophers to reject it out of hand. However, it should be recalled that it was only with the rise of the mechanical model of the world that efficient causes were accepted as complete explanations in physics. Furthermore, the biological sciences have proved remarkably resistant to this view of scientific explanation.

By contrast, the WAP does not claim to be explanatory: it is merely a selection effect. However, like the SAP, it has a covert content. It is pointless unless it is used in conjunction with a cosmological model which postulates an ensemble of universes. Thus it functions as a way of commending to the scientific establishment certain speculative cosmologies which have so far failed to convince when restricted to more conventional forms of scientific argumentation.

5.24 Anthropic design arguments

It is hard to resist the impression of something – some influence capable of transcending spacetime and the confinement of relativistic causality – possessing an overview of the entire cosmos at the instant of its creation, and manipulating all the causally disconnected parts to go bang with almost exactly the same vigour at the same time, and yet not so exactly co-ordinated as to preclude the small scale, slight irregularities that eventually formed the galaxies, and us. (Davies, 1982:95)[3]
As this quotation from Paul Davies suggests, the apparent fine-tuning of the cosmos is a rich source of material for new forms of design argument for the existence of God. Several such design arguments appear in recent theological (and scientific) literature.

Anthropic design arguments use aspects of cosmic fine-tuning as evidence that the universe was designed to permit (or, in stronger forms, to necessitate) the evolution of rational carbon-based life forms. There can be little doubt that, from the perspective of Christian faith, such features are suggestive of design. However, design arguments based on these features make certain assumptions that may make one cautious about placing too much reliance on them.

To begin with, they assume that the anthropic features of the cosmos are, in themselves, improbable. However, quite apart from the difficulties of assigning probabilities to these parameters, such an assumption is far from proven. As we noted earlier (5.21), it is conceivable that future developments in physics might render these very features quasi-necessary. In such a situation, this entire class of design argument would collapse. There is a hint of the God of the gaps about such arguments:[4] the universe appears to be a highly improbable structure; we cannot give a rational explanation of these cosmological features; therefore, they constitute evidence of an intelligent designer. And, like the God of the gaps, the role of this deity shrinks with the expansion of scientific understanding. This shrinkage is illustrated neatly by the above quotation from Davies, which refers to the horizon problem now explained by inflationary cosmology.

A second assumption of anthropic design arguments is that the ultimate goal of creation is the existence of rational carbon-based life forms (i.e. humankind). This is in agreement with the dominant view of Western Christian theology. However, it is arguable that the anthropocentricity of Western Christianity is derived from sources other than the Christian revelation. For example, instead of presenting humankind as the end of creation, Genesis 1 may be read as insisting that the end of God’s creative activity is his Sabbath rest in the presence of all his creation. A move towards less anthropocentric readings of the Bible (and Christian tradition) is a common feature of contemporary theologies of creation (see Chapter 6). This change of emphasis involves rethinking these arguments and recognising that ‘anthropic’ is an unfortunate term. What is remarkable is that the universe arose in a way fruitful for the formation of carbon-based life. Conditions permitting even the simplest colonies of bacteria to arise would still be extremely striking.

As we discuss in 10.14-10.15, it is important to distinguish arguments about the initial character of the universe, the ‘settings of the dials’, from arguments about how it happened that, roughly ten billion years later, there arose a ‘Goldilocks planet’ like Earth, a rocky planet at the appropriate sort of distance from the appropriate sort of star to allow life to arise. Anthropic arguments as they are usually understood apply to discussion of the initial ‘settings’ of the universe. The major positions at present are:

a) versions of the many-universes position combined with the WAP (5.22)
b) the fine-tuning of the universe attributed to some sort of designer entity.

The choice between these options will tend to be motivated by metaphysical presuppositions about the coherence of design arguments (see Polkinghorne 1998b:18-22).

[1] Barrow & Tipler see analogies between this Ultimate Observer and the God of Berkeleian idealism. An alternative analogy would be with the self-surpassing deity of process thought.
[2] See also 7.7.
[3] At the same time an inference to the existence of ‘some influence capable of transcending spacetime and the confinement of relativistic causality’ is very far from being an inference to belief in any established religion.
[4] See 10.3 for more on ‘the God of the gaps’.

03 September 2010

Modern cosmology and universal history

5.15 The beginnings of scientific cosmology

The first step towards a scientific cosmology was taken in 1823 when the German astronomer Wilhelm Olbers discussed a paradox that has subsequently been associated with his name. He simply asked ‘Why is the sky dark at night?’ The paradox becomes apparent when you calculate the brightness that should be expected given the assumptions that were current about the overall structure of the universe. If the universe is infinitely large, Euclidean (i.e. the shortest distance between two points is a straight line) and stars (or galaxies) are distributed evenly throughout it, the sky should not be dark at all but as bright as the surface of the average star!
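The underlying arithmetic can be sketched in a few lines (an illustration added here, not part of Olbers’ own presentation): each star dims with the square of its distance, but the number of stars at that distance grows at exactly the compensating rate.

```latex
% Flux from a single star of luminosity L at distance r:
%   f(r) = L / (4\pi r^2)
% Number of stars in a thin shell of radius r, thickness dr,
% for a uniform number density n:
%   dN = 4\pi r^2 \, n \, dr
% Total flux received from all shells out to radius R:
F(R) = \int_0^R \frac{L}{4\pi r^2} \cdot 4\pi r^2 n \, dr = n L R
% which diverges as R \to \infty : an infinite, uniform, Euclidean
% universe should have an arbitrarily bright night sky.
```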

This might have been explained by arguing that the universe is relatively young so that light from distant stars has not had time to reach us. However, by the nineteenth century it was widely accepted that the Earth (and, hence, the universe) was very old. Thus a more popular explanation was that the universe consisted of a finite number of stars concentrated into a finite region of an infinite space – the island universe model of cosmology in which the Milky Way (our own galaxy) constituted a unique island of matter and energy in an infinite void.

In the 1920s astronomers were able to show that some nebulae (clouds of luminous gas and dust) were too far away to be part of the Milky Way – they were island universes or galaxies in their own right. One of the discoverers of extragalactic objects, Edwin Hubble, went a stage further. In 1929 he announced the discovery that light from distant galaxies was systematically redder than light from nearby galaxies and that the degree of red shift was proportional to the distance. This provides a simple explanation for Olbers’ Paradox: if light from distant galaxies is redder, it contributes less energy to the overall brightness of the night sky than light from nearby galaxies. Eventually there comes a point where a galaxy is so distant that it is simply invisible (the ‘event horizon’).

The simplest explanation for this red shift is that it is a case of the Doppler effect. This is the phenomenon that causes the pitch of a train whistle to vary as the train approaches or recedes. According to this explanation, the light is reddened because the galaxies are moving away from us. Since the degree of reddening is also a measure of the speed of recession, Hubble was able to show that more distant galaxies are receding from us faster than nearby ones.
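Hubble’s proportionality is now written as v = H0 × d, where H0 is the Hubble constant. A minimal sketch (the round value of H0 below is a modern illustrative figure, not Hubble’s own 1929 estimate, which was several times larger):

```python
# Hubble's law: recession velocity is proportional to distance, v = H0 * d.
# H0 here is an assumed modern round value (~70 km/s/Mpc).

H0 = 70.0  # Hubble constant, km/s per megaparsec (illustrative)

def recession_velocity(distance_mpc):
    """Recession velocity (km/s) of a galaxy at the given distance (Mpc)."""
    return H0 * distance_mpc

# A galaxy twice as far away recedes twice as fast:
print(recession_velocity(100))  # 7000.0
print(recession_velocity(200))  # 14000.0
```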

5.16 The Big Bang

At first sight this observation might suggest that the Earth was located at the centre of some cosmic explosion. However, the fact that all motion is relative implies that observers elsewhere in the universe would make the same observation: the pattern is consistent not with a privileged centre but with an expanding universe. To illustrate this, one might paint spots on a balloon and blow it up. As the balloon expands, the spots recede from each other, and more widely separated spots recede more rapidly.

Extrapolating backwards in time from the observation that the universe is expanding leads to the suggestion that there might have been a time in the distant past (between 10 and 20 billion years ago) when the entire universe was concentrated into a single point. This point would be unimaginably hot and dense. At this ‘t = 0’ the universe would begin to expand rapidly, if not violently. As it expands and cools, matter as we now know it begins to appear. Small variations in the density of that matter lead to condensation and the eventual formation of stars, galaxies and planets. Gradually the mutual gravitational attraction of matter slows the expansion of the universe. The result is the basic picture of the universe as portrayed by modern cosmology.
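The backwards extrapolation can be made concrete: if the expansion rate had always been its present value H0, everything was together roughly 1/H0 ago (the ‘Hubble time’). A minimal sketch, assuming a round modern value for H0:

```python
# Rough age estimate from the expansion rate: t ~ 1/H0 (the 'Hubble time').
# All figures are illustrative assumptions, not taken from the text.

H0_km_s_mpc = 70.0        # assumed Hubble constant, km/s per megaparsec
MPC_IN_KM = 3.086e19      # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

H0_per_s = H0_km_s_mpc / MPC_IN_KM          # H0 in units of 1/s
hubble_time_years = 1.0 / H0_per_s / SECONDS_PER_YEAR

print(f"{hubble_time_years:.2e} years")  # roughly 1.4e10, i.e. ~14 billion years
```

This crude estimate lands comfortably inside the 10–20 billion year range quoted above.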

5.16.1 Evidence for a Big Bang?

Taking the Big Bang as our educated guess about the origin of the universe, we naturally ask what such a universe would look like. Can we deduce potential observations from the hypothesis of a primordial fireball? The answer is ‘yes’.

Since light travels at a finite velocity, observations of distant objects are also observations of conditions in the past. In the distant past, the universe was smaller and therefore denser than it is today. We would therefore expect distant objects to be closer together than those nearby – there is some evidence from radio astronomy that this is the case. We would also expect observations of very distant objects to be consistent with a younger, hotter universe.

In an effort to discredit this theory, Fred Hoyle and some colleagues calculated the chemical composition of a Big Bang universe. This is relatively straightforward since the bulk of the chemical elements would be generated in the first few minutes of violent expansion and cooling. Much to their surprise, the outcome of their predictions was very similar to the observed chemical composition of the universe (about 80 per cent hydrogen and 20 per cent helium – all the rest is a mere trace explicable as the result of supernova explosions at the end of the first generation of stellar evolution).

But the most convincing evidence for the Big Bang came from an accidental discovery in 1965. Two young American astronomers, Arno Penzias and Robert Wilson, were attempting to pioneer astronomy in the microwave part of the spectrum. They picked up a very faint signal which seemed to be coming from every part of the sky. At first they thought it was a problem with the telescope. Only when they had thoroughly checked all their equipment did the full significance of their observation become apparent. In the 1940s, George Gamow had predicted that the Big Bang should have left a trace of itself in the form of microwave radiation spread evenly across the sky. Furthermore, the predicted strength of this radiation was comparable with the observed results.

5.17 The shape of things to come

The fact that mutual gravitational attraction is causing the expansion of the universe to slow down suggests three possible future scenarios, depending on how much matter there is in the universe. The more matter, the greater the gravitational attraction and the more rapidly the expansion of the universe will slow down. If there is sufficient matter, the gravitational attraction will eventually overcome the expansion and the universe will begin to collapse again. This leads to a family of so-called closed cosmological models. If the total mass of the universe is less than that critical mass, expansion will continue indefinitely – an open universe. At the critical mass itself, the expansion will cease in the infinitely far future.
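The dividing line between these scenarios is usually expressed as a critical density, ρc = 3H0²/8πG. A quick sketch of the arithmetic (the value of H0 is an illustrative assumption):

```python
import math

# Critical density separating 'open' from 'closed' universes:
#   rho_c = 3 * H0^2 / (8 * pi * G)
# H0 below is an assumed round value (70 km/s/Mpc), for illustration only.

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
H0 = 70.0 * 1000 / 3.086e22    # 70 km/s/Mpc converted to 1/s

rho_c = 3 * H0**2 / (8 * math.pi * G)
print(f"{rho_c:.1e} kg/m^3")  # about 9e-27: a few hydrogen atoms per cubic metre
```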

But what is the mass of the universe? Direct observations of luminous objects suggest a mass that is only a tiny fraction of the critical mass. This would suggest an open universe. However, studies of galaxy clusters reveal that their masses are much greater than we might expect from their luminosity. In other words, much of the mass of the universe is in the form of ‘dark matter’ that is observable only through its gravitational effects. Estimates of the amount of dark matter vary but many sources suggest that it is sufficient for the actual mass of the universe to be quite close to the critical mass.[1] We discuss the implications of these predictions for theology in 10.20.

5.18 Is the Big Bang a moment of creation?

Strictly speaking the point associated with the Big Bang itself is a singularity – a point at which our laws of physics break down. In itself, this does not imply an absolute beginning. Nevertheless, it is tempting to read the Big Bang as having theological significance. After all, it does seem remarkably like a moment of creation.

This temptation received strong papal endorsement in 1951. Pope Pius XII announced that ‘everything seems to indicate that the universe has in finite times a mighty beginning’. He went on to claim that unprejudiced scientific thinking indicated that the universe is a ‘work of creative omnipotence, whose power set in motion by the mighty fiat pronounced billions of years ago by the Creating Spirit, spread out over the universe’. To be fair, he did also admit that ‘the facts established up to the present time are not an absolute proof of creation in time’.

Such pronouncements are guaranteed to provoke controversy. Even members of the Pontifical Academy of Sciences were divided over the wisdom of the Pope’s remarks. While Sir Edmund Whittaker could agree that the Big Bang might ‘perhaps without impropriety’ be referred to as the Creation, Georges Lemaître, one of the pioneers of the Big Bang theory, felt strongly that this was a misuse of his hypothesis (see 1.15).

Beyond the Christian community there was even greater unease. One of the fundamental assumptions of modern science is that every physical event can be sufficiently explained solely in terms of preceding physical causes. Quite apart from its possible status as the moment of creation, the Big Bang singularity is an offence to this basic assumption. Thus some philosophers of science have opposed the very idea of the Big Bang as irrational and untestable.

One popular way to evade the suggestion of an absolute beginning has been to argue that the universe must be closed. If it will eventually return to a singular point, why should it not then ‘bounce’? This is the so-called cyclic universe (see 5.22). Other astronomers opposed to the Big Bang proposed instead a steady-state theory. Fred Hoyle took a lead in this proposal. As we indicated in 1.15, his motives were explicitly theological. The steady-state theory argued that, in spite of appearances, the universe was infinitely old and did not evolve over time. Although defended by some very able scientists, this theory suffered a number of major setbacks which led to its demise. In order to maintain a steady state in the face of universal expansion it was necessary to postulate the continuous creation of matter from negative energy – ingenious, but contrived. There was the embarrassment of Hoyle’s failed attempt to show that the Big Bang could not account for the chemical composition of the universe (5.16.1). Finally, the steady-state theory was not able to accommodate the new data that appeared – particularly the existence of the microwave background (5.16.1).

5.19 From Big Bang to inflation

The Big Bang theory has been very effective in predicting phenomena that have subsequently been observed by astronomers (5.16.1). However, the theory also raises a number of questions that it is unable to answer.

One of these questions is the so-called ‘horizon problem’. Observations reveal that above a certain scale (about 10^24 metres[2]) the universe is highly uniform in structure. However, this degree of uniformity is an embarrassment to cosmologists. According to relativity theory, there should be no causal connection between points separated by distances greater than c multiplied by t (where c is the velocity of light and t is the age of the universe). Extrapolating this back to the Big Bang suggests that the primordial universe was partitioned into about 10^80 causally separate regions (Barrow and Tipler, 1986: 420). Nevertheless, all these disconnected regions had to expand at the same rate to maintain the observed degree of uniformity!
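The counting behind the 10^80 figure can be formalised roughly as follows (a sketch; L here stands for the size, at time t, of the region that grows into the presently observed universe):

```latex
% Two points can have interacted by time t only if their separation d
% satisfies
d \le c\,t
% so at time t the region is partitioned into roughly
N \approx \left( \frac{L}{c\,t} \right)^{3}
% causally independent cells. Extrapolated back towards the Big Bang,
% Barrow and Tipler's estimate gives N of order 10^{80}.
```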

Equally embarrassing for conventional Big Bang theory is the fact that, although the universe is highly uniform, it is not perfectly uniform. According to current theories, galaxy formation depends upon the existence of small initial irregularities in the Big Bang itself. These are amplified by cosmic expansion to the point where gravitation can begin the process of stellar condensation (Barrow and Tipler, 1986:417). If the initial irregularities are too large, the result is the rapid and widespread formation of black holes instead of stars. If the initial irregularities are sufficiently small, the precise expansion rate of the cosmos becomes critical – too rapid and the irregularities will not be amplified enough for galaxy formation to occur; too slow and the cosmos will be closed with a lifetime too short to permit biological evolution. Evidence of the existence of such irregularities in the early universe has been supplied by the COBE (COsmic Background Explorer) satellite’s observations of small irregularities in the cosmic microwave background. There is no mechanism within conventional Big Bang theory to account for these primordial irregularities. This is often called the ‘fine-tuning problem’.

The widespread expectation among cosmologists that the actual mass of the universe is close to the critical mass (5.17) is a further problem for Big Bang theory in that it offers no explanation of this coincidence. Yet another difficulty arises from the fact that, according to particle physics, the cooling of the early universe after the Big Bang should lead to the production of topological anomalies, particularly magnetic monopoles. Indeed, monopoles should be the dominant matter in the universe. And yet, no monopole has ever been observed, directly or indirectly.

In the 1980s dissatisfaction with these shortcomings of conventional Big Bang theory led Alan Guth to propose an alternative – the inflationary universe theory. According to this theory, in the earliest moments of its existence the region we now think of as the universe contained an excited state known as a false vacuum. This false vacuum possessed a repulsive force that caused this region to expand far more rapidly than would be possible in conventional Big Bang theory. In an unimaginably short period (perhaps 10^-37 s) this region doubled in size at least 100 times. However, due to the peculiar properties of false vacuum, its energy density remained unchanged. In other words, the total energy contained in the region grew enormously. This false-vacuum state is extraordinarily difficult to imagine, but the best non-mathematical account available is provided by Brian Greene (2004: Chs. 9-10).
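The scale of that growth is easy to check: doubling 100 times multiplies the size by 2^100. A one-line sketch:

```python
# 'Doubled in size at least 100 times': the cumulative growth factor is
# 2**100 -- enormous, even though (on the figure quoted above) the whole
# episode may have lasted only about 10^-37 seconds.

doublings = 100
growth = 2 ** doublings
print(f"{growth:.1e}")  # 1.3e+30
```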

The period of inflation must have been extremely short because the false vacuum is unstable. A false vacuum ‘decays’ into other forms of matter, converting its energy into a tremendously hot gas of elementary particles; essentially the same conditions as are predicted by conventional Big Bang theory for the period before the formation of the first atoms of hydrogen and helium. However, the inflationary phase means that the universe was originally much smaller than had previously been thought (perhaps a billion times smaller than a proton); small enough for it to have become uniform before inflation began. This primordial uniformity would then have been preserved during the inflationary phase and beyond, thus solving the horizon problem. Again unlike conventional Big Bang theory, the inflationary approach leads to an explanation of why the actual mass of the universe is close to the critical mass. Inflationary models may also account for the fine-tuning problem, if the irregularities in the cosmic background are understood as quantum fluctuations blown up by inflation (Greene, 2004:305-10).

Guth’s original version of inflation was later shown to be unsatisfactory but this approach has spawned an entire family of inflationary models. One popular version is that developed by Andrei Linde. In this ‘chaotic’ version, different regions of the false vacuum decay at different times; each region becoming a separate ‘bubble universe’. However, because the false vacuum as a whole is expanding at an exponential rate, it is outgrowing the decay process. In other words, it is spawning ‘bubble universes’ ad infinitum. For comment on the relation of inflationary models to the problem of God’s action, see 10.15. Greene is one of those working on a ‘superstring’ account of the early universe, using descriptions in eleven dimensions. This approach shows some promise, but also reflects just how difficult it is to devise experimental tests of phenomena which only occur in the very special conditions of the early universe (see Greene, 2004: Chs 12-13).

[1] At the moment of writing (August 2004) the prevailing view is that the so-called ‘cosmological constant’ which would accelerate the universe’s expansion (see Guth, 1997:37–42 for the history of this term) may be non-zero, and sufficiently large to guarantee an ever-expanding universe.
[2] The scientific convention for writing very big and very small numbers is used here. 10^2 is 10 multiplied by itself, or a hundred – 10^6 is ten multiplied by itself four more times, or a million. 10^24 is 10 multiplied by itself twenty-four times, or a million million million million. Numbers less than 1 are written with negative indices – so 10^-9 would be one-billionth.

02 September 2010

Theology and the new physics: the rediscovery of the observer

5.10 The observational basis of quantum theory

Einstein’s explanation of the null result of the Michelson–Morley experiment led to a radical revision of our understanding of space and time. If anything, the explanation for Lord Kelvin’s other ‘cloud’ – the spectrum of black-body radiation – has led to even more radical changes in our understanding of the world.

5.10.1 The ultraviolet catastrophe

In line with Kelvin’s warning, the first crack in the edifice of classical physics came with attempts to explain the colour of hot objects using classical physics and electromagnetism. The light from these objects is a mixture of different frequencies (colours). Observations reveal that such objects have a distinctive spectrum (pattern of energy distribution at different frequencies). However, attempts to explain this in classical terms failed abjectly – they predicted instead that the amount of energy would tend towards infinity at the high-energy (violet) end of the spectrum – an ultraviolet catastrophe.

Enter Max Planck. In 1900 he suggested that physics should abandon the assumption that electromagnetic energy is continuous and wavelike. If, instead, energy can only be absorbed and emitted in discrete packets (or quanta), theory can be made to fit observations exactly. However, while his suggestion certainly gave the right answer, its abandonment of a cherished assumption of classical physics gave it an air of contrivance that led to its relative neglect for several years.
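Planck’s proposal amounts to the relation E = hν: energy is exchanged in packets whose size is set by the frequency. A minimal illustration (the two example frequencies are assumptions, chosen to represent red and violet light):

```python
# Planck's hypothesis: radiation of frequency f is absorbed and emitted
# only in packets (quanta) of energy E = h * f.

h = 6.626e-34  # Planck's constant, joule-seconds

def quantum_energy(frequency_hz):
    """Energy (J) of a single quantum at the given frequency."""
    return h * frequency_hz

red = quantum_energy(4.3e14)     # red light, ~4.3e14 Hz (illustrative)
violet = quantum_energy(7.5e14)  # violet light, ~7.5e14 Hz (illustrative)
print(red, violet)  # a violet quantum carries more energy than a red one
```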

5.10.2 The photoelectric effect

Another anomaly that concerned physicists at the beginning of the century was the ability of light to eject electrons from metal. The principle is simple – the light imparts energy to electrons which then effectively ‘evaporate’ from the surface of the metal. The classical analogy with the evaporation of water suggests that some degree of evaporation should occur regardless of the frequency of the light, provided it is sufficiently intense. In reality, there is a clear threshold frequency, which varies from metal to metal, below which the effect will not occur.

It was Einstein who, in 1905, rehabilitated Planck’s quantum theory and explained this anomaly by assuming that the energy imparted by the light is packaged (quantised) in a manner that is related to the frequency of the light rather than spread evenly over the wavefront. Furthermore, he assumed that the way in which electrons absorbed that energy is also quantised – so that they can only acquire the energy necessary to escape if the light is of sufficiently high frequency. Light of frequencies lower than this threshold has no effect regardless of the intensity of the light source.
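Einstein’s account reduces to the relation KE = hν − φ, where the ‘work function’ φ of the metal sets the threshold frequency φ/h. A sketch, using an assumed work function of about 4.3 eV (roughly that of zinc):

```python
# Photoelectric effect: an ejected electron carries kinetic energy
#   KE = h * f - phi
# Below the threshold f0 = phi / h, no electrons escape, however intense
# the light. The work function below (~4.3 eV) is an illustrative assumption.

h = 6.626e-34    # Planck's constant, J*s
eV = 1.602e-19   # joules per electronvolt
phi = 4.3 * eV   # assumed work function of the metal

def ejected_energy(frequency_hz):
    """Kinetic energy (J) of ejected electrons, or None below threshold."""
    ke = h * frequency_hz - phi
    return ke if ke > 0 else None

threshold = phi / h
print(f"threshold: {threshold:.2e} Hz")  # about 1.04e15 Hz (ultraviolet)
print(ejected_energy(5.0e14))            # visible light: None -- no emission
```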

5.10.3 Collapsing atoms and spectral lines

In 1896 Antoine-Henri Becquerel had discovered an entirely new physical phenomenon – radioactivity. This growth area in physics rapidly led to the realisation that atoms were not simply inert billiard balls. On the contrary, they have an internal structure. By the end of the first decade of the twentieth century sufficient research had accumulated for physicists to be able to begin making models of this structure. It was clear that atoms had a very small, dense, positively charged nucleus surrounded by negatively charged electrons.

Ernest Rutherford proposed a planetary model for the atom – electrons in orbit around a nucleus like planets around a star. The fly in the ointment was electromagnetism. An electric charge moving in a circle emits energy. If electrons were classical particles emitting energy in this way, they would very rapidly dissipate all their energy and fall into the nucleus.

A solution was offered by a young Danish physicist, Niels Bohr, whose model of the atom was mentioned in 1.9. Again the key was the abandonment of continuity in favour of quantisation. Bohr simply ruled out the possibility of electrons occupying every possible orbit. Instead they are confined to certain discrete energy levels. Although outlandish, his suggestion had the added attraction that it explained another anomaly – the fact that the light emitted by hot gases is emitted only at certain frequencies (spectral lines).
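Bohr’s quantised levels for hydrogen take the form En = −13.6 eV/n², and a spectral line appears only at a frequency matching the gap between two levels. A brief sketch of that arithmetic:

```python
# Bohr's model of hydrogen: electrons occupy discrete levels
#   E_n = -13.6 eV / n^2
# Light is emitted only at frequencies matching the energy gap between two
# levels -- hence the sharp spectral lines of hot gases.

h_eV = 4.136e-15  # Planck's constant in eV*s

def level(n):
    """Energy (eV) of the nth Bohr level of hydrogen."""
    return -13.6 / n**2

def line_frequency(n_upper, n_lower):
    """Frequency (Hz) of the photon emitted in the n_upper -> n_lower jump."""
    return (level(n_upper) - level(n_lower)) / h_eV

# The n=3 -> n=2 jump gives the red 'H-alpha' line:
print(f"{line_frequency(3, 2):.2e} Hz")  # about 4.6e14 Hz
```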

5.10.4 When is a particle a wave?

The above phenomena indicated that under certain circumstances light can behave in a particle-like manner rather than its usual wave-like manner. The next step in the development of a quantum view of the world was due to an aristocratic French physicist, Prince Louis de Broglie. In his 1924 doctoral thesis, de Broglie proposed that, under certain circumstances, particles might be observed behaving in a wave-like manner. His prediction was confirmed in 1927 by Clinton Davisson and Lester Germer. They found that low-energy electrons fired at a nickel surface were deflected into a series of low and high intensity beams. In other words, diffraction – a characteristic property of waves – occurred.
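De Broglie’s relation is λ = h/p. For the slow electrons Davisson and Germer used (around 54 eV), the wavelength works out comparable to the spacing of atoms in a nickel crystal, which is why diffraction shows up. A sketch, assuming the 54 eV figure:

```python
import math

# de Broglie's relation: a particle of momentum p has wavelength
#   lambda = h / p

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
eV = 1.602e-19   # joules per electronvolt

def de_broglie_wavelength(kinetic_energy_ev):
    """Wavelength (m) of an electron with the given kinetic energy (eV)."""
    p = math.sqrt(2 * m_e * kinetic_energy_ev * eV)  # non-relativistic momentum
    return h / p

print(f"{de_broglie_wavelength(54):.2e} m")  # about 1.7e-10 m -- atomic spacing
```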

Similarly, if you take a beam of electrons and pass it through a pair of slits (a classical wave experiment), you get diffraction and interference – properties characteristic of a wave rather than a particle. If you reduce the intensity of the electron beam to a single electron at a time, the detector on the other side of the slits will still gradually accumulate a trace that looks like an interference pattern (see Figure 5.1). Explaining this in classical terms is impossible – if the electrons went through one slit or the other, the pattern would look quite different.[1]

5.11 The quantum revolution

By the early 1920s, these anomalies had grown into a gaping hole in the fabric of physics. At the same time, the explanations proffered by physicists such as Einstein and Bohr held out the promise of a radical reconstruction. The task of integrating these insights into a coherent theory of sub-atomic physics fell to Werner Heisenberg and Erwin Schrödinger. Although they were working independently, their approaches were sufficiently similar to be formally merged into quantum mechanics. This new theory constituted a radical shift in the conceptual foundations of physics. We mention here three key aspects of quantum mechanics.

(i) Wave–particle duality

De Broglie’s prediction of electron diffraction highlights one of the fundamental features of the new theory – wave–particle duality. This is one of the properties enshrined in the fundamental equation of quantum mechanics, the Schrödinger wave equation – so-called because it takes a mathematical form characteristic of classical wave equations. However, this equation does not refer to physical waves but rather to probabilities, e.g. the probability of finding an electron in one location rather than another. The final outcome may be determinate (an electron in a particular location), but the probability distribution of the possible outcomes has the mathematical form of a wave. This peculiar feature of a very successful equation has led to the intractable problem of how we should interpret quantum mechanics (see 5.13).
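For reference (the standard textbook one-dimensional form, not quoted from this text), the time-dependent Schrödinger equation and the probabilistic reading of its solution are:

```latex
i\hbar \, \frac{\partial \psi(x,t)}{\partial t}
  = -\frac{\hbar^{2}}{2m} \frac{\partial^{2} \psi(x,t)}{\partial x^{2}}
    + V(x)\,\psi(x,t)
% The wave function \psi is not itself a physical wave; rather
P(x,t)\,dx = \lvert \psi(x,t) \rvert^{2}\,dx
% gives the probability of finding the particle between x and x + dx.
```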

(ii) Uncertainty

Uncertainty is one of the best-known implications of quantum mechanics. In 1927 Heisenberg argued that key physical quantities (e.g. position and momentum) are paired up in quantum theory. As a result, they cannot be measured simultaneously to any desired degree of accuracy. Attempts to increase the precision of one measurement result in less precise measures of the other member of the pair.

Take an electron, for example. We might try to determine its position by using electromagnetic radiation. Because electrons are so small, radiation of very short wavelength would be necessary to locate it accurately. However, shorter wavelengths correspond to higher energies. The higher the energy of radiation used, the more the momentum of the electron is altered. Thus any attempt to determine the location accurately will change the velocity of the electron. Conversely, techniques for accurately measuring the velocity of the electron will leave us in ignorance about its precise location. Looked at conservatively, this is an epistemological issue: quantum uncertainty is a principle of ignorance inherent in any measuring technique used on such a small scale. However, Heisenberg himself took a more radical view – he saw this limitation as a property of nature rather than an artefact of experimentation. This radical interpretation of uncertainty as an ontological principle of indeterminism implies that quantum mechanics is inherently statistical – it deals with probabilities rather than well-defined classical trajectories. Such a view is clearly inimical to classical determinism. Equally clearly, this is a metaphysical interpretation that goes beyond what is required by the mathematics of the Uncertainty Principle.
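In symbols, Heisenberg’s relation is Δx·Δp ≥ ħ/2. A quick illustration of its scale (the atom-sized confinement region below is an assumed figure):

```python
# Heisenberg's uncertainty relation: delta_x * delta_p >= hbar / 2.
# Confine an electron to an atom-sized region (~1e-10 m) and the minimum
# spread in its velocity is already hundreds of kilometres per second.

hbar = 1.055e-34  # reduced Planck constant, J*s
m_e = 9.109e-31   # electron mass, kg

delta_x = 1e-10                 # assumed position uncertainty, m
delta_p = hbar / (2 * delta_x)  # minimum momentum uncertainty
delta_v = delta_p / m_e         # corresponding spread in velocity

print(f"{delta_v:.1e} m/s")  # about 5.8e+05 m/s
```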

(iii) Radical interdependence

In spite of his crucial role in the early development of quantum mechanics, Einstein was very uneasy about its implications and, in later years, organised a rearguard action against it. His aphorism ‘God does not play dice’ highlights the depths of his distaste for quantum uncertainty. His strongest counter-argument was to call attention to a paradoxical implication of quantum mechanics now known as the Einstein–Podolsky–Rosen (EPR) Paradox.

Take, for example, a pair of protons whose quantum spins cancel out. Now separate them and measure the spin of one proton. Because they were paired, they share a single combined wave function. Measuring the spin of one proton ‘collapses’ that wave function and thereby determines the spin of the other. It appears that a measurement in one place can have an instantaneous effect on something that may be light years away.

For Einstein this was proof that quantum mechanics must be incomplete. To him this result only made sense if the spins were determinate (but unknown to us) before the protons were separated. In this case, measurement would merely tell what was always the case. But, according to the orthodox interpretation of quantum mechanics, it is not merely a matter of ignorance. The spin is not determined until it has been measured. In other words, the pair of protons cannot be regarded as separate entities until the measurement has been made.

Some years later, the physicist John Bell turned this paradox into a testable prediction that now bears his name – Bell’s Inequality. This is an equation which should be true if two principles (assumed by Einstein and his colleagues in formulating the EPR Paradox) hold in the world:
The principle of reality: that we can predict a physical quantity with certainty without disturbing the system, and
The locality principle: that a measurement in one of two isolated systems can produce no real change in the other.
Taken together, these principles imply an upper limit to the degree of co-operation that is possible between isolated systems. In 1982 a team of physicists led by Alain Aspect at the Institut d’Optique in Orsay, near Paris, demonstrated experimentally that this limit is exceeded in nature. In other words, our physical descriptions of the world in which we live cannot be both real and local in the above sense.
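The upper limit is most often stated in the ‘CHSH’ form of Bell’s Inequality: for any real, local theory the quantity S below cannot exceed 2, whereas quantum mechanics predicts it can reach 2√2. A sketch of the quantum prediction (the correlation function and detector angles are the standard textbook choices, not taken from this text):

```python
import math

# CHSH form of Bell's Inequality. For any 'real and local' theory,
# |S| <= 2. Quantum mechanics predicts spin correlations
#   E(a, b) = -cos(a - b)
# for a spin-singlet pair, and with the detector angles below S reaches
# 2*sqrt(2) -- the degree of co-operation the Aspect experiments found
# actually occurs in nature.

def E(a, b):
    """Quantum-mechanical spin correlation for detector angles a, b (radians)."""
    return -math.cos(a - b)

a, a2 = 0.0, math.pi / 2              # first observer's two settings
b, b2 = math.pi / 4, 3 * math.pi / 4  # second observer's two settings

S = abs(E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2))
print(S)  # 2.828..., i.e. 2*sqrt(2) > 2: the local-realist limit is broken
```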

What this means in practice is a greater emphasis on describing quantum-mechanical systems as a whole. This runs counter to the tendency of classical physics towards ‘bottom-up thinking’ – treating systems as collections of separate entities, and trying to reduce their properties to the individual properties of the simplest possible components. The quantum world, which deals with the simplest entities we know, seems to resist this reduction – it is, in Karl Popper’s famous phrase, ‘a world of clouds’ as well as ‘clocks’ (quoted in Polkinghorne, 1991:44). ‘Bottom-up’ thinking has served science extremely well; we simply note here that it has its limitations.[2]

5.12 Shaking the foundations

The quantum view of the world departs from classical assumptions in four main ways.

1.         Determinism has given way to an emphasis on probabilities. We simply do not have access to enough information to make deterministic predictions. And this is widely held to be a feature of the world rather than an observational limitation.
2.         Reductionism has given way to a more holistic approach to physical systems.
3.         Closely allied to this, locality (the impossibility of information being propagated instantaneously) has given way to correlation-at-a-distance.
4.         Most basic of all, the classical assumptions of continuity and divisibility (that between any two points there is an infinite number of intermediate values) have given way to quantisation – for certain physical quantities, the range of permissible values is severely restricted.

5.13 Schrödinger’s cat and the meaning of quantum theory

The EPR Paradox described in 5.11 (iii) introduces us to one of the basic problems of quantum mechanics – the relationship between measurement and reality. This is highlighted by a famous thought-experiment involving a hapless cat. The cat is in a box together with a canister of poisonous gas connected to a radioactive device. If an atom in the device decays, the canister is opened and the cat dies. Suppose that there is a 50–50 chance of this happening. Clearly when we open the box we will observe a cat that is either alive or dead. But is the cat alive or dead prior to the opening of the box?

Interpretation (i) Quantum orthodoxy (Copenhagen interpretation)

The dominant view in quantum mechanics is that quantum probabilities become determinate on measurement – that the wave function is collapsed by the intervention of classical measuring apparatus. This means that the cat is neither alive nor dead until the box is opened. The cat is in an indeterminate state.

This interpretation is usually allied with a tendency to extreme instrumentalism (see 1.7). On such a view the probabilities generated by the Schrödinger wave equation do not correspond to any physical reality. There simply is no reality to be described until an act of measurement collapses the wave function. Quantum mechanics is merely a useful calculating device for predicting the possible outcomes of such acts of measurement.

In spite of its dominance in the textbooks, this interpretation is hardly satisfactory. To begin with, it may be regarded as proposing a dualism in physical reality: two worlds – an indeterminate quantum world and a determinate classical world. Then there is the problem of what constitutes classical measuring apparatus. At what level does the wave function actually collapse?

The act of measurement that collapses the wave function cannot be limited to scientific instruments. After all, why should we assume that our scientific measurements are solely responsible for collapsing the wave function? This would give rise to a most peculiar world – one that was indeterminate until the evolution of hominids.

Some physicists, e.g. Wigner and Wheeler, have identified the classical measuring apparatus of the Copenhagen interpretation with consciousness. If so, they must be using a much broader definition of consciousness than is usual. What level of consciousness would be needed to make something determinate? Is the cat sufficiently conscious to determine the outcome of the experiment? Would earthworms do? What about viruses? The effect of pursuing this line of inquiry is to move towards a form of panpsychism – the doctrine that every part of the natural world, no matter how humble, is in some sense conscious!

An alternative might be to postulate a transcendent world observer – a divine mind whose observations collapse the wave functions on our behalf. In effect this would be the quantum-mechanical version of Bishop Berkeley’s idealism.[3] This is memorably summarised in a couple of limericks:

There was once a man who said ‘God
Must think it exceedingly odd
If he finds that this tree
Continues to be
When there’s no one about in the quad.’

And the reply:

Dear Sir, Your astonishment’s odd:
I am always about in the quad.
And that’s why the tree
Will continue to be,
Since observed by Yours faithfully, God.

The problem with this attractive solution to the measurement problem is that it proves too much. Invoking a divine observer leads to the question of why there should be any quantum measurement problem at all. Why should anything be left indeterminate for us to determine by our measurements? Is God only interested in those aspects of creation that are above a certain size?

Returning to the classical measuring apparatus, perhaps we should put the emphasis on ‘classical’ rather than ‘measuring’ – stressing not so much our intervention in the system as a transition from the world of the very small, in which quantum principles operate, to the everyday world of classical physics. This neo-Copenhagen interpretation has the merit that it avoids the absurdities of the consciousness-based approaches. However, we are still faced with the difficulty of identifying an acceptable transition point. How small is small? (There is now evidence that a molecule of fullerene containing around seventy carbon atoms can exhibit wave–particle duality.) One suggestion is that we choose the level at which physical phenomena become so complex that they are irreversible. Another, from Roger Penrose, is that gravity provides the key (Penrose 1994: Ch. 6; 1997: Ch. 2).

Interpretation (ii) Hidden variables (neo-realism)

Einstein was not alone in finding this interpretation of quantum mechanics objectionable. A few physicists have persisted in arguing that the statistical nature of quantum mechanics implies that it is only really applicable to ensembles of particles (just as an opinion poll is only meaningful if a reasonable sample of the population has been polled). In other words, quantum mechanics is an incomplete description of reality. They maintain that underlying this level of indeterminacy there is an objective foundation.

The best-known hidden-variables theory is that of the physicist and philosopher David Bohm (see Bohm, 1980). What Bohm did was to distinguish between the quantum particle, e.g. an electron, and a hidden ‘guiding wave’ that governs its motion. Thus, in this theory electrons are quite clearly particles. When you perform a two-slit experiment, they go through one slit rather than the other. However, their choice of slit is not random but is governed by the guiding wave, resulting in the interference pattern that is observed.

The main weakness of Bohm’s theory is that it looks contrived – which it is. It was deliberately designed to give predictions that are in all details identical to conventional quantum mechanics. His aim was not to make a serious counterproposal but simply to demonstrate that hidden-variables theories are indeed possible.

It is sometimes suggested that hidden-variables theories have been ruled out by the Aspect experiment (5.11 (iii)). This is a misunderstanding of the experiment. What it did was to show that attempts to explain quantum phenomena cannot be both deterministic and local. Hidden-variables theories, with their underlying determinism, must be non-local, maintaining the existence of instantaneous causal relations between physically separated entities. Such a view contradicts the simple location of events in both classical atomism and relativity theory. It points to a more holistic view of the quantum world. Indeed Bohm himself stressed the holistic aspect of quantum theory in his later years, after his conversion from Marxism to theosophy.
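The trade-off the Aspect experiment exposes can be put in numbers via Bell's inequality in its CHSH form. For the spin singlet state the quantum correlation between measurements at analyser angles a and b is E(a, b) = −cos(a − b); any local deterministic model must keep the CHSH combination within ±2, but quantum mechanics exceeds that bound. The sketch below uses the standard textbook angles, which are an assumption of this illustration rather than details given in this chapter:

```python
import math

# CHSH test for the spin singlet: E(a, b) = -cos(a - b).
# Local deterministic (hidden-variable) models require |S| <= 2;
# quantum mechanics predicts |S| = 2*sqrt(2) at these analyser settings.

def E(a: float, b: float) -> float:
    """Quantum correlation between spin measurements at angles a and b."""
    return -math.cos(a - b)

a1, a2 = 0.0, math.pi / 2              # Alice's two settings
b1, b2 = math.pi / 4, 3 * math.pi / 4  # Bob's two settings

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
# abs(S) = 2*sqrt(2) ≈ 2.83 > 2: no local deterministic account will do
```

Since experiment vindicates the quantum value, a hidden-variables theory can save determinism only by giving up locality — which is Bohm's position.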

Interpretation (iii) The many worlds interpretation

The third main class of interpretations starts from the assumption that scientific theories ought to be self-interpreting. The Schrödinger wave equation in quantum mechanics is smooth, continuous and deterministic. There is nothing in it that corresponds to the collapse of the wave function.

In 1957 Hugh Everett surprised his more conventional colleagues by proposing that the Schrödinger wave equation as a whole is an accurate description of reality. There is no collapse of the wave function. Whenever there is a choice of experimental outcomes, all the possibilities are realised. Somewhere Schrödinger’s cat will be really dead and somewhere it will be really alive. With each decision at the quantum level the universe splits into a number of isolated domains, each corresponding to a different outcome. In one universe the cat dies, in another it lives.
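The proliferation of worlds is easy to quantify: if each binary quantum event splits every existing branch in two, then n events yield 2ⁿ branches. A crude sketch (the outcome labels are illustrative only):

```python
from itertools import product

# Everett-style branching, crudely sketched: each two-way quantum
# event doubles the number of branches, so n events give 2**n.

def branches(n_events: int) -> list:
    """All outcome histories after n_events two-way quantum 'choices'."""
    return list(product(("decays", "survives"), repeat=n_events))

histories = branches(3)
# 8 branches: one universe per possible history of three events
```

Even three events give eight co-existing worlds; the count for the quantum events in a single cup of coffee is beyond astronomical — which is why Ockham's razor weighs so heavily against this interpretation.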

Most physicists find this extremely unattractive. One of the most venerable assumptions of the scientific method is Ockham’s razor – entia non sunt multiplicanda praeter necessitatem; i.e. entities are not to be multiplied beyond necessity. In practice this leads to a very strong aesthetic bias in favour of the simplest possible explanation.

Only quantum cosmologists beg to differ. They attempt to apply quantum mechanics to the entire universe. Clearly this leaves no room for a separate classical measuring apparatus. In this context, a many-universes approach such as was described above may seem an attractive non-theistic alternative to the notion of a transcendent world observer. But one wonders which option requires the larger act of faith!

5.14 Quantum consciousness

In classical mechanics, with its close association with Cartesian dualism, the physical world was neatly divorced from the realm of consciousness. As far as classically minded materialists were concerned, the latter was a mere side effect of biochemical interactions. However, as noted above, the dominant Copenhagen interpretation of quantum mechanics envisages a greatly expanded role for the observer. Granted the traditional association of temporal perception with consciousness, the rediscovery of time by modern physics may point in the same direction.

Such considerations have given rise to the suggestion that consciousness itself may be interpreted as a quantum phenomenon. Perhaps the best-known proponent of a quantum explanation of consciousness is Roger Penrose (1989; 1994; 1997). He rejects the currently popular view that human consciousness is essentially computational (that minds are analogous to computer programs) because, in his opinion, this model fails to account for intuitive problem-solving. The brain must be non-algorithmic (i.e. it does not operate by mechanically following a fixed set of procedures in the manner of a computer). Further he argues that classical physics is inherently algorithmic in its nature. Thus consciousness is not explicable in classical terms.

The obvious candidate for a non-algorithmic process in physics is the quantum-mechanical collapse of the wave function. Penrose suggests that the brain uses quantum collapse to solve problems non-algorithmically. But by what means? He pins his hopes on structures called microtubules that occur within cells, speculating that quantum effects within the microtubules may be co-ordinated across groups of neurones to provide the basis for such intuitive processes. However, as many physicists and neurophysiologists have pointed out, this is highly speculative – a weakness that Penrose himself acknowledges.

[1] For a discussion of the two-slit experiment in terms of quantum theory see Davies, 1990:108–11.
[2] For a brief discussion of ‘bottom-up’ and ‘top-down’ thinking see Peacocke (1993: 53–55). See also our ‘note on emergence’ in 6.11.1. We take up the question of ‘top-down causation’ in relation to divine action in 10.9(iii)[c].
[3] George Berkeley (1685–1753) – a philosopher famous for his apparent denial of the reality of any external world.