Process Physics, time, and consciousness:

Nature as an internally meaningful, habit-establishing process

(September 1, 2016)

By Jeroen B. J. van Dijk

([email protected])

Table of Contents

1. Introduction   1
   1.1 Getting to know Process Physics in terms of time, life and consciousness   4

2. Time   7
   2.1 From the process of nature to the geometrical timeline   7
       2.1.1 Aristotle's teleological physics   7
       2.1.2 Galileo's non-teleological physics   12
       2.1.3 The deficiencies of the geometrical timeline   19
   2.2 From the geometrical timeline to time-based equations   21
   2.3 From time-based equations to physical laws   24
       2.3.1 The flawed notion of physical laws   27
   2.4 From geometrization to the timeless block universe   29
   2.5 Arguments against the block universe interpretation   34
       2.5.1 The real world out there is objectively real and mind-independent (or not?)   37
       2.5.2 Events in nature reside in a geometrical continuum (or not?)   43
       2.5.3 Relativity of simultaneity means that our experience of time is illusory (or not?)   48

3. Doing physics in a box   56
   3.1 The Newtonian paradigm   56
       3.1.1 The exophysical aspect of the Newtonian paradigm   57
       3.1.2 The decompositional aspect of the Newtonian paradigm   60
   3.2 Measurement and information theory   69
       3.2.1 Looking at measurement in a purely quantitative, information-theoretical way   74
       3.2.2 The Modeling Relation: relating empirical data to data-reproducing algorithms   76
       3.2.3 From information acquisition to info-computationalism   83
       3.2.4 Information, quantum and psycho-physical parallelism   86
       3.2.5 From psycho-physical parallelism to measurement as a semiosic process   92
   3.3 From doing physics in a box to doing physics without a box   98

4. Life and consciousness   102
   4.1 The evolution of the eye   105
   4.2 From info-computationally inspired neo-Darwinism to 'lived-through subjectivity' as a relevant factor in evolution   108
       4.2.1 From the info-computational view to information as mutualistic processuality   112
       4.2.2 From the non-equilibrium universe to the beginning of life as an autocatalytic cycle   113
       4.2.3 From environmental stimuli to early subjective experience   117
       4.2.4 From early photosensitivity to value-laden perception-action cycles   118
   4.3 Perceptual categorization, consciousness and mutual informativeness   126
       4.3.1 Integration, differentiation and the mind-brain's mutual informativeness   131
       4.3.2 Self-organization and the noisy brain   132
       4.3.3 Self-Organized Criticality and action-potentiation networks   135

5. Process Physics: A biocentric way of doing physics without a box   140
   5.1 Requirements for doing physics without a box   140
   5.2 Process Physics as a possible candidate for doing physics without a box   143
   5.3 Process Physics: going into the details   146
       5.3.1 Foundationless foundations, noisiness, mutual informativeness, and lawlessness   148
       5.3.2 Process Physics and its roots in quantum field theory   152
       5.3.3 Process Physics and its stochastic, iterative update routine   154
       5.3.4 From pre-geometry to the emergence of three-dimensionality   160
       5.3.5 Process Physics, intrinsic subjectivity and an inherent present moment effect   167

6. Overview and Conclusions   175

Appendix A: Addendum to §2.5.2 'Events in nature can be pinpointed geometrically (or not?)'   181

References   184

List of figures

2-1: Bronze ball rolling down an inclined plane (with s-t diagram)   14
2-2: The Earth-Moon system in a temporal universe and a block universe   31
2-3: Minkowski space-time diagram   50
3-1: Simplified universe of discourse in the exophysical-decompositional paradigm   70
3-2: Rothstein's analogy (between communication and measurement)   75
3-3: Steps towards Robert Rosen's Modeling Relation   78
3-4: Universe of discourse with von Neumann's object-subject boundary   87
3-5: From background semiosic cycle of preparation, observation and formalization to foreground data and algorithm   94
4-1: Conscious observer as an embedded endo-process within the greater embedding omni-process which is the participatory universe   103
4-2: Subsequent stages in the evolution of the eye   105
4-3: Stationary and swarming cortical activity patterns in non-REM sleep and wakefulness   130
4-4: Varying degrees of neuroanatomical complexity in a young, mature, and deteriorating brain   134
5-1: Schematic representation of interconnecting nodes   156
5-2: Artistic visualization of the stochastic iteration routine   159
5-3: Tree-graphs of large-valued nodes Bij and their connection distances Dx   161
5-4: Emergent 3D-embeddability with 'islands' of strong connectivity   162
5-5: [Dk-k]-diagram   164
5-6: Fractal (self-similar) dynamical 3-space   165
5-7: Gray-Scott reaction-diffusion model   166
5-8: Fractal pattern formation leading to branching networks   170
5-9: Seamlessly integrated observer-world system with multiple levels of self-similar, neuromorphic organization   172
5-10: Structuration of the universe at the level of supragalactic clusters   173

Abstract

Ever since Einstein's arrival at the forefront of science, mainstream physics has liked to think of nature as a giant 4-dimensional spacetime continuum in which all of eternity exists all at once in one timeless block universe. Accordingly, much to the dismay of more process-minded researchers, the experience of an ongoing present moment is typically branded as illusory. Mainstream physics has a hard time, though, providing a well-founded defense for this illusoriness of time. This is because physics, as an empirical science, is itself utterly dependent on experience to begin with. Moreover, if nature were indeed purely physical – as contemporary mainstream physics wants us to believe – it's quite difficult to see how it could ever give rise to something so explicitly non-physical as conscious experience. On top of this, the argument of time's illusoriness becomes even more doubtful in view of the extraordinary level of sophistication that would be required for our conscious experience to achieve such an utterly convincing, but – physically speaking – pointless illusion. It's because of problems like these that process thought has persistently objected to this 'eternalism' of mainstream physics. Just recently, physicist Lee Smolin brought up some other major arguments against this timeless picture in his controversial 2013 book 'Time Reborn'. And although he passionately argues that physics should take an entirely different direction, he admits that he has no readily available roadmap to success.

Fortunately, however, over the last 15 years or so, a neo-Whiteheadian, biocentric way of doing foundational physics, namely Reg Cahill's Process Physics, has been making its appearance on the scene. According to Process Physics, nature is a routine-driven or habit-based process, rather than a changeless world whose observable phenomena are governed by eternally fixed and highly deterministic physical laws. Counter to the currently prevailing view of such a law-abiding natural world, Process Physics suggests that the natural universe comes into actuality from an initially undifferentiated, orderless background of dispositional activity patterns which is driven by a habit-establishing, iterative update routine. In the Process Physics model, all such habit-establishing activity patterns are 'mutually in-formative' as they are actively making a meaningful difference to (i.e. 'informing') each other. This mutual in-formativeness among inner-system activity patterns will thus actively give shape to ongoing structure formation within the system as a whole, thereby renewing it through stochastic (hence, 'novelty-infusing') update iterations. In this way, the system starts to evolve from its initial featurelessness and then branches out to higher and higher levels of complexity, thus leading to a neural network-like structure formation.

Because of this noise-driven branching behavior, the process system can be thought of as habit-bound with a potential for creative novelty and open-ended evolution. Furthermore, three-dimensionality, gravitational and relativistic effects, nonlocality, and near-classical behavior are spontaneously emergent within the system. Also, the system's constantly renewing activity patterns bring along an inherent present moment effect, thereby reintroducing time in terms of the system's ongoing change as it goes through its cyclic iterations. As a final point, subjectivity – in the form of mutual informativeness – is a naturally evolving, innate feature, not a coincidental, later-arriving side-effect.

1. Introduction

This paper is intended as a follow-up to Lee Smolin's Time Reborn (2013). In this controversial, yet well-received book he tried to argue against contemporary mainstream physics' firm belief in the unreality of time. Starting with the basics, we'll first follow the historical path from 1) the geocentric teleological physics of Aristotle, who thought of time as an abstracted measure of motion; 2) the heliocentric non-teleological physics of Galileo, who turned time into a quantifiable one-dimensional coordinate line; and 3) the mechanistic physics of the Newtonian-Laplacean clockwork universe, with its concepts of absolute space as a 3-dimensional geometrical volume that exists independently of its contents (i.e. the physical constituents of the 'clockwork universe') and absolute time as an externally running chain of intervals that pass by at the same rate for everyone and everything in the entire universe. Then, at the end of this list, we'll find what is today almost unanimously considered to be one of the great high points in the history of physics and probably the absolute climax in the history of thinking about time, which is 4) Einstein's Special Theory of Relativity, which, due to Minkowski's block universe interpretation, led to the now well-established belief that nature is actually a giant 4-dimensional spacetime continuum in which all of eternity exists together at once as a huge static and timeless expanse. That is, following the line of reasoning in the block universe interpretation, which argued that the relativity of simultaneity[1] necessarily involved the unreality of time passing by, many physicists became convinced that our experience of time had to be illusory. Supported by the wave of public enthusiasm that followed after Arthur Eddington's confirmation of Einstein's prediction of the bending of starlight around the Sun (1919), this belief in the unreality of time grew in popularity until it reached the status of a logical necessity – a rock-solid truth.

[1] The situation that different observers moving at different speeds may not experience the same well-separated events in the same order, so that it cannot be confirmed whether these events are actually simultaneous or not.

Mainstream physics has a hard time, though, providing a truly watertight defense for this illusoriness of time. This is firstly because physics, as an empirical science, is itself utterly dependent on experience – since it is instrument- as well as sensory-based. Secondly, if our experience of time were indeed illusory, it's still exceptionally difficult to see how and why it should ever have evolved at all. After all, in the context of the prebiotic universe – which is, according to current mainstream belief, an entirely inanimate and purely physical world – the emergence of such an extraordinarily sophisticated and convincing illusion like our conscious experience of time would be utterly pointless and inexplicable. It would thus be entirely impossible to explain in a logically acceptable way how our conscious illusion of an ever-changing present moment could ever relate to such a becomingless whole as the timeless block universe (Čapek 1976, 521). Or, to put it more graphically, it would become impossible to explain why we are not living in the reign of George III (Ibid.; McTaggart 1964, 160) – or any other past or future ruler for that matter. Last, but not least, then, although the block universe interpretation may indeed seem well-structured and crystal-clear at first sight, on closer scrutiny its arguments in favor of the unreality of time are not as firm and sound as one might hope for.[2]

[2] While relativity of simultaneity seems to lead logically and inescapably towards the negation of the passage of time, it is by no means an absolute fact (Čapek 1976, 508). That is, the relativity of simultaneity will only occur under certain specific circumstances, namely it requires 1) well-separated events that 2) must have come into actuality before they can ever 3) be detected by observers that are moving relative to one another with a significant enough difference in velocity.

It's because of reasons like these that process thought has always been opposed to the portrayal of nature as a purely physical and timeless realm. In hindsight, we can now say that its minority opinion has forced process thought into a long-lasting uphill battle. The empirically gathered evidence of mainstream physics had proven so useful and convincing, and its mathematics so esthetically pleasing, that any criticism didn't stand a chance if it were based on philosophical grounds alone. Just recently, however, leading physicist Lee Smolin managed to breathe some new life into the debate that – so it was commonly thought – had already been won by mainstream physics long ago. In his critically acclaimed book Time Reborn, he persuasively argues against the existence of eternally valid laws of nature. That is, he claims that it is a mistake to think that such local 'laws' could 'govern' the behavior of the universe at its largest scale as well:

"My argument starts with a simple observation: The success of scientific theories from Newton to the present day is based on their use of a particular framework of explanation invented by Newton. This framework views nature as consisting of nothing but particles with timeless properties whose motions and interactions are determined by timeless laws. The properties of the particles, such as their masses and electric charges, never change, and neither do the laws that act on them. This framework is ideally suited to describe small parts of the universe, but it falls apart when we attempt to apply it to the universe as a whole. All the major theories of physics are about parts of the universe … When we describe a part of the universe we leave ourselves and our measuring tools outside the system. We leave out our role in selecting or preparing the system we study. We leave out the references that serve to establish where the system is. Most crucially for our concern with the nature of time, we leave out the clocks by which we measure change in the system. [But, w]hen we do cosmology, we confront a novel circumstance: It is impossible to get outside the system we're studying when that system is the entire universe." (Smolin 2013, xxiii)

So, what Smolin particularly objects to is any attempt to extrapolate our conventional way of doing 'physics in a box' to the universe at large. Indeed, it has of course been a historically hugely successful method 1) to isolate a small subsystem from the rest of the universe; then 2) to try to extract empirical data from it; and then, finally, 3) to put together a 'lawful' physical equation on the basis of these data so that the behavior of this isolated subsystem can be represented and predicted with great precision. But, as noted above by Smolin, this way of doing physics in a box necessarily requires that we leave ourselves, our preparatory actions, and also our entire measurement instrumentarium outside the system to be observed, something that cannot be done when trying to attend to nature as a whole. In fact, this way of doing physics in a box – or what we will later on also refer to as 'exophysical-decompositional physics'[3] – inevitably leads to more such impracticalities, all kinds of paradoxes, the dubious belief that the natural universe is entirely timeless, and unanswerable questions, such as 'why these laws?' and 'why did the universe start out with the initial conditions from which it has grown into its current state?' (cf. Smolin 2013, 97-98). To find a way out of these problems, Smolin suggests that we should drop the idea of eternally valid 'laws of nature' and exchange it for something else. That is, following in the footsteps of process philosopher Charles Sanders Peirce (1839-1914), he argues that nature is not being governed by predetermined laws, but develops habitually instead. Inspired by this idea, Smolin becomes convinced that, in order for physics to get rid of its problems with time and its unanswerable questions, it should take an entirely different direction, although he admits that he has no readily available road map to success.

[3] The term 'exophysical' refers to an external, non-participating observer looking out onto an allegedly entirely physical world. Moreover, 'decompositional' refers to the nature-dissecting acts of decomposition that have to be performed before physics as we know it can be done in the first place (cf. van Dijk 2016).


Fortunately, however, some 15 years ago or so, a neo-Whiteheadian, neurobiologically inspired, biocentric way of doing physics without a box, namely Reg Cahill's[4] Process Physics, arrived on the scene, and ever since it has managed to grow into a serious alternative to contemporary mainstream physics. As such, Process Physics aims to model the universe from an initially orderless and uniform pre-space by setting up a stochastic, self-referential modeling of nature. In Process Physics, all self-referential and initially noisy activity patterns are 'mutually in-formative' as they are actively making a meaningful difference to each other (i.e. 'in-forming' or 'actively giving shape to' each other). Due to this mutual in-formativeness, the initially undifferentiated activity patterns will act as 'start-up seeds' that become engaged in self-renewing update iterations. In this way, the system starts to evolve from its initial featurelessness and then 'branches out' to higher and higher levels of complexity – all this according to roughly the same basic principles as a naturally evolving neural network. Because of this self-organizing branching behaviour, the process system can be thought of as habit-bound with a potential for creative novelty and open-ended evolution. Furthermore, nonlocality, three-dimensionality, gravitational and relativistic effects, and (semi-)classical behaviour are spontaneously emergent within the system. Also, the system's constantly renewing activity patterns bring along an inherent present moment effect, thereby reintroducing time as the system's 'becomingness'. As a final point, subjectivity – in the form of 'mutual informativeness' (which is used in Gerald Edelman's and Giulio Tononi's extended theory of neuronal group selection to explain how higher-order consciousness can emerge) – is a naturally evolving, innate feature, not a coincidental, later-arriving side-effect.

[4] Professor of physics at Flinders University in Adelaide (Australia), and winner of the 2010 Gold Medal of the Telesio-Galilei Academy of Science.

1.1 Getting to know Process Physics in terms of time, life and consciousness

In order to properly introduce Process Physics, a proper outline of our contemporary mainstream physics and its problems must first be given. Therefore, in Chapter 2, named 'Time', we'll discuss the most important technicalities having to do with how mainstream physics deals with time. To be more specific, we'll first take a look at the role of time in Aristotle's teleological physics. After that, the more recent history of time as a geometrical dimension will be sketched, starting with Galileo's one-dimensional timeline, and then going from Newton's absolute space and time to Einstein's 4-dimensional spacetime, which motivated Minkowski to develop his interpretation of nature as an entirely static and timeless block universe.

Then, Chapter 3, on 'doing physics in a box', aims to give a comprehensive analysis of the basic workings of our contemporary mainstream physics, together with an outline of some of its intrinsically problematic features, which, in the context of this paper, are its denial of the reality of time and its claim that consciousness must ultimately be illusory.

If, indeed, consciousness is not illusory, as process philosophy, phenomenology, and the system sciences like to argue, then it would be interesting to sort out how it arose in living organisms, and how it enables these organisms to get to know the natural world in which they live. This and more will be discussed in Chapter 4, called 'Life and Consciousness', where the main topics of interest will be the emergence of life through autocatalysis and the coming into actuality of higher-order consciousness. Since subjectivity is here seen as the process of sense-making as the organism goes through its cyclic perception-action loops,[5] one of the main conclusions is that consciousness is not confined to some elusive center of subjectivity buried deep within the brain, but extends well into the organism's environment. That is, a sense of self and world gets to be sculpted by the process of sense-making as it runs its course within the seamlessly interconnected 'organism-world system.'

[5] 'Perception-action loops' is actually shorthand for 'sensation-valuation-motor activation-world manipulation' loops.

Both the emergence of life and that of consciousness are particularly relevant to how Process Physics hangs together, because both of them can be explained as self-organizing processes that come into actuality from a primordial background of initially undifferentiated processuality. This has a striking resemblance to how Process Physics works. That is, Process Physics is a biocentric way of doing physics without a box, which introduces a non-formal, self-organizing modeling of nature. As such, the Process Physics model is not based on law-like physical equations as in mainstream physics, but on a stochastic iteration routine that reflects the Peircean principle of precedence (Peirce 1992, 277). By modeling nature with the help of a 'recursive routine' rather than 'timeless laws', Process Physics manages to set up a dynamic network of dispositional relations through which higher-order relational patterns can emerge from an initially uniform and undifferentiated background (cf. Cahill et al. 2000, 192-193). In so doing, the Process Physics model will gradually start to exhibit many features also found in our own universe: non-locality; emergent three-dimensionality; emergent relativistic and gravitational effects; emergent semi-classical behavior; creative novelty; habit formation; mutual informativeness;[6] an intrinsic present moment effect with open-ended evolution, and more.

[6] Please note that, in cognitive neuroscience, mutual informativeness is also characteristic of the process of subjectivity (cf. Edelman and Tononi 2000, 126-130).
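To give a first, purely illustrative impression of what such a stochastic iteration routine looks like, here is a toy sketch, loosely patterned on the iterator reported in the Process Physics literature (cf. Cahill et al. 2000); the matrix size, parameter values, and initialization below are assumptions made for this sketch only, not part of the model as published:

```python
import numpy as np

# Toy stochastic, self-referential update routine, loosely patterned on the
# Process Physics iterator B -> B - alpha*(B + B^-1) + noise. All numbers
# here (n, alpha, steps, noise scale) are illustrative assumptions.
rng = np.random.default_rng(0)
n, alpha, steps = 8, 0.05, 100          # even n keeps an antisymmetric B invertible

B = rng.normal(scale=0.1, size=(n, n))
B = B - B.T                              # start from small antisymmetric noise

for _ in range(steps):
    noise = rng.normal(scale=0.1, size=(n, n))
    noise = noise - noise.T              # keep the update antisymmetric
    B = B - alpha * (B + np.linalg.inv(B)) + noise   # self-referential + stochastic

# Large |B[i, j]| values mark strongly 'mutually in-formative' node pairs,
# i.e. incipient relational structure emerging from featureless noise.
print(np.round(np.abs(B), 2))
```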

2. Time

Although time has played a major role in physics ever since the early 1600s, when Galileo started to specify it in terms of chronologically arranged intervals along a geometrical, unidirectional line, there is still no common agreement on what it actually is (cf. Davies 1995, 279-283; Davies 2006, 6-8; Smolin 2013, 240-242). Despite the impressive theoretical and technological progress over the last four hundred years, physicists and philosophers alike continue to be troubled by the elusiveness of time and our incomplete understanding of it. So, for the sake of clarity, let's first try to reconstruct how the concept of time was historically introduced into physics, and see how it developed over the years.

2.1 From the process of nature to the geometrical timeline

With his detailed and careful study of the behavior of falling objects, Galileo Galilei (1564-1642) basically established the blueprint model for our contemporary mainstream physics. In those days, geometry – the ancient mathematical discipline dedicated to the specification of abstract shapes and lengths – was the most prominent piece of equipment in the scientific toolbox. Therefore, the accounts of motion that were formulated by Galileo's contemporaries would typically be based on geometry, if not directly then at least indirectly. Accordingly, all things having to do with motion first had to be looked at through the filter of geometry, for instance, by comparing traveled distances of thrown projectiles or the depth of impact pits left behind by falling objects.

2.1.1 Aristotle’s teleological physics

For a long time, the most influential account of motion had been that of Aristotle (384-322 BCE), who, in his time, had introduced a way of doing physics that was very much end-directed and purpose-laden, and thus in fine agreement with his teleological philosophy:

"Aristotle's vision for physics … depended on [the] division of sub- and superlunar cosmic domains. Five basic elements existed in Aristotle's cosmos: earth, water, fire, air and aether. Each element had a 'natural' motion associated with it. Earth and water 'naturally' sought movement toward the Earth's center. Air and fire naturally rose toward the celestial domain. The aether was a divine substance constituting the heavenly spheres. These 'natural' inclinations seemed self-evident to Aristotle and did not require separate tests. Only many centuries later would a new breed of scientists such as Galileo (in the late sixteenth and early seventeenth centuries) demand that a hypothesis such as natural motion be validated through experiments." (Frank 2011, 47)

Along these lines, Aristotle's explanations for what seemed to be the two most obvious forms of motion – 'free fall' and what may be called 'continued travel' – were very much end-directed. In the Aristotelian framework, free fall would be explained by appeal to the striving of earthly matters to move towards their natural endpoint, namely the heart of the universe, which was, according to the then prevailing wisdom, the center of the Earth. Continued travel, on the other hand, was in Aristotle's view the result of so-called 'antiperistasis.' This is the phenomenon through which the motion of a projectile like a spear or arrow is continued as compressed air coming from the front of the projectile fills in the empty gap that it leaves behind, thus pushing the projectile forwards with a constant thrust.

Both forms of motion were later given a new, improved interpretation by Galileo, but for now, we'll keep our focus on Aristotle's view of nature a little bit longer. In line with the teleological principle that earthly matters and water are naturally driven toward Earth, he thought that the speed of something falling down would actually depend on the amount of earth-seeking elements it contained, or, in other words, on its weight. He had been led to think so, amongst others, by the observation that heavier objects, when dropped in water, sink to the bottom observably faster than lighter ones do. From this he concluded (wrongly, as it turned out later on) that the rate of falling had to be proportional to the weight of the object and inversely proportional to the viscosity of the medium. Put short: heavy objects had to fall faster than lighter objects. And although we now think of this belief as quite naïve and over-impulsive, it still managed to persist for a staggeringly long period of time – almost two millennia.

Nonetheless, over all those years, Aristotle's accounts of the two forms of motion – free fall and continued travel – still had to endure a fair amount of skepticism. That is, some critical minds found that there was something wrong with Aristotle's teleology. After all, the problem with those purpose-based accounts of motion is that they merely re-describe what is found to be the case. For instance, the explanation that 'things fall towards the ground because they strive towards Earth' basically amounts to a tautology – saying the same thing twice in different words. Unfortunately, this tautological reasoning did not only occur in Aristotle's explanation of free fall. That is, since it was based on the teleological principle of horror vacui,[7] Aristotle's antiperistatic explanation of continued travel was found to suffer from the same weakness. As soon as one tries to explain that air will fill up any gap left behind by an arrow because 'nature abhors a vacuum,' it will immediately become apparent that this line of reasoning is just as trivial and pointless as the above tautological explanation of natural motion.

[7] The term horror vacui, typically paraphrased in English as 'nature abhors a vacuum,' is often attributed to Aristotle, and refers here to 'antiperistasis', the alleged phenomenon through which the vacuum behind a projectile in flight is filled up by air coming from the front tip of the projectile.

At the time when Galileo first started to think about motion, one of the two components in Aristotle’s account of motion – the phenomenon of ‘antiperistasis’ – had already been replaced by the idea of ‘impetus’ or ‘impressed force’: “After leaving the arm of the thrower, the projectile would be moved by an impetus given to it by the thrower and would continue to be moved as long as the impetus remained stronger than the resistance, and would be of infinite duration were it not diminished and corrupted by a contrary force resisting it or by something inclining it to a contrary motion.” (Buridan as translated in: Zupko 2005, 107)

Notwithstanding this revision by Buridan, and some others that preceded him, the general framework in the pre-Galilean era was still very much Aristotelian. Despite the earlier-mentioned criticism with regard to the tautology of purpose-based teleological arguments, Aristotle's account of free fall, based on the belief that heavy weights naturally outpace all lighter ones when falling to Earth, was still very much the established view. The falseness of this belief, however, managed to remain unnoticed for almost two thousand years, because no rigorous testing was being performed and also because Aristotle and his followers unintentionally threw up a smoke screen that basically prevented them from taking a better look. Specifically, Aristotle's 'rule', saying that heavy weights would always hit the ground sooner than lighter ones, caused his physics to become specifically geared towards comparative proportions. That is, in line with the naïve belief in faster-falling heavy objects, Aristotle's rule was converted into a neat quantitative expression, relating weight and speed to each other in a proportional way:

W₁ / W₂ = V₁ / V₂,  with W = weight and V = speed        (2.1-1)

On this account, for example, a stone twice as heavy as another should fall twice as fast.

The technicalities that came with this expression arguably kept Aristotle and his followers busy enough to overlook the fact that it was actually quite wrong. Initially, these ratios were only used in an after-the-fact manner. But, with time, it became apparent that falling objects started from a resting position and had to pick up their pace upon release, instead of immediately dropping down at full speed. This is when it was decided that the speed of the objects would have to depend on the distance covered. Accordingly, it was concluded that falling objects would increase their speed the deeper they fell.[8] And what is particularly noteworthy in this case is that the buildup of speed was not being linked with the lapse of time, but with the distance covered.

[8] Please note that the average speed still had to obey Aristotle's so-called 'law of motion': V ∝ F/R (with V = speed, F = motive force, and R = resistance of the medium), which expressed Aristotle's belief that the rate of falling was proportional to weight and inversely proportional to the density of the medium. So, it was commonly agreed upon that air resistance and viscosity of water would indeed slow down falling objects, thus to a certain extent affecting the rate at which the falling speed would build up.

To be able to understand the motives behind the linkage of speed with covered distance rather than with time elapsed, we will first have to look into Aristotle's thoughts on how movement and time were related to each other:

"… because movement is continuous so is time; for (excluding differences of velocity) the time occupied is conceived as proportionate to the distance moved over. Now, the primary significance of before-and-afterness is the local one of 'in front of' and 'behind'. There it is applied to order of position. But since there is a before-and-after in magnitude, there must also be a before-and-after in movement in analogy with them. But there is also a before-and-after in time, in virtue of the dependence of time upon motion. Movement, then, is the objective seat of before-and-afterness both in movement and in time; but conceptually the before-and-afterness is distinguishable from movement. Now, when we determine a movement by defining its first and last limit, we also recognize a lapse of time; for it is when we are aware of the measuring of motion by a prior and posterior limit that we may say time has passed [italics added]. And our determination consists in distinguishing between the initial limit and the final one, and seeing that what lies between them is distinct from both; for when we distinguish between the extremes and what is between them, and the mind pronounces the 'nows' to be two – an initial and a final one – it is then that we say that a certain time has passed; for that which is determined either way by a 'now' seems to be what we mean by time. … Accordingly, when we perceive a 'now' in isolation … then no time seems to have elapsed, for neither has there been any corresponding motion. But when we perceive a distinct before and after, then we speak of time; for this is just what time is, the calculable measure or dimension of motion with respect to before-and-afterness. Time, then, is not movement, but that by which movement can be numerically estimated." (Aristotle 1957, 385-387)

So, in Aristotle's view one may speak of 'time' when attending to the duration aspect of motion, whereas one is dealing with 'movement' when the displacement aspect is at stake. Accordingly, change in place – i.e. movement, or locomotion – can be expressed by displacement as well as duration. However, as can be understood from the italicized segment in the quote above, Aristotle thought that time could become apparent only by virtue of the occurrence of movement. Nonetheless, time has a special relevance in Aristotle's framework in that anything that changes can only change in time. But because it is impossible to point to time in the way that one would point to an actual thing, time was considered a derived, abstract notion. As such, time was thought to be dependent on movement, rather than being fundamental to it. So, all in all, the belief in 1) faster-falling heavier objects, and 2) the abstractness and motion-dependence of time, together with 3) the unavailability of precise measuring instruments, and 4) the therewith associated lack of rigorous testing procedures, caused the Peripatetic school of Aristotle to commit the error of linking the increase of speed with covered distance and not with elapsed time.


2.1.2 Galileo’s non-teleological physics

Thanks to a lot of hard work, dedication and an especially inquisitive mind, Galileo was able to find an entirely non-teleological alternative to Aristotle's physics that both solved the problem of the tautological arguments and rectified the incorrect linkage between increasing speed and traveled distance. Instead of looking only at the change and differences in weights, motions and speeds of objects, he started to look specifically at the rate at which change occurred. That is, he found out that motion could be catalogued more easily by recording not only the covered distance and descended height of falling bodies, but also the rate at which these quantities would change. To be able to do so, Galileo devised many ingenious experiments in which he tried to link change in spatial coordinates to a standard measure of duration. In this way, the total amount of change in position could not only be expressed in terms of standard spatial intervals, but it could also be measured in a chronological manner by counting up the number of standard units of duration between the initial and the final position. For instance, by monitoring the changing water level in the reservoir of a water clock that was running at the same time as he would release some heavy mass at the top of an inclined plane, Galileo could chart the duration of the object's descent in terms of the water level markings (this would include split times as well as the total amount of time). In turn, the covered distances at pre-marked split times could be registered by looking at the distance markings that were carved along the ramp's downward slope.

As it turns out, though, it is quite difficult to get a reliable and consistent reading in subsequent runs of such an experiment. This is because the water level changes rather slowly in comparison to the falling body's changing height. Therefore, later on, a more elaborate inclined plane experiment was introduced. While it is not entirely certain if this experiment – in the exact form as described below – was actually performed by Galileo, it at least combines two of his earlier innovations that had a great impact on the practice of physical experimentation: 1) the downhill ramp, with its inclined plane for rolling down bronze balls; and 2) the free-swinging pendulum, which could be used as a relatively precise indicator of the rate of time.

Now, in this enhanced inclined plane experiment, a bronze ball was made to roll down the ramp, which had a pendulum hanging from the backside of the platform from which the balls were released. Because of the added pendulum, the ramp could effectively also double as a makeshift metronome. That is, although the ramp's main function was to serve as a straight-lined 'speed track' for the bronze ball, its second job was to subdivide the total time of descent into equally long intervals. This feature was achieved by placing a series of moveable warning bells at strategic positions along the slope of the ramp (see Fig. 2-1). The precise spots where these bells should be placed could be found through synchronization with the swings of the pendulum hanging beneath the platform located on top of the slope. In this way, after having been released from the upper platform of the ramp, the passing bronze ball would set off the bells one after the other in an even, steady rhythm.
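As a minimal numerical sketch of why the bells end up where they do – assuming the time-squared pattern documented in Table 1 below, s = k·t² with k ≈ 33 'points' per squared time unit – bells that are to ring at equal time intervals must sit at distances proportional to the square numbers, so the gaps between them widen as the odd numbers:

```python
# Hypothetical bell placement for the ramp: if descent follows s = k * t**2,
# bells meant to ring at equal time intervals t = 1, 2, 3, ... must be placed
# at distances proportional to the squares 1, 4, 9, ...
k = 33                                    # 'points' per squared time unit (Table 1)
positions = [k * t**2 for t in range(1, 9)]
gaps = [b - a for a, b in zip([0] + positions, positions)]
print(positions)   # [33, 132, 297, 528, 825, 1188, 1617, 2112]
print(gaps)        # [33, 99, 165, ...] -> odd multiples of k: the bells spread out
```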

Table 1: Distance and time values in Galileo's inclined plane experiment.
Measurements leading to Galileo's time-squared law of fall.[9] See the end of Section 2.2 for how the value of the proportionality constant k relates to current metric measures (also cf. McDougal 2012, 20).

time tx                distance s        t² = s / k          t²
(equal intervals       (as measured      (calculated         (tx squared)
Δt = tx – tx-1)        in 'points')      with k = 33)
t1 = 1                 33                1.00                1² = 1
t2 = 2                 130               3.94                2² = 4
t3 = 3                 298               9.03                3² = 9
t4 = 4                 526               15.94               4² = 16
t5 = 5                 824               24.97               5² = 25
t6 = 6                 1192              36.12               6² = 36
t7 = 7                 1620              49.09               7² = 49
t8 = 8                 2104              63.76               8² = 64

[9] The data shown here can be found in Galileo's original working papers on folio 107v [with 'folio' meaning sheet, and v standing for 'verso', which is Latin for 'back side', as opposed to r, which stands for 'recto' (i.e. front side)]. The working papers are kept in Florence, in the Biblioteca Nazionale Centrale (the Central National Library). The 160 surviving sheets of the working papers are now bound as Volume 72 of the Galileo manuscripts – also known as 'Codex 72' or 'Manoscritto Galileiano 72'.


Fig. 2-1: Bronze ball rolling down an inclined plane (with s-t diagram)
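As a quick plausibility check (a minimal sketch, not taken from the source), one can recompute the t² = s/k column of Table 1 from the folio 107v distances and confirm that the measured distances track the square numbers to within a few percent:

```python
# Recompute Table 1's t^2 = s/k column from the folio 107v distances
# and compare against the exact squares 1, 4, 9, ...
distances = [33, 130, 298, 526, 824, 1192, 1620, 2104]   # in 'points'
k = 33.0                                                  # proportionality constant

for t, s in enumerate(distances, start=1):
    ratio = s / k                             # should approximate t**2
    deviation = 100 * (ratio - t**2) / t**2   # percent deviation from the exact square
    print(f"t = {t}: s/k = {ratio:6.2f} vs t² = {t**2:2d} ({deviation:+.1f}%)")
```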

So, by introducing a 'normalized' measure of time (in which, for instance, the equalized time stretches between the ramp's warning bells were taken as countable standard units), Galileo could list this measure next to the distance and/or height covered by the moving body. In so doing, he could then put together a chronologically ordered record as in Table 1 (see also Smolin 2013, 31-36). Now, as it turned out, the notion of a timeline could then be derived by using the analogy between (a) a measuring tape stretching straight from the moving body's starting position to its end point, and (b) the time interval between initial and final readings on a water clock (or even smaller intervals, such as those between the pendulum-calibrated warning bells attached to the slope of the ramp). In other words, as had happened before with the notion of space,[10] time was thus abstracted from the process of nature as a linear phenomenon.

[10] Euclid's magnum opus on geometry had already been published in Ancient Greece around 300 BC (cf. Byrne 1847).

On the whole, there seems to be no other experiment that illustrates better how distance and duration can be made to pair up – no better demonstration of how time can be characterized as a geometrical phenomenon. That is, it demonstrates most clearly how the initially unlabeled process of nature is actually converted into (a) movable physical bodies, (b) a spatial coordinate system and (c) a linear time axis. By submitting the world to his nature-dissecting gaze and then filtering away all that was irrelevant to him, Galileo basically put nature through the wringer to end up with a fully geometrized 'stage setting' in which the act of doing physics could be performed. Like this, then, it finally became possible to put the mathematically predictable maneuvers of physical objects on display within a timeline-equipped spatial coordinate system – thus implicitly suggesting that nature truly worked in a geometrical way.

But on close enough scrutiny, the abstract geometrical timeline could only be given a meaningful role through the artificial isolation of a 'physical object' from its local neighborhood, and all of Galileo's other acts of abstraction that led to his first basic version of doing 'physics in a box'. It is only through all these acts of abstraction that what we like to call 'motion' can be 'soaked loose' from the process of nature as if it were the change of an object's position 'through' space and time. But, despite first appearances, object, neighborhood, measuring instrument, observer, etc., are eventually not separate entities, but rather inseparably connected process-structures deeply ingrained within the greater embedding whole of the universe at large. So, it is because of these acts of simplifying abstraction from the process of nature that our current way of doing physics in a box could be made possible at all. And it is only in this abstracted, mathematized world that nature's processuality can be translated into a moveable dot running along a chain of equally long time stretches.[11] This very method of doing physics in a box, however, has persuaded many of us to indeed think of time as a geometrical exponent of reality that is needed to get from one event to the next. However, this belief is a typical example of the fallacy of misplaced concreteness.

[11] The equally long time stretches could be the intervals between 1) the ramp's warning bells, 2) the water level markings of the water clocks, 3) the sand level markings on an hourglass, 4) the completed swing periods of a pendulum, or 5) any other indication of time units that can be used in an experiment.

To put it even more strongly, it's already dubious to even speak of 'time' as having an autonomous existence. Even Einstein himself was not shy to bring this to the fore:

“Time and space are modes by which we think, and not conditions in which we live.” (Einstein, as quoted in: Forsee 1963, 81)

Remarkably, in opposition to the timeless and non-processual block universe that Minkowski proposed on the basis of Einstein’s Special Theory of Relativity, this same point is used in process philosophy to endorse an explicitly processual interpretation of nature: “ ... time is not in itself a fundamental reality. It is an abstraction from the process. … What really exists is a succession of events. They are by their very nature related to one another as past and future or as contemporary. These relationships are temporal. But there is not something to be called time apart from these actual relations.” (Cobb 1986, 161)

In other words, time should not be reified. Galileo’s geometrical timeline model should not be interpreted as something that has an actual existence alongside nature’s events (Griffin 1986, 152). Moreover: “ ... all the features of time … are rooted in the intrinsic reality of events, in the process by which they become concrete, or determinate, for it is here that the event includes the past events into itself and it is this inclusion that makes time irreversible. Accordingly, any approach that commits the fallacy of misplaced concreteness by equating the extrinsic side of the events with their complete reality will necessarily miss the roots of time in those events.” (Griffin 1986, 13)

Hence, looking at all this in the soberest way we can, at the end of the day we will have to admit that geometrical timelines result from the subjective choice to abstract from the process of nature. Our long-standing Western tradition of 'doing physics in a box'[12] actually depends to a great extent on such idealizing abstraction.

[12] Doing physics in a box: this term, coined by Lee Smolin in his 2013 book Time Reborn, refers to the long-established practice of isolating some aspect of nature (or system of interest) from its surroundings and then trying to empirically identify and mathematically capture the regularities in its behaviour.

Despite this, it cannot be denied that what has grown to become our present-day physics has known many tremendous successes over the years. On balance, the vast majority of those successes have come with impressive concrete consequences. After all, many previously unexplained aspects of nature are now considered familiar and well-understood phenomena as their behaviour can be traced and predicted mathematically to a near-perfect degree of precision. However, even perfect empirical agreement between some target system of interest and its mathematical model does not mean that the mathematical model is an exact representational twin version of the system in question. In fact, our nature-dissecting physics can only deal with observables, not beables[13] – mainstream contemporary physics primarily has to do with the mathematization of phenomena and typically likes to evaluate nature in terms of instrument-based empirical data – backed up by sense data, if needed. Appropriately, then, physical equations should not be considered to refer directly to nature-in-itself; for all practical purposes, their designated source of information is to be found in the responses of observation systems.

[13] The term beables, coined by John Bell (1988, 174), refers to those existents purported to make up the unobservable realm 'beneath' our observation-based phenomenal world.

Unlike an airplane in low-altitude flight, we cannot dive below the 'radar' of our own phenomenal awareness to check if the so-called 'real-world-out-there' exists exactly as we experience it. From early life, we have gradually learned to cut up our otherwise undivided natural world into various subsystems (particularly target, subject, and symbol systems, as well as their respective constituent parts). But despite our thus developed nature-dissecting mindset, this will never bring us conclusive proof that these systems do indeed exist just as we infer them to be (van Dijk 2016). Therefore, we should realize very well that physics – from what it was in Galileo's hands to what it has become in the present day and age – has until now been no more than a collection of instrument-based, mathematically expressed phenomenologies made possible by some well-considered acts of abstraction (see also Sections 3.1.2 and 3.2).

Following the same argument, we should also recognize that the concepts of space and time, as used in contemporary mainstream physics, are ultimately phenomenologies: instrument-enabled, geometrically expressed metaphors or figures of speech for how nature appears to us, conscious nature-dissecting observers. Although they help break down the process of nature into geometrical dimensions and their contents, space and time should not be thought to exist as such. In the words of Alfred North Whitehead:


“This passage of events in time and space is merely the exhibition of the relations of extension which events bear to each other, combined with the directional factor in time which expresses that ultimate becomingness which is the creative advance of nature.” (Whitehead 1919, 63)

Moreover:

"We habitually muddle together this creative advance, which we experience and know as the perpetual transition of nature into novelty, with the single-time series which we naturally employ for measurement." (Whitehead 1920, 178)

Here, the word 'naturally' should perhaps be replaced by 'routinely.' After all, by following Galileo's first example of presenting such a single-time series in a table (as in Table 1, above), we are entirely taking for granted all the idealizations and simplifications that in fact enabled him to present time as a unidirectional geometrical line. In other words, we are accepting the authority of tradition without really questioning Galileo's hidden presumptions. Within certain contexts of use it may perhaps be quite convenient to interpret space and time in terms of geometrical dimensions, but these interpretations should not be taken so literally as to impose them onto the process which is nature. We should not mistake our abstractions for reality, so we should always remain critical towards claims that the physical real world sits within space and time, or exists as a 4-dimensional spacetime continuum. At the end of the day, space and time, although all too often interpreted as actually existing, geometrically specifiable dimensions, are in fact artefacts of the human intellect that follow directly from the nature-dissecting mindset on which our still well-established tradition of doing 'physics in a box' is based. This does not mean, however, that space and time are mere illusions, but rather that what we have come to think of as 'space-time' is actually an intrinsic aspect of the process of nature and can thus not be usefully reflected upon without taking nature's processuality into account.


2.1.3 The deficiencies of the geometrical timeline

So, however useful Galileo's geometrical timeline has proven to be over the years, it's quite another thing to suppose that this abstract construct should have a concrete counterpart within nature-in-itself. Although the geometrical timeline does quite well when it comes to chronologically ordering the sampled values of observables[14] by associating them with a serialized chain of (otherwise vacuous) time slots, it performs quite poorly otherwise. After all, while nature is all about change, process, action, evolution, etc., the timeline by itself is as static as can be (Cahill 2005b, 2).

[14] E.g., physical parameters such as height, distance, water level, or the angular position of a clock's hand (i.e. the 'time pointer').

Furthermore, the geometrical timeline does not allow for a present moment effect. The timeline doesn’t have a unique and indisputable Now which will automatically come to the fore during use. This lack of a dedicated present moment is in fact a shortcoming that even Einstein had become quite concerned about in his later years. As Carnap reported: “Once Einstein said that the problem of the Now worried him seriously. He explained that the experience of the Now means something special for man, something essentially different from the past and the future, but that this important difference does not and cannot occur within physics. … Einstein thought that scientific descriptions [whether they be formulated in physical or in psychological terms] cannot possibly satisfy our human needs; and there is something essential about the Now which is just outside the realm of science.” (Carnap 1963, 37-38)

To us, conscious human beings, things happen in a particular order. That is, things seem to change as they pass from the present into the future, thus leaving their past 'behind' them. Accordingly, when left to its own devices, nature typically likes to follow the path of irreversibility. Water does not spontaneously flow upwards against the slope of a mountain; milk will not unmix itself from the coffee in which it is poured; and, as we will all learn in life, we do not grow younger as we age. In classical thermodynamics, however – although gas particles in bulk tend towards disorderliness, thus leading to a preferred direction of time – the microscopic laws describing the collisions of individual particles are time-symmetric, indifferent to any distinction between a backward or forward direction of time. Accordingly, at least until the advent of quantum mechanics, all the then known 'laws of nature' were time-reversible, and to this day most of them still are:

"The reversibility of basic physical processes comes from the time symmetry of the laws that underlie them. This time-reversal symmetry is usually denoted by the letter "T." You can think of T as an (imaginary) operation that reverses the direction of time – i.e., interchanges past and future. Time-symmetric laws have the property that when the direction of time is inverted the equations that describe them remain unchanged: they are invariant under T. A good example is provided by Maxwell's equations of electromagnetism, which are certainly T-invariant [or in other words: symmetric under time-reversal]." (Davies 1995, 209)
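As a standard textbook illustration of T-invariance (not taken from the source text): under time reversal t → −t, a trajectory's velocity changes sign while its acceleration does not,

    dx/d(−t) = −dx/dt        (velocity flips sign)
    d²x/d(−t)² = d²x/dt²     (acceleration is unchanged)

so, for instance, Newton's second law F = m·d²x/dt² reads exactly the same in both time directions.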

When seen from the perspective of timeline-equipped laws of nature themselves, it seems to make no difference at all if we start tracing the timeline from left to right, or just the other way around. There's nothing in the mathematics of our laws that forbids them to be 'unrolled' counterclockwise. So it turns out that physicists, when putting these laws to the test, must first choose which direction to follow. Because the physical equations themselves do not stipulate a specific direction, they actually require external subjective choice in order for the timeline methodology to work according to plan. In fact, geometrical timelines are basically analogous to tear-off calendars, whose pages can be removed from the front to the back, but also vice versa, randomly, or in any other possible order. And – as is so well-accepted that it is mostly forgotten – tear-off calendars, as well as geometrical timelines, require social convention to know in which direction they should be read, and external manipulation to get from one time slot to the next.[15] In other words, physicists availing themselves of such an artificial timeline necessarily have to apply an additional metarule which states that an external time pointer should be run alongside the line to indicate which point on it is effectively acting as the present moment. That is, just like the tear-off calendar needs some outside help (someone else's 'helping hand') to remove the page of each day gone by, the timeline needs an external present moment indicator to get from one time slot to the next. It should be stressed that this present moment indicator (a) is entirely separate from the timeline itself, and (b) acts in total independence from any mathematical equation to which this timeline may be linked. As will be explained in more detail below, however, this external present moment indicator plays an enormously important role in the ongoing 'mathematization of nature'.

[15] Furthermore, there has to be agreement on the rate of sampling (e.g., a calendar, or timeline, with a page, or segment, for each day, week, month, year, or other measure of time). Next to that, one also has to decide which aspects of nature to associate with the geometrical timeline, etc. (cf. Cahill 2005b, 3; van Dijk 2016).
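To make the point concrete, here is a deliberately naive sketch (not from the source text) of a timeline modeled as a static sequence: nothing in the sequence itself singles out a Now or a reading direction – both have to be supplied from outside, by a convention and a pointer:

```python
# A geometrical timeline as a static, ordered chain of time slots.
timeline = [f"t{i}" for i in range(8)]     # the line itself never changes

# Metarule 1 (social convention): a reading direction, imposed from outside.
direction = +1                             # +1 reads 'forwards', -1 'backwards'

# Metarule 2 (external manipulation): a present moment indicator that is
# run alongside the line; the timeline contains no such pointer itself.
now = 0 if direction == +1 else len(timeline) - 1
while 0 <= now < len(timeline):
    print("slot currently acting as the present moment:", timeline[now])
    now += direction                       # advancing the Now is external bookkeeping
```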

2.2 From the geometrical timeline to time-based equations

Galileo’s revolutionary idea to relate the changing position of a moving body to the rate of his own heartbeat as he felt it beating in his pulse, or to other measures of time,16 basically amounts to relating the change of one thing to the change of another. Or, to be more precise, it amounts to encoding the nonlinear change in one aspect of nature (i.e., relocation in space) in terms of the simpler and steadier change of another (i.e., “relocation” in time). It should be noted, though, that comparison to another clock is basically the only way to determine the rate at which the first clock’s time indicator is changing position. Therefore, to avoid an infinite regress of calibrating clocks, the most practical solution is simply to accept the first one’s rate of change as smooth and uniform over its entire range.17 Like so, in each of his different experiments, Galileo had to assume that – more so than the beats of his own heart – the time indicator markings18 would always pass by at a perfectly even rate, so that they could be used as a standard measure of time. Now, by supposing that these markings would indeed pass by in uniform fashion, it became possible to introduce an operational definition of time.19 Such an operational definition simply takes a conveniently chosen number of the same recurrent event (e.g., the swinging of a pendulum, or the passing by of indicator markings) as the standard unit of time. In this way, since the number of counts is the only thing required to specify the magnitude of a time interval, there’s no need to know anymore whether time is something that physically exists in the so-called “real world out there.” That is, as long as the assumption of this uniform rate leads to empirically adequate results, there’s no need to know how time actually works, but only that it works – in the abovementioned operational sense, that is.

The adoption of this operational definition of time allowed Galileo to achieve a major breakthrough that ushered in a new era in the natural sciences. By closely studying the time tables he had put together from his experiments on falling and descending objects (see Table 1), he could systematically compare the object’s change in position (especially the vertical component) to the amount of time that had elapsed during its descent:

“Having placed this board [i.e., the downward track for the bronze ball] in a sloping position, by lifting one end some one or two cubits above the other, we rolled the ball, as I was just saying, along the channel, noting, in a manner presently to be described, the time required to make the descent. We . . . now rolled the ball only one-quarter the length of the channel; and having measured the time of its descent, we found it precisely one-half of the former. Next we tried other distances, comparing the time for the whole length with that for the half, or with that for two-thirds, or three-fourths, or indeed for any fraction; in such experiments, repeated a full hundred times, we always found that the spaces traversed were to each other as the squares of the times, and this was true for all inclinations of the plane, i.e., of the channel, along which we rolled the ball.” (Galileo 1632/2010, 178-179)

16 One of his first spontaneous experiments was to time the swing periods of a chandelier by using his pulse. In his later experiments, Galileo would also exploit several other means of measuring time, such as the rising level of a water clock, or, indeed, the successive sounds of synchronously ringing downhill bells.

17 Cf. McTaggart’s A and B series (reference needed).

18 Depending on which experiment was being performed, the time indicator markings in question were: (a) the water level markings, or (b) the warning bell positions.

19 Conventional definitions typically refer to something more fundamental in order to specify the definiendum. However, this way of putting together definitions will typically lead to infinite regress, or circular reasoning. For instance, the short and simple definition of time as “that which is measured by a clock” depends on the definition of a clock as “a measuring instrument for time” – a dependence relation which clearly involves circularity. The only way to avoid this infinite regress and circularity is simply to terminate the search for any more fundamental underpinnings, and, instead, to adopt an operational definition that works for all practical purposes. According to Hans Albert’s Münchhausen Trilemma (in which these three elements of infinite regress, circular reasoning, and termination of the justification procedure form an inescapable triadic unity), every such definition necessarily has to remain non-exhaustive.
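Galileo’s reported regularity – “the spaces traversed were to each other as the squares of the times” – is easy to check numerically. The sketch below (our illustration; the times-squared law is assumed, and the channel length is an arbitrary unit) verifies the ratio mentioned in the quoted passage: one-quarter of the channel takes precisely one-half of the full descent time.

    # Check Galileo's ratios: if s = k*t**2, then s1/s2 == (t1/t2)**2,
    # and one quarter of the distance takes one half of the time.
    def descent_time(s, k=1.0):
        """Invert s = k*t**2: the time needed to traverse distance s."""
        return (s / k) ** 0.5

    full = 1.0                          # full channel length (arbitrary units)
    t_full = descent_time(full)
    t_quarter = descent_time(full / 4)
    print(t_quarter / t_full)           # -> 0.5, just as Galileo reported

    # The general ratio test, "repeated a full hundred times":
    for frac in (1/2, 2/3, 3/4):
        t = descent_time(full * frac)
        print(frac, (t / t_full) ** 2)  # recovers frac: spaces ~ squares of times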

In so doing, Galileo found that the traveled distance was directly proportional to time squared:

s ∝ t²

This expression contains two variables, s for distance and t for time, and says that with each elapsed time interval, the traveled distance increases quadratically. This relation may indeed seem quite obvious, since rolling down the ramp’s entire length will take the bronze ball twice the time that is needed for the first quarter distance. But in order to really make sure that his hypothesis would stand the test of time, it seems that, further on into the experiment, Galileo decided to replace the not-so-precise water clock with the more reliable methodology of metronome-like transit sounds made by strategically placed frets (i.e. ‘speed bumps’) or alarm bells. Because this method, due to ‘double calibration’,20 ensures a high level of accuracy in making equally long time intervals, it becomes possible to introduce a standardized unit of time. And although this method left the intervals’ actual size unspecified – or simply one by default – it led to all time intervals having the same duration with a very small margin of error:

“The phrase “measure time” makes us think at once of some standard unit such as the astronomical second. Galileo could not measure time with that kind of accuracy. His mathematical physics was based entirely on ratios, not on standard units as such. In order to compare ratios of times it is necessary only to divide time equally; it is not necessary to name the units, let alone measure them in seconds. The conductor of an orchestra, moving his baton, divides time evenly with great precision over long periods without thinking of seconds or any other standard unit. He maintains a certain even beat according to an internal rhythm, and he can divide that beat in half again with an accuracy rivaling that of any mechanical instrument.” (Drake 1975, 98)

20 The ‘double calibration’ consists of: 1) synchronizing the frets or alarm bells with the back-and-forth dangles of a free-swinging pendulum; 2) synchronizing the frets or alarm bells to each other by hearing, that is, by listening whether their consecutive sounds, triggered by the descent of a downward rolling ball, form an even sequence.
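Drake’s point about dividing time evenly without naming units can be illustrated with the fret placement itself. If distances grow as t², then frets that are to sound at equal time intervals must sit at distances proportional to the square numbers 1, 4, 9, 16, and so on. The sketch below (our reconstruction, in arbitrary units – not Galileo’s actual measurements) computes such positions and confirms that the resulting transit intervals come out even.

    # If s = k*t**2, frets that click at equal time beats t = 1, 2, 3, ... must
    # be placed at distances proportional to the squares 1, 4, 9, 16, ...
    k = 1.0
    beats = range(1, 7)
    fret_positions = [k * t**2 for t in beats]      # 1, 4, 9, 16, 25, 36

    # Recover transit times from the positions and check that the beats are even:
    times = [(s / k) ** 0.5 for s in fret_positions]
    intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    print(fret_positions)   # square-number spacing along the ramp
    print(intervals)        # -> [1.0, 1.0, 1.0, 1.0, 1.0]: an even "metronome"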

It was because of this increased measurement accuracy that the proportionality constant k, relating distance and time to each other, could be determined with ample precision:

s = kt², with k = 33 (see Table 1; Section 2.1.2)

The precise value of k effectively depends on the relation between the locally valid gravitational acceleration and the applied measuring unit for distance (which, in the metric SI system, is the meter). However, it was initially based on the traveled length during the first interval t0 → t1 as expressed in terms of Galileo’s standard unit – the ‘point’. And since Galileo normalized the time interval t0 → t1 to unity, his assumption that the velocity of a freely falling object would increase uniformly led to another value for gravitational acceleration than we use today. In the case of the measurement run for the inclined plane experiment recorded in Table 1, the value of k equaled 33. This was, indeed, the number of distance markings that could be counted between the top of the ramp and the position of the first warning bell. With the standard measure for length – named ‘points’ – amounting to approximately 29/30 ≈ 0.967 mm, the traveled distances can be calculated from s = kt². Accordingly, at time t1, the traveled distance s1 would reach a value of s = (29/30) · 33 · 1² = 31.9 mm (cf. Table 1; Section 2.1.2). As a matter of fact, all other distances could be calculated in like fashion, for any single moment within the available time range. And although, during his lifetime, Galileo had to fight an uphill battle in order to defend all these ideas, today we realize that this groundbreaking spin-off of his initial proportionality relation s ∝ t² actually embodied the first version of what has now become the ‘gold standard’: the time-based physical equation.
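Since all the ingredients are given above – k = 33 in Galileo’s ‘points’, one point ≈ 29/30 mm, and the first interval normalized to t = 1 – the whole distance column of such a table can be regenerated in a few lines. The sketch below is our reconstruction; the range of time ticks is illustrative, not Galileo’s raw data.

    # Regenerate distances from s = k * t**2, with k = 33 expressed in Galileo's
    # 'points' and one point ~ 29/30 mm (values taken from the text above).
    POINT_MM = 29 / 30      # one 'point' in millimetres
    k = 33                  # points traversed during the first unit interval

    for t in range(1, 9):   # t in Galileo's normalized time units (illustrative)
        s_points = k * t**2
        print(t, s_points, round(s_points * POINT_MM, 1))
    # t = 1 gives 33 points ~ 31.9 mm, matching the value computed in the text.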

2.3 From time-based equations to physical laws

Elaborating on Galileo’s work, Isaac Newton then added force and mass into the mix, which enabled him, among other things, to formulate his three Laws of Motion – of which the most famous, the second, is another physical equation: F = ma. To these he added yet another physical equation, known as the Law of Universal Gravitation:

Fg = G · m₁m₂/r², with the gravitational constant G = 6.67 × 10⁻¹¹ N·m²/kg².
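As a quick plausibility check of how these two equations work in tandem, the sketch below computes the gravitational pull between the Earth and the Moon and then uses F = ma to recover the Moon’s resulting acceleration. The masses and the mean Earth–Moon distance are standard textbook values, inserted here purely for illustration.

    # Newton's Law of Universal Gravitation combined with his second law, F = m*a.
    G = 6.67e-11        # gravitational constant [N m^2 / kg^2], as quoted above
    m_earth = 5.97e24   # mass of the Earth [kg] (textbook value)
    m_moon = 7.35e22    # mass of the Moon [kg] (textbook value)
    r = 3.84e8          # mean Earth-Moon distance [m] (textbook value)

    F = G * m_earth * m_moon / r**2   # mutual gravitational force
    a_moon = F / m_moon               # second law, solved for the acceleration

    print(F)        # ~2.0e20 N
    print(a_moon)   # ~2.7e-3 m/s^2: the Moon's acceleration toward the Earth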

With these two equations in hand, Newton was able to lay down a basic framework for describing the motion of all of nature’s physical objects within the earthly as well as the heavenly domain. Like this, the equations dealing with the gravitation of ordinary objects on Earth could be unified with Kepler’s laws of planetary motion. Because of the enormous empirical success and the giant leap in understanding that this unification brought about, Newton’s work could give rise to the mechanistic ‘clockwork universe’ worldview. This worldview even motivated Pierre-Simon Laplace to claim that it should in principle be possible to calculate, from a given set of interim conditions, the entire history and future of nature as a whole:

“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at a certain moment would know all forces that set nature in motion, and all positions of all items of which nature is composed, if this intellect were also vast enough to submit these data to analysis, it would embrace in a single formula the movements of the greatest bodies of the universe and those of the tiniest atom; for such an intellect nothing would be uncertain and the future just like the past would be present before its eyes.” (Laplace 1814/1951, 4)

Nowadays, Laplace’s strict and absolute determinism is typically regarded as outdated.21 This is mostly due to ongoing developments in physics. With the advent of Thermodynamics (with its novel concepts of ergodicity, entropy and statistical ensembles) and Quantum Mechanics (which, in most interpretations, is taken to be inherently random and indeterministic), this rather resolute form of determinism was no longer tenable. However, although this strict Laplacian determinism had to be abandoned, most other aspects of the mechanistic worldview managed to survive. In fact, the general framework behind the mechanistic worldview not only came out alive and well, but it even went on to permeate the whole of science. As a result, our contemporary mainstream physics became very much framed in terms of what may be called the Cartesian-Newtonian paradigm. In the Cartesian-Newtonian paradigm, nature is typically interpreted as follows: 1) as an entirely physical ‘real world out there’, ready to be exploited and manipulated by us, conscious human beings; 2) as a giant collection of atomistic, elementary constituents whose behavior is governed by fixed, eternal laws of nature.

The first feature refers to the apparent necessity in physics to keep an external perspective onto the system to be observed. That is, although our physical sciences are utterly experience-based as they rely on empirical observation, the observer’s experience itself always belongs to the instrumentarium of investigation, and never to its target. As such, our nature-interpreting subjectivity resides on the other, non-physical side of the epistemic cut (cf. von Neumann 1955, 352; Pattee 2001):

“ … any attempt to include the conscious observer into the theoretical account, will cause the theory’s extra-systemic ‘view from nowhere’ (cf. Nagel 1986) to be handed over to a newly introduced meta-observer. At least from the theory’s perspective, the initial observer must then be treated as any other physical system (cf. Von Neumann 1955, 352). In this way, physical theory seems to be condemned to a ‘Cartesianesque’ split (Primas 1994, 610-612) that leads to the undesirable bifurcation of nature (Whitehead 2007, 27-30). Hence [our current physical sciences can be labelled as] ‘exophysical’ (cf. Atmanspacher and Dalenoort 1994), which refers to an external ‘view from nowhere’ onto a world that is held to be interpretable entirely in physical terms.” (Van Dijk 2016)

Furthermore, the second feature of the Cartesian-Newtonian paradigm – its corpuscular physicalism (cf. Barandiaran and Ruiz-Mirazo 2008, 297) – can be credited firstly to Galileo, who paved the way for modern physics by being the first to systematically single out material bodies as individual systems in order to study their behavior during free fall. Secondly, it can indeed be credited to Newton, who built upon Galileo’s geometry-inspired groundwork to arrive at his laws of motion and universal gravitation. The superior explanatory power of Newton’s laws – which made it possible to understand such diverse phenomena as planetary motion, the working of the tides, and the movement of material objects on Earth with the help of just one theoretical framework – was so impressive that his work really set the stage for all that followed:

“ … the success of scientific theories from Newton to the present day is based on their use of a particular framework of explanation invented by Newton. This framework views nature as consisting of nothing but particles with timeless properties whose motions and interactions are determined by timeless laws. The properties of the particles, such as their masses and electric charges, never change, and neither do the laws that act on them.” (Smolin 2013, xxiii)

Although its initial mechanistic interpretation had to be abandoned with the advent of Einstein’s relativity theories and quantum mechanics, this law-centred methodology was still quite painlessly passed on to our contemporary mainstream physics. The general procedure of predicting the future state of any system by identifying its initial conditions and then letting them be processed by ‘the laws that be’ is still the way to go (cf. Smolin 2013, 50). And even though strict Laplacian determinism turned out to be untenable,22 the general Newtonian mode of operation is still alive and well. Nowadays, rigorous determinism is not a realistic expectation anymore, but empirical agreement has taken its place. That is, when a physical equation achieves empirical agreement with the system it is held to portray, it is typically thought to closely follow the target system’s behaviour. Accordingly, the physical equation is customarily considered to represent the system at hand – if not in a direct chronological sense, then at least statistically (Van Dijk, forthcoming in 2015).

21 To begin with, it can be questioned whether the postulation of such a superintellect is scientifically acceptable at all. After all, its existence can neither be confirmed nor falsified. Furthermore, it is also quite hard to see how it should ever be possible to gather, in one go, all of nature’s information – involving all positions, velocities, and forces of all particles in the universe.

22 Over the years, determinism has taken on a somewhat less rigorous guise: reductionism. And although reductionism, in turn, comes in many different flavors, its general idea is that all of nature can be brought back to its most elementary physical foundations, which should then be expressible in terms of a concise set of physical equations. Like this, the physical equation has managed to boost its status from convenient, approximating tool (man-made artifact / abstraction / simplifying idealization) to an all-encompassing, literal representation of nature (reference needed).

2.3.1 The flawed notion of physical laws

When we call our physical equations ‘laws’, this means that, all else being equal, they apply to many, many cases (cf. Smolin 2013, 99).23 In other words, in a practically equal situation, the physical equation at hand will apply without exception, again and again. And although the Cartesian-Newtonian paradigm thus facilitates the view that there could be a complete collection of fundamental laws of nature, there is more than enough reason to disagree with this (cf. Cartwright 1983, 45-55; Giere 1999, 77-90). First of all, none of our past or even present physical theories has universal validity. This can easily be exemplified by investigating situations in which Newton’s universally valid ‘laws’ of motion are to be combined with his ‘law’ of universal gravitation (Giere 1999, 90). No two interacting bodies, anywhere in the universe, will be found to behave exactly according to these ‘laws’:

“The only possibility of Newton’s Laws being precisely exemplified by our two bodies would be either if they were alone in the universe with no other bodies whose gravitational force would affect their motions, or if they existed in a perfectly uniform gravitational field. The former possibility is ruled out by the obvious existence of numerous other bodies in the universe; the latter by inhomogeneities in the distribution of matter in the universe.” (Giere 1999, 90)

Moreover, all conditions would have to be thoroughly idealized (perfectly spherical, chargeless bodies within a frictionless environment, and so on). Richard Feynman may have celebrated Newton’s Law of Universal Gravitation as “the greatest generalization achieved by the human mind” (Feynman 1967, 14), but it is simply not true that the force between any two bodies is given by the law of gravitation alone. That is, no charged objects will behave exactly according to Newton’s ‘law’ of universal gravitation, since Coulomb’s ‘Law’ applies as well (Feynman 1967, 13-14; Cartwright 1983, 57). Moreover, the effect of both ‘laws’ typically differs depending on the scale of magnitude and other relevant system and environmental conditions. In view of all this, Cartwright goes on to argue that our celebrated ‘laws of nature’ are, in fact, no more than generalized theories:24

“Many phenomena which have perfectly good scientific explanations are not covered by any laws. No true laws, that is. They are at best covered by ceteris paribus generalizations – generalizations that hold only under special conditions, usually ideal conditions.” (Cartwright 1983, 45)

23 Newton’s Second Law of Motion (F = ma), for instance, was thought to pertain not just to one specific physical body. Instead, Newton deemed it universally valid for all masses in the universe.

These special, idealized conditions fail to present the whole picture. That is, since our physical equations always refer to some carefully singled-out system of interest, the entire rest of the universe is being ignored as if it doesn’t exist.25 This neglect of the outer-system world is an absolute necessity for a physical equation (e.g., F = ma) to be valid in many other cases. Accordingly, the idea of physical equations having the status of a ‘true law’ can only be maintained thanks to theoretical neglect:

“Newton’s second law, [for instance,] describes how a particle’s motion is influenced by the forces it experiences. … Each particle in the universe attracts every other gravitationally. There are also forces between every pair of charged particles. That’s a whole lotta forces to contend with. To check whether Newton’s second law holds exactly, you would have to add up more than 10⁸⁰ forces to predict the motion of only one of the particles in the universe. In practice, of course, we do nothing of the kind. We take into account just one or a few forces from nearby bodies and ignore all the rest.” (Smolin 2013, 100-101)
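Smolin’s point about ignoring ‘all the rest’ is easy to quantify for a familiar case. The sketch below (our illustration, with standard textbook values; the Earth–Jupiter distance is taken at its approximate minimum) compares the Sun’s gravitational pull on the Earth with Jupiter’s. The routinely neglected term turns out to be roughly four orders of magnitude smaller – which is exactly why the neglect works so well in practice, and also why the ‘law’ is, strictly speaking, never exactly exemplified.

    # How big are the forces we routinely ignore? Compare the Sun's pull on
    # Earth with Jupiter's pull on Earth (textbook values, approximate).
    G = 6.67e-11            # [N m^2 / kg^2]
    m_sun = 1.99e30         # [kg]
    m_jupiter = 1.90e27     # [kg]
    r_sun = 1.50e11         # Earth-Sun distance [m]
    r_jup = 6.3e11          # approximate minimum Earth-Jupiter distance [m]

    a_sun = G * m_sun / r_sun**2        # Earth's acceleration toward the Sun
    a_jup = G * m_jupiter / r_jup**2    # ... and toward Jupiter

    print(a_sun)            # ~5.9e-3 m/s^2
    print(a_jup)            # ~3.2e-7 m/s^2
    print(a_sun / a_jup)    # ~2e4: the neglected term is tiny, yet nonzero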

Hence, at the end of the day, we should realize that our physical equations can only be ‘universally valid’ when they are generalizations. But when we have to admit that our equations are merely approximating generalizations, they can never be thought of as being fundamental, or as having a perfect or near-perfect fit with nature. So, despite the popular view that some of our physical equations are so widely valid that they could deservedly be labeled as laws, such labeling is in fact misplaced. After all, on closer scrutiny, these so-called laws are neither true, nor universal (cf. Cartwright 1984, 13 and 45; Giere 1999, 86). Despite all this, the idea that nature can ultimately be grasped fully by a single physical equation, or a small set of physical laws, is still very much alive. It is almost as if the huge empirical success of these time-based equations makes us forget that a great amount of abstraction, idealization, simplification, approximation, and neglect is involved. When ignoring all these manipulations, we can easily become convinced that our simple equations for our local, isolated systems can be extrapolated to the entire universe. And that’s in fact exactly what happened after Galileo.

24 This includes Quantum Mechanics (Cartwright 1983, 163-216) and, according to Giere, also Relativity Theory (Giere 1999, 250n13).

25 Please note that a noise factor may be built into many physical equations so that external influences can be taken into account. However, this makeshift procedure is not used for ‘laws’, since so-called laws of nature are thought to give deterministic outcomes in many cases.

2.4 From geometrization to the timeless block universe

By expressing nature’s processuality with the help of a geometrical timeline, Galileo basically set the stage for modern science. Together with the three already known spatial dimensions, the temporal dimension thus made it possible to register all conceivable motion of physical bodies by specifying, for each fixed interval on the timeline, their position in terms of three spatial coordinates (x, y, z). The mechanistic Newtonian physics that was then built upon Galileo’s innovation managed to refine and expand the still rudimentary geometrization of nature to a great extent. In fact, although Galileo had only a promising vision of mathematics being the language in which the great book of nature was written, Newton actually seemed to have pulled off the as-good-as-complete mathematization of nature – from falling apples on Earth to the faraway heavenly bodies in outer space. According to Laplace’s strictly deterministic interpretation of the Newtonian framework, the mathematically spelled-out mechanical laws of nature governed the entire future unfoldment of the universe. Like this, it was thought that the universe as a whole could be described deterministically by simply taking its initial conditions26 and then making Newton’s laws work out the inevitable consequences. This idea of such an algorithmically laid down universe was indeed very much inspired by Galileo’s pioneering work. As a matter of fact, the renowned clockwork universe picture was conceivable only through Galileo’s conception of the geometrical timeline and the subsequent development of the timeline-compliant physical equations.

Without this ‘geometrization of nature’s processuality’, the mathematical expression of quantum mechanics and Einstein’s special and general theories of relativity would not even have been possible. But with the implementation of timeline-compliant physical equations, all major physical theories that followed in the wake of Galileo’s theory of falling bodies were effectively left timeless in the sense that: (1) their formulae have to make do without any dedicated and unique present moment; (2) the entire past, present, and future are, according to Laplace, already contained in the chosen initial conditions and the static, unchanging physical laws acting thereon. On top of that, (3) in quantum mechanics, all possible quantum states are held to exist simultaneously in what some interpret to be a static kind of superposition – until observation leads to the actualization of one particular possibility; and (4) Einstein’s relativity theories, with the newly introduced ideas of the 4-dimensional spacetime continuum and the relativity of simultaneity, made it clear that observers moving at different speeds would each experience a different order of occurrence for any arbitrary sequence of events relative to which they are moving. The entire combination of a) Galileo’s geometrization of time, b) the lack of a unique present moment in the physical equations that ensued therefrom, c) the lack of a preferred direction of time in these equations (cf. Davies 2005), d) Einstein’s discovery of the relativity of simultaneity, e) the postulation of the Einsteinian-Minkowskian 4-dimensional spacetime manifold, and f) the introduction of superposition in quantum mechanics (cf. Barbour 1999, 229-231 and 247; Smolin 2013, 80) motivated mainstream physics to drop time altogether. All this led physicists to argue that in reality there is no passage of time, but that all moments of all of eternity exist as a giant universal timescape in which all moments and configurations of nature are spread out as one eternally existing whole – comparable to the spacetime picture as presented in Fig. 2-2, but then for the entire universe.27

26 Since the term “initial conditions” is typically used for some specific, carefully selected entry out of a larger set of temporally arranged alternatives, the term “interim conditions” is probably more accurate.

27 Remarkably, contemporary mainstream physics seems to opportunistically “smuggle” time back in by stating that the block universe entails a “causal structure” of some kind. In this vast network of cause-effect chains, all events in the history of nature are thought to exist together at once – albeit with their own particular spatiotemporal coordinates (cf. Smolin 2013, 58-59). Since causal relations have a fixed order of events, with causes before effects, every causal chain, or ‘worldline’, can be said to imply the unidirectionality of time. But this impression of time is typically blamed on an asymmetry inherent to the spatiotemporal states of the world (or: slices of the block universe) as they exist side by side within a causal order, rather than an asymmetry of time as such (cf. Davies 2006, 9). Arguably, however, this line of reasoning is flawed, since it labels as nonexistent what has already been abstracted away beforehand (i.e. the process of nature loses its processuality as it is reduced to static slices that are frozen solid within an asymmetrical causal order).

Fig. 2-2: The Earth-Moon system in a temporal universe (panel a) and a block universe (panel b). In the temporal view (Fig. 2-2a), the Earth and Moon move through space in time, while in the block universe view (Fig. 2-2b), all instances of the Earth and Moon exist together at once in a giant timeless space-time continuum. In the block universe view, all experience of the Earth and Moon moving from one moment into the next is thus held to be illusory. For ease of illustration, the images show only two spatial dimensions as well as modified, unrealistic sizes and distances. Images inspired by illustrations from (Davies 2006, 9) and extensively edited from a Wikimedia Commons image of the lunar phases (original image: © Orion 8, CC BY-SA 3.0).


In the words of theoretical physicist Julian Barbour: “The most direct and naive interpretation [of the Wheeler-DeWitt equation] is that it is a stationary [time-independent] Schrödinger equation for one fixed value (zero) of the energy of the universe. ... The Wheeler-DeWitt equation is telling us, in its most direct interpretation, that the universe in its entirety is like some huge molecule in a stationary state and that the different possible configurations of this ‘monster molecule’ are the instants of time. Quantum cosmology becomes the ultimate extension of the theory of atomic structure, and simultaneously subsumes time. We can go on to ask what this tells us about time. The implications are as profound as they can be. Time does not exist. There is just the furniture of the world that we call instants of time.” (Barbour, 1999, 247)

Of course, this line of reasoning draws heavily on ideas from quantum theory. All in all, however, in the confrontation between believers and disbelievers in the passage of time, the relativity of simultaneity was what ultimately settled the score, thus making relativity the prime incentive for physicists like Barbour to try to make quantum physics comply with timelessness as well. But even before this quantum theory-based proposal of timelessness, Minkowski’s block universe interpretation of Einstein’s Special Theory of Relativity was, in conjunction with Eddington’s famous solar eclipse experiments,28 already persuasive enough to make quite some researchers join the camp of the time-refuters. The remarkable agreement between prediction and experiment, as well as the straightforwardness of Minkowski’s geometrical interpretation, eventually seem to be the main reasons for physicists to have become so convinced of the timelessness of nature. The fact that the basics of Einstein’s special relativity could be so graphically explained by his captivating thought experiments29 probably contributed to its appeal as well:

“Besides the existence of a universal speed limit that all observers agree on, special relativity depends on one other hypothesis. This is the principle of relativity itself. It holds that speed, other than the speed of light, is a purely relative quantity – there is no way to tell which observer is moving and which is at rest. Suppose two observers approach each other, each moving at a constant speed. According to the principle of relativity, each can plausibly declare herself at rest and attribute the approach entirely to the motion of the other. So, there’s no right answer to questions that observers disagree about, such as whether two events distant from each other happen simultaneously. Thus, there can be nothing objectively real about simultaneity, nothing real about ‘now.’ The relativity of simultaneity was a big blow to the notion that time is real.” (Smolin 2013, 57-58)

28 Please note that Eddington’s experiment pertained to predictions based on Einstein’s General Theory of Relativity, not Special Relativity. However, because General Relativity can be considered an elaboration of Special Relativity, it could still contribute to the adoption of the idea of timelessness within the scientific community.

29 These experiments were 1) the ‘clock-hit-by-light experiment’, pertaining to what would happen if one were to chase after a light beam reflected off the face of a running clock, and 2) the ‘train-and-platform experiment’, involving two lightning bolts striking simultaneously for one observer and at different times for the other.

Accordingly, we may summarize the argumentation for the unreality of time as follows:

1. Initial assumption: The universe is an entirely physical world with an objectively real existence;

2. The relativity of simultaneity: Observers moving at different speeds will – under certain circumstances(!) – not agree on whether two non-identical, well-separated physical events happen simultaneously or not;

3. Because of this lack of agreement, it must be concluded that there is no objectively real Now;

4. Hence, there is nothing to divide the past from the future (cf. Čapek 1976, 507);

5. Consequently, any passage of time would be a sheer impossibility;

6. Therefore, the entire history of the universe – containing not only all of its past and present moments, but also all moments yet to come – must be considered to exist all together at once, in one massive 4-dimensional block of frozen spacetime (cf. Fig. 2-2).
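Step 2 is the only technical ingredient in this chain, and it can be made concrete with the standard Lorentz transformation Δt′ = γ(Δt − vΔx/c²). The sketch below is our illustration; the 0.5c relative speed and the one-light-second separation are arbitrary choices, not values from the original text. It shows two events that are simultaneous in one frame coming out more than half a second apart in another.

    # Relativity of simultaneity via the Lorentz transformation:
    # dt' = gamma * (dt - v*dx/c**2). Two events simultaneous in frame S
    # (dt = 0) but separated in space (dx != 0) are NOT simultaneous in S'.
    c = 299_792_458.0      # speed of light [m/s]
    v = 0.5 * c            # relative speed of the moving observer (illustrative)
    dx = c * 1.0           # spatial separation: one light-second [m]
    dt = 0.0               # simultaneous in frame S

    gamma = 1.0 / (1.0 - (v / c) ** 2) ** 0.5
    dt_prime = gamma * (dt - v * dx / c**2)
    print(dt_prime)        # ~ -0.577 s: a clear disagreement about simultaneity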

It was basically this line of reasoning – inspired significantly by the easy-to-misinterpret geometry of Minkowski’s spacetime construct – that made the case for the timeless block universe, thereby basically marginalizing any possible process-oriented interpretation of nature.30 Fortunately, though, the publication of Lee Smolin’s Time Reborn has drawn renewed attention to the fact that we are in dire need of a more process-friendly, habit-driven physics. In Smolin’s view, this new physics should then be operating according to a principle of precedence, rather than proclaiming the reign of timeless law (Smolin 2013, 147). Moreover, in such a physics, the block universe interpretation would become obsolete and be replaced by an interpretation in which the cosmos is seen as a giant dynamic network of habit-establishing activity patterns. On that account, space and time should not be conceived of in terms of an abstract geometrical coordinate system. Instead, it would be better to think of them as being a process – seamlessly interwoven with the process of nature as a whole. Similar to the quantum vacuum – a well-accepted concept from quantum field theory – space is not to be seen as an absolutely empty void, but rather as a fiercely boiling ocean of activity (reference needed) – indeed, a process. And although, by lack of any further explanation, this may perhaps still sound rather speculative and premature, in Chapter 5 (on Process Physics) this will be discussed in much greater detail.

30 Next to the philosophical branch of Process Thought, we may think, for instance, of David Bohm’s, Milič Čapek’s, and Ilya Prigogine’s processual worldviews (cf. Griffin 1986).

2.5 Arguments against the block universe interpretation

Despite the enormous appeal of Einstein’s relativity theories, their huge impact on further developments within theoretical as well as experimental physics, and the many practical benefits they have given us over the years, there is still more than enough reason to handle them with a good deal of caution. Especially the block universe interpretation – in which nature is viewed as utterly timeless while our experience of time is branded as a stubborn, obstinate illusion – deserves some critical evaluation, to say the least. Although numerous weaknesses can be collected from historical as well as recent literature sources (reference needed), let’s first focus on some essential ones from a process-oriented point of view. When pressed to provide a well-founded and indisputable defense of the illusoriness of our experience of time, physicists actually have quite a hard time trying to do so. This is first and foremost because physics, being so firmly rooted in experiment, is itself utterly dependent on empirical experience. Secondly, if nature were indeed purely physical – as most contemporary mainstream physicists would have it – it’s quite difficult to see how it could ever give rise to something so explicitly non-physical as conscious experience. On top of this, the argument of time’s illusoriness becomes even more doubtful in view of the extraordinary level of sophistication that would be required for our conscious experience to achieve such an extremely convincing, but – physically speaking – pointless illusion. In other words, it would simply be next to miraculous for such an illusion ever to have evolved at all. And if that were not enough, in a completely timeless world, the utterly processual and time-related concept of evolution would make no sense in the first place.

Even though these points together would seem to make a very strong case against the eternalism of Einstein and Minkowski, most mainstream physicists do not appear to be very alarmed by them. Apparently, the prevalent attitude among physicists is that the findings of physics – arguably the most prominent member of our sciences – must certainly be more fundamental than those of the other, ‘lower-grade’ sciences, like chemistry, biology, and, especially, neuroscience. Another way to look at this, however, is to place the process of empirical experience before any possible results ensuing therefrom. Researchers on this side of the scale are far less likely to think of neuroscience, psychology, consciousness studies, and the like as being inferior to the physical sciences. No wonder, then, that the two camps will find themselves talking past each other over and over again … (!) Therefore, process-inspired arguments alone are probably not enough to end this unfruitful status quo. More than anything else, it needs to be demonstrated technically, in the language of physics, that the current timeless view of the block universe interpretation needs a thorough revision. Previously, with David Bohm, Basil Hiley, Ilya Prigogine, Henry Stapp, and Lee Smolin, to name but a few, there have been quite some attempts from within the physical sciences to move towards a more process-oriented alternative to the static, timeless view.31 So far, though, all their ‘process-friendly’ work hasn’t been able to bring about any major reputation-shattering crisis in mainstream physics. Nonetheless, these efforts should definitely be taken seriously – if only to provide us with new angles and ideas on how to tackle the many unresolved matters in physics and science as a whole. However helpful the work of the abovementioned researchers may be in getting a more process-oriented perspective on the physical sciences, for now, let’s first focus on some specific objections against the initial assumptions, the idealizing abstractions, and also the eventual interpretation of these abstractions as used in Minkowski’s timeless block universe framework. The line of reasoning from initial assumptions to the interpretation of nature as a 4-dimensional block universe can be roughly summarized as follows:

31 For a larger list of process-minded physicists, see (Eastman and Keeton 2003).


The basic initial assumptions:32

1. Reality is objectively real and mind-independent. That is, reality exists independently from our mental experience and observation of it. As such, it is an entirely physical “real world out there” whose contents range widely from Planck-scale fluctuations to subatomic particles, billiard balls and pyramids, and from small-, medium-, and large-sized planets to solar systems, galaxies, galaxy clusters and beyond.33

2. Events in nature can be localized spatiotemporally by specifying their coordinates along the dimensions of geometrical space and geometrical time.34

The crux of Einstein’s thought experiment:

3. There is relativity of simultaneity. That is, observers moving relative to each other at different enough speeds will disagree whether two well-separated, distant events occurred simultaneously, or not. For the moving observer, after all, the expected meet-ups with the two oncoming light pulses may have a different order in comparison to that experienced by the other, motionless observer.

The line of reasoning leading to Minkowski’s block universe picture:

4. With the objective existence of the real world as a background assumption, it is inferred from the relativity of simultaneity that there cannot be an objectively real Now;

5. In the absence of an objectively real Now, there can be nothing to divide the past from the future (cf. Čapek 1976, 507);

6. If there is nothing to separate past from future, then no transition from one moment to the next can occur;

7. Then, without any transition between past and future, all of eternity – past, present and future – must exist together as one in a timeless 4-dimensional block universe (cf. Fig. 2-2);

8. As a logical consequence of the absence of an objectively existing present moment, any experience of a Now, or of time passing by, must be considered an illusion.

32 Please note that this summation does not include Einstein’s assumptions, because we are here dealing with the assumptions that gave rise to the block universe interpretation as based on Einstein’s Special Theory of Relativity, not STR itself. See also (Bros 2005; ‘Five postulates of Minkowski space-time’).

33 In Einstein’s time it was unknown whether the universe actually extended beyond our own galaxy.

34 It wasn’t until Minkowski’s later introduction of the 4-dimensional spacetime construct (1908) that space and time were first interpreted as being an inseparable whole. Therefore, Einstein’s initial assumption was that the whereabouts and “whenabouts” of events in nature could be specified in terms of three space coordinates and one time coordinate (x, y, z, t).

2.5.1 The real world out there is objectively real and mind-independent (or not?)

For the sake of argument, let’s try to add some nuance to this seemingly watertight step-by-step analysis. As noted, it is already quite a firm and assertive statement to postulate an observer-independent ‘real world out there’. There seems to be more than enough reason to opt for another initial assumption. For instance, quantum mechanics suggests that our observational participation in quantum experiments plays an indispensable role in the physical world. That is, quantum events are thought to exist in a state of superposition when not being observed, while exposure to observation will make this superposition ‘collapse’ into one specific state. Yet still, the more or less unanimously held assumption within the physical sciences is that our natural universe is an entirely material world. Minkowski thought that this material world had an absolute existence as a static four-dimensional continuum. That is, although space-time allows observers to adopt different reference frames, it exists objectively and entirely independent of the mind. In his ‘world postulate’ (Weinert 2005, 169), Minkowski put forward that the four-dimensional geometry of his abstract space-time construct perfectly matched the ‘architecture’ of the ‘real world out there’ – which he simply referred to as ‘world’ (Minkowski 1908/1952, 83). This point of view meant that one could basically treat the world as a Euclidean, real coordinate space R⁴ with coordinates (x, y, z, t):

“A four-dimensional continuum described by the ‘co-ordinates’ x1, x2, x3, x4, was called ‘world’ by Minkowski, who also termed a point-event a ‘world-point’. … We can regard Minkowski’s ‘world’ in a formal manner as a four-dimensional Euclidean space (with an imaginary time coordinate [x4 = ict]).” (Einstein 1920, 122)


Over the years, this objective and materialistic view of the universe has led to an interpretation of consciousness as an emergent (but still entirely physical) property of the brain, produced exclusively by the interaction of its neurons as they are engaged in all kinds of activity patterns associated with wakefulness. However, such a materialistic view – although it is the current standard – leaves wholly unexplained what it is like to experience the information that is encapsulated within these activity patterns. Moreover, as David Ray Griffin mentioned in his banquet address at the 10th International Whitehead Conference in 2015: “Assuming that the neurons in our brains are purely physical things, without experience, most mainstream philosophers have concluded that it would be impossible to understand how consciousness could emerge out of the brain.” (Griffin 2015) It is, in other words, incredibly hard to understand how the world of subjective experience – the seeing of red and the feeling of warmth – can arise from mere physical events (cf. Edelman and Tononi 2000, 2). Renowned quantum physicist Erwin Schrödinger identified this gap between the so-called objective world of physics and the subjective world of conscious sensation as follows:

“If you ask a physicist what is his idea of yellow light, he will tell you that it is transversal electromagnetic waves of wavelength in the neighborhood of 590 millimicrons. If you ask him: ‘But where does yellow come in?’, he will say: ‘In my picture not at all, but these kinds of vibrations, when they hit the retina of a healthy eye, give the person whose eye it is the sensation of yellow.’ ” (Schrödinger 1944/1992, 153)

In the wake of Schrödinger, Bertrand Russell was keen to emphasize the inescapable role of subjectivity in physics:

“Physics assures us that the occurrences which we call ‘perceiving objects’ [i.e., the conscious end products of focal attention; a.k.a. percepts] are at the end of a long causal chain which starts from the objects, and are not likely to resemble the objects except in very abstract ways. We all start from ‘naïve realism’, i.e., the doctrine that things are what they seem. We think that grass is green, that stones are hard, and that snow is cold. But physics assures us that the greenness of grass, the hardness of stones, and the coldness of snow, are not the greenness, hardness, and coldness that we know in our own experience, but something very different. The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself. Thus science seems to be at war with itself: when it most means to be objective, it finds itself plunged into subjectivity against its will.” (Russell 1950, 15)

It is exactly this argument that is being overlooked in the starting assumptions of the block universe interpretation. By postulating in advance that our observations of nature should always conform to the criterion of absolute objectivity, these initial assumptions are implicitly already writing off subjective experience as irrelevant. After all, our observations can only be absolutely objective when our subjective experience manages to achieve an exact synonymy with nature; observation can only be objective when it involves only the registration of what is already ‘out there’, or when it manages to capture at least the bare essence of these outer-world physical contents. According to this, at first sight, rather dualistic interpretation, the contents of conscious thought basically reflect those of the externally existing physical world. Conscious thought, on this account, amounts to nothing more than the computational processing of signals as the brain is taking in information originating from the real world out there. In other words, what we experience as our subjective world of thought is, at the end of the day, just physical neurons firing in response to what goes on in the equally physical ‘real world out there.’

To non-physicalists, consciousness seems to be all too easily explained away, as if it were a mere superfluous concept. Avid physicalists, however, would argue that there isn’t even anything there to be reasoned away in the first place. That is, according to physicalism, there’s no sign whatsoever of anything nonphysical having scientifically traceable causal effects on the physical world.35 As the argument goes, the operation of our musculoskeletal apparatus depends solely on electrochemical signaling in neuronal fibers. Even our emotions, feelings, drives, etcetera, should ultimately be understood in this way, so that our conscious inner lives are, at the end of the day, nothing more than purely physical events. And although this physicalist account is indeed quite attractive because of its charming straightforwardness and non-ambiguity, it nonetheless overlooks a crucial aspect of our experience of nature. Namely, we, conscious observers, cannot ever step out of our own personal conscious awareness to check if nature-in-itself exists just as we experience it to be (cf. Van Dijk, forthcoming). For this reason, although it is nowadays considered sound scientific practice to think of nature as the ‘external physical world’, the entire phenomenal scenery around us should in fact not simply be thought of as being ‘out there’, but rather as an integral and indigenous part of all that is involved in getting the process of conscious experience up and running (see Sections 4.2.3 and 4.2.4 for more details).

In empirical experience, for instance, there is not only the target system of interest that is relevant to what the outcome will be. It is not so much the naked target system that is being put under scrutiny; it is rather the target system in interaction with 1) the entire subject side of the universe of discourse (i.e. all measurement equipment as well as the conscious observer) and, not to be forgotten, 2) the entire grand-environment in which both target and subject side are embedded. The former influence is usually mentioned more often than the latter (cf. Planck 1932, 217; Heisenberg 1958, 58; d’Espagnat 1983, 17), but physicist Joe Rosen has given a well-argued account of how events under observation are inescapably affected by the greater universe in which they are embedded:

“Since almost all laboratories are attached to Earth, the motion of Earth – a complicated affair compounded of its daily rotation about its axis, its yearly revolution around the Sun, and even the Sun’s motion, in which the Earth, along with the whole solar system, participates – requires rotation and changes of location and velocity, both for experiments repeated in the same laboratory and for those duplicated in other laboratories.” (J. Rosen 2009, 34)

Another, more visual way of illustrating how the goings-on of the entire universe affect observation comes from David Bohm:

“Consider, for example, how on looking at the night sky, we are able to discern structures covering immense stretches of space and time, which are in some sense contained in the movements of light in the tiny space encompassed by the eye (and also how instruments, such as optical and radio telescopes, can discern more and more of this totality, contained in each region of space).” (Bohm 1980/2002, 188)

35 This argument can be countered as follows: since consciousness enables us to imagine and “somatically appreciate” – i.e., give body-related meaning to something-to-be-perceived in terms of the conscious organism’s body states (cf. Damasio 1999, 133-167; Edelman and Tononi 2000, 82-110) – the different future scenarios with which we may have to cope, consciousness “steers” our current behavior in anticipation of what is expected to come. In a similar way, after all, Pavlov’s dog learned to associate the ringing of a bell with the appearance of food, which triggered the secretion of saliva, so that the dog would be better prepared to digest the food. Accordingly, from early life onwards, conscious organisms gradually learn to value what they are undergoing by how their body states are affected by it. Like this, consciousness becomes a lived “anticipatory remembered present” (cf. Van Dijk forthcoming) – i.e., a bound-in-one culmination of direct perception and value-laden memories as experienced from within – which definitely has causal consequences for physical reality. When held in the spotlight of the third-person perspective of physical science, however, it remains notoriously elusive.

All the light-emitting structures in the cosmos that are actually involved in stimulating your retinal cells during your nightly stargazing activities also have a gravitational effect on target systems that cannot be so easily ignored. As physicist Michael Berry (1978, 95-99) has been able to show, even the gravitational influence of an electron at the limit of the observable universe, some 10 billion light years away, has a noticeable effect on physical systems located here on Earth. So, all this basically means that even our most systematic and well-controlled way of observing how nature works – empirical experience – is affected by events happening at vast distances from the actual experiment. This tells us that the entire natural universe is actually involved in giving shape to our experiences. In fact, the universe at large is not only constantly ‘sending out’ light and gravitational stimuli that trigger our sensory cells in the retina and our balance-controlling vestibular system located in the inner ear, it also gives birth to all the building blocks of biological life as its vast arrays of galaxy clusters harbor innumerable amounts of stars and supernovae capable of producing, through nucleosynthesis, all naturally occurring chemical elements.36

Obviously, this entire network may indeed be available for observation by us, conscious, visually inclined organisms, but this does not yet show how all this is actually seamlessly participating in that process of observation as well. For this, we need to look at how our local environment within the universe has come to be favorable to life. That is, in order to understand how the entire universe conspires to eventually give rise to life, conscious awareness, as well as the later-developed empirical experience, we need to zoom in a little bit further onto the specific conditions of our own solar system. Particularly interesting, of course, is the small rocky planet named Earth, which travels through space at just the right distance from the Sun to be able to enjoy life-favoring temperatures and an oxygen-containing atmosphere. Other bio-friendly circumstances are its strong, protective electromagnetic field, and huge supplies of water that – under the right conditions – could probably serve as a primordial soup for life to develop. Given that all this (and more) has been necessary for our higher-order consciousness to have evolved at all – as a seamlessly embedded endo-process of the greater omni-process which is nature as a whole – we may quite rightfully state that the process of experience, at least in a latent, less condensed form, extends well beyond the limited confines of our brain-equipped skulls. In fact, it can be argued that the process of experience involves the entire universe.

Indeed, as mentioned by David Bohm (1980/2002, 188), when spending some time staring into the sky on a clear, starry night, the tiny space occupied by our eyes will in some sense ‘contain’ the goings-on of vast portions of the cosmos – and thus, indirectly, of nature-as-a-whole. The photons entering the pupil and hitting the retina may have been in transit for billions of years, emitted by stars varying widely in their historical appearance within the universe as well as in their distance from our home planet Earth. Nonetheless, all those photons come together within the eyeball, thus “informing” and making a difference to the conscious observer in question. As such, these photons are not just informing the organism in a way that fits the scheme of Classical Information Theory (as if they were prestated signals37 that are fed into an input port to eventually ‘inform’ the end station, which is often thought of as the brain’s CPU-like center of subjectivity); rather, as we’ll see later on in Section 4.2.4, they are seamlessly integrated participants in the organism’s perception-action loops – the cyclic stream of experience – through which the process of subjectivity is kept on the go.

Like this, the process of experience can be taken to involve the universe at large – from its earliest beginnings to its most recent occurrences. This doesn’t just mean that any arbitrary conscious observer can obtain sensory information about the distribution of stars and galaxies across the universe; rather, it amounts to much more than just that. After all, the absolutely critical characteristic of all these light-emitting stars (and supernovae) is that they gave birth to all the chemical elements necessary for life to be possible at all. That is, without their role as natural fusion reactors, forming chemical elements during nucleosynthesis, the odds of our current biological world having been able to appear would have been zero. So, not only is our natural universe the generative source of all the chemical elements that have eventually enabled conscious life, the therefrom evolved conscious organisms also get to sculpt their conscious Nows – the conscious twosome of self and scenery (see Sections 4.2.4 and 4.3 for a more detailed account) – by entering into cyclic interaction with the same realm from which they themselves have emerged.

Like this, conscious organisms can then be thought of as seamlessly integrated endo-processes within a greater embedding omni-process (the universe as a whole). As such, they can reflexively form a conscious Now (cf. Velmans 2009, 328) through the interplay between their own biological body states and their embedding living environment. In this way, we are in fact seamlessly embedded organisms – ‘equipped’ with a memory-based, anticipatory conscious Now which enables us to navigate, to live from, and to make sense of the larger embedding universe that forms our home as well as the ‘stuff’ that plays its part in our experiences. All things considered, we are ultimately nothing less than active participants in the reflexive process through which nature experiences itself (cf. Velmans 2009, 327-328; Van Dijk 2011, 81). Hence, instead of labeling nature as an external ‘real world out there’, it is far more helpful to treat it as a fundamentally indivisible whole in which the ‘inner life’ of the conscious organism and the ‘outer world’ of natural phenomena are actually two intimately related aspects of the same overarching psychophysical process.

At the end of the day, all the above should be more than enough to admit that subjectivity has to play a crucial role in nature. Instead of adopting the detached and disembodied ‘point observers’ of Minkowski’s block universe interpretation, with their equally detached perspective onto an otherwise entirely physical ‘real world out there’, we should preferably think of observers as seamlessly embedded, living, and utterly involved participants; intimately nested endo-processes within the greater omni-process of nature from which they ensue. Accordingly, we should not think of ourselves as the detached end-destination of passive, incoming information – whether empirical or sensory. Rather, we just as much in-form, affect, give shape, and make a difference to the process of nature as the process of nature informs and makes a difference to us. Reminiscent of, but not entirely synonymous with, David Bohm’s concept of ‘active information’ (Bohm and Hiley 1993, 35-37), all this can be said to occur in an order of active mutual informativeness (cf. van Dijk 2011, 75; van Dijk 2016).

36 Next to Big Bang nucleosynthesis (which is the main source of hydrogen [H] and helium [He] in the universe), there’s also stellar nucleosynthesis and supernova nucleosynthesis (synthesizing H and He into the heavier elements of the periodic table; reference needed).

37 See, for instance, (Kauffman 2013, 9-22).

2.5.2 Events in nature reside in a geometrical continuum (or not?)

Already in Ancient Greece, Plato, Aristotle and other philosophers cherished geometry as one of the most esteemed branches of knowledge. Roughly from the year 1100 to 1700, it was common practice to interpret nature from the teleological perspective of Aristotelian-Scholastic philosophy, which was part and parcel of the Christian doctrine that very much dominated medieval society. But as much as Aristotle’s framework had been relying on basic principles of geometry, particularly to describe the details of movement, it certainly had not succeeded in incorporating time within this geometric picture (cf. Section 2.1.1). The latter achievement was left for Galileo to flesh out. And by doing so, he managed to put together a framework that was now thought to be fully geometric. Thanks entirely to Galileo’s efforts, the scientific relevance of geometry eventually grew to unprecedented heights, thereby basically pushing the Aristotelian conception of a purposeful cosmos from the throne. In fact, Galileo was so much enthused by the outcomes of his experiments that he practically became a ‘crusader of geometry’ in his own right. As such, he felt that mathematics – which to him was identical to geometry – should be crowned as the perfect and infallible language of nature:

“Philosophy is written in this grand book – I mean the universe – which stands continually open to our gaze, but it cannot be understood unless one first learns to comprehend the language in which it is written. It is written in the language of mathematics, and its characters are triangles, circles, and other geometric figures, without which it is humanly impossible to understand a single word of it; without these, one is wandering about in a dark labyrinth.” (Galileo 1623)38

38 As quoted in (Popkin 1966, 65).

Later on, when Newton elaborated on Galileo’s work and came up with his laws of motion and gravitation, the massive empirical success basically silenced all critics (with the exception of his main opponent Gottfried Leibniz and his followers). As a result, the geometrical timeline and the therewith associated method of time-based physical equations ended up on a high pedestal, and once in this privileged position they could quite easily convince virtually anyone that spatial and temporal geometry were indeed genuine aspects of nature itself, rather than idealizing abstractions.39

Minkowski’s four-dimensional geometrical construct is ultimately an elaboration of Newtonian absolute space and time,40 which, in turn, was a keen extension of Galileo’s seminal work. Now, by fusing space and time together into one, Minkowski obtained a 4-dimensional continuum that could provide a quite straightforward framework for the relativity of simultaneity to make sense. Like this, despite spacetime’s origination from Newton’s and Galileo’s earlier geometrical understanding of space and time, the final knockout of Newton’s absolute space and time became an established fact. Minkowski’s method, however, basically consisted of refurbishing Newton’s notions of absolute space and time, using them as the semi-finished source materials for his end product: the four-dimensional spacetime continuum. So, working from Einstein’s Special Theory of Relativity, Minkowski meant to overthrow Newton’s absolute space and time not by replacing them with something else, but by fusing them together into one continuum. We should not forget, though, that Newton’s space and time were themselves already simplifying abstractions from the process of nature, and so is the spacetime construct through which they were amalgamated into one continuum. Nonetheless, by assuming that his abstractions agreed with what existed in nature and that they would hold indefinitely (i.e. here, there, and everywhere within nature, on a local as well as on a global scale), Minkowski basically tried to extrapolate to the global universe what had only been found to be so on a fairly local scale:

“ … relativity theory, both special and general, is constructed with an in-built principle of locality. This principle is manifest explicitly both in the epistemological foundations of the theory as well as its mathematical foundations in terms of the manifold approach to spacetime. Thus the point event and its local neighborhood are considered as primary with the global structure of space-time arising by consistently piecing together local neighborhood patches.” (Papatheodorou and Hiley 1997, 81)

39 Please note that “abstraction” is not the act of reducing concrete, real-world objects, events, relations, and/or phenomena to their most pure and ideal platonic forms. Rather, abstraction is the dissection and reduction of the process of nature to symbols, geometric elements, algorithms, etc., that are meaningless by themselves. They can only achieve concrete significance when situated within a socioculturally evolved, meaning-providing context of use. Like this, in order to make any sense at all, they need to be considered within a semiotic process where they can form a unified threesome with an observer-individuated referent (i.e. target system or aspect of interest) and the impact on the sign-interpreting observer; cf. Section 3.2.5.

40 For Newton, space and time were absolutes that did not depend upon any physical goings-on. Rather, they made up the backdrop within which the contents of nature could be accommodated. Like this, absolute space was seen to be unchanging and immovable. Time, on the other hand, was thought to be absolute and universal in the sense that a) it was supposed to be valid for all of nature simultaneously, and b) it was held to run its course irrespective of any events being present to unfold ‘within’ this absolute time.

This not-so-well-thought-through extrapolation of local models into global frameworks has been criticized quite strongly by Lee Smolin:

“ … the success of scientific theories from Newton to the present day is based on their use of a particular framework of explanation invented by Newton. This framework views nature as consisting of nothing but particles with timeless properties whose motions and interactions are determined by timeless laws. The properties of the particles, such as their masses and electric charges, never change, and neither do the laws that act on them. This framework is ideally suited to describe small parts of the universe, but it falls apart when we attempt to apply it to the universe as a whole.

“All the major theories of physics are about parts of the universe – a radio, a ball in flight, a biological cell, the Earth, a galaxy. When we describe a part of the universe we leave ourselves and our measuring tools outside the system. We leave out our role in selecting or preparing the system we study. We leave out the references that serve to establish where the system is. Most crucially for our concern with the nature of time, we leave out the clocks by which we measure change in the system.” (Smolin 2013, xxiii)

Although the consequences of taking this external perspective may not become immediately apparent, there are some ominous tell-tale signs. Also, since Minkowski tried to build on the same Galilean-Newtonian notions whose legitimacy he was questioning in the first place, it shouldn’t come as a big surprise that empirical agreement between theoretical expectations and measurement outcomes does indeed break down at a certain point. Accordingly, when extrapolating from a locally successful geometric construct in an attempt to eventually cover the entire universe, we may well expect deviations between theory and practice. So when Einstein in turn elaborated on Minkowski’s four-dimensional spacetime manifold to thus put together his General Theory of Relativity, we should keep in mind that this theory may actually have a limited domain of application as well. It’s of course common knowledge that general relativity only applies to large-scale, high-mass systems such as solar systems and does not work on the small-scale, low-mass level where quantum mechanics holds. Moreover, the Minkowskian-Einsteinian 4-dimensional geometrical construct may indeed have proven to be very successful in many cases, but the therewith associated theory of gravitation has nonetheless produced some rather worrying anomalies: the bore hole anomaly, the Earth fly-by anomaly, and the dark matter/dark energy anomalies (cf. Cahill 2006, 44; Cahill 2008; McCarthy 2006, 358).

Contrary to what mainstream post-geometric physics41 suggests, we cannot simply reduce the process of nature to a geometrized 4-dimensional continuum without having to pay any price for it. We don’t even have to look very far to see what has to be given up in exchange. When setting up such an artificial geometrical arena,42 populated by infinitesimal ‘point events’ and ‘point observers’,43 it shouldn’t come as a surprise that its characteristics do not fully match up with those of ‘Nature-in-full’:

“[The] relativistic space-time structure of point events and signals is only an abstraction arising from the externalization of the undivided activity of … process considered as a whole.” (Papatheodorou and Hiley 1997, 249)

In other words, it is the premature removal of all processuality, creative potentiality and subjectivity (cf. ‘innerworldliness’) from our worldview that causes our contemporary mainstream physics to portray nature as if it were entirely timeless and devoid of experience. However, physics itself – being an empirical science – is utterly based on experience to begin with. Since no point observer can be reasonably expected to exhibit any level of conscious experience, the inference that the experience of time should be considered an illusory side-effect of physical reality is rather gratuitous. Such an inference simply reformulates what was already tacitly smuggled in at the beginning. That is, such an inference – as it relies on the physicalist orthodoxy of mainstream physics – necessarily presupposes the validity of the main pillars on which mainstream physics rests. This includes the Galilean cut between quantifiable aspects of nature (e.g. length, mass and time duration) and qualitative ones (e.g. the redness of red, the warmth of heat, and the hurtfulness of pain), which already from the very beginning strips the process of empirical observation of all its experiential aspects. Further on down the line, then, by reducing live observers to abstract ‘point observers’, post-geometric physics is basically telling us that it holds consciousness to be irrelevant because it does not fit into our quantitative account of nature from which consciousness has already been removed. Just as we saw earlier on with Aristotle’s teleological physics, this amounts to a tautology, i.e. trying to justify something by falling back on its presupposition.

41 Post-geometric physics: any field in physics where geometrical dimensions are used to construct – via an act of preparatory stage-building – a “prefab arena” in which the events of interest should run their course.

42 The 4-dimensional spacetime continuum of relativity theory does not account for nonlocality. Therefore, at the least, it can be characterized as an idealizing simplification.

43 Initially, in Special Relativity, Einstein did not take gravitation into account. Only with the later development of his General Theory of Relativity were gravitation (and thus mass) presented as a natural consequence of the curvature of the geometrical spacetime continuum. As an addendum to Einstein’s first (Special) Theory of Relativity, Minkowski’s geometrical spacetime construct only had to deal with geometry, point events, point observers, and any (less than or equal to light-speed) causal connection between them (cf. Papatheodorou and Hiley 1997).


This can already be interpreted as a subtle hint that, despite what such physical abstractions may sometimes make us believe, we are not equivalent to passive, point-like end-recipients of numerical data extracted from an – according to mainstream physics – entirely physical and mathematically tractable real world out there. As will be discussed in Chapter 4 (‘Life and Consciousness’), living observers are actually seamlessly embedded endo-processes of nature that cannot be readily reduced to purely abstract point-observers. There is of course a whole lot more to be said about why it is better not to think of nature as a whole in terms of geometry, point events and point-observers. For now, however, the thing to be remembered is primarily that a bio-centric worldview in which organisms are seen as seamlessly embedded participants of the process of nature is not likely to be compatible with a geometrical continuum populated by point-size agents that are located outside of the events they aim to become informed about, instead of inside the natural world they are participating in. For those readers who would like to know more about possible reasons to reject the idea of point-observers, more background information is provided in appendix X. Next to that, in Sections 4.2.3 to 4.3.1, it is discussed how conscious organisms are well-embedded living beings that get to sculpt their sense of self and world by being seamlessly integrated participants of the same natural world they are trying to make sense of.

2.5.3 Relativity of simultaneity means that our experience of time is illusory (or not?)

With regard to the relativity of simultaneity we may consider the following: the fact that different observers moving at sufficiently different speeds may witness and undergo the effects of well-separated events in a different order showed the failure of Newton’s absolute and objective present moment. The idea that there was a steadily advancing, absolute Now – valid for the entire universe and identical for every observer within it – turned out to be flawed. However, this should not necessarily mean that nature is entirely without processuality. In other words, the scientific finding of the relativity of simultaneity does not justify the conclusion that nature is devoid of a present moment effect – nor does it license the claim that there is no passage of time and that nature should therefore be entirely non-processual. Although different observers may indeed not be able to reach an agreement on whether two well-separated events occur simultaneously or not, this does not mean that nature is beyond doubt timeless and non-processual. The relativity of simultaneity does indeed deal a fatal blow to Newton’s absolute and universally valid time, but according to Milič Čapek – a process philosopher and philosopher of science who spent much of his career trying to disentangle the many fascinating intricacies of space and time – this does not automatically imply the end of temporality as understood in a more wide-ranging sense:

“ … Newtonian time may be only a special case of the far broader concept of time or temporality in general in the same sense that the Euclidean space is a specific instance of space or spatiality in general. If we admit this possibility, then the negation of the Newtonian time entails an elimination of temporality and change in general as little as the giving up of the Euclidean geometry destroys the possibility of any geometry.” (Čapek 1976, 507)

Although the steps leading to the block universe interpretation seem quite sound and plausible, the whole argumentation for the non-processuality of nature is in fact not as watertight as one might wish. Sure enough, the conclusion of nature’s timelessness follows from its premise, which is the relativity of simultaneity. However, “ … it is simply not true that simultaneity and, in particular, succession of events are purely and without qualification relative” (Čapek 1976, 508). In fact, actual relativity of simultaneity – i.e. the case when two observers disagree about the (non)simultaneity of two events E1 and E2 because each observer has a different order in which these events are observed – will only occur under precisely defined conditions. That is, relative simultaneity will only take place when the spatial interval44 between the events E1 and E2 is greater than the maximum reach that causal action can have within the temporal interval (t₂ − t₁) that separates them.45 In other words, any two events E1 and E2 that are simultaneous for observer O1 can only be non-simultaneous for observer O2 when the spatial separation between the events is greater than the distance that light can travel within the available time frame (t₂ − t₁) between their respective coming-into-actuality (Čapek 1976, 508; see also Weinert 2005, 161-184).

44 This can be derived from Minkowski’s formula for the constancy of the world interval I = s² − c²(t₂ − t₁)² = constant (with c = 3·10⁵ km/sec = 3·10¹⁰ cm/sec, and with spatial interval s = √((x₂ − x₁)² + (y₂ − y₁)² + (z₂ − z₁)²)). Like this, s is expressed in terms of the spatial and temporal coordinates x₁, y₁, z₁, t₁, x₂, y₂, z₂, and t₂ which are geometrically associated with the events E1 and E2. Please note that all these geometrical coordinates are specified from the perspective of the observer whose reference frame is being applied.

45 In Newtonian physics, time is by convention considered to pass by at an even rate, while space is held to be spread out in equally long stretches as well. In Newtonian absolute space, therefore, the spatial distance between two stationary point positions s₁ and s₂ will be the same for all observers involved – moving or not. As a result, an object moving at uniform speed between these locations will cover the given distance Δs = (s₂ − s₁) within the same time interval Δt = (t₂ − t₁) – no matter which coordinate system is being used. In Minkowskian space-time, however, spatial distance and temporal duration are treated as equivalent. As a result, the invariance to be agreed upon is that of intervals of spacetime as an integrated whole. Such intervals are called ‘world intervals’ and are denoted by capital I (Minkowski 1908).
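To make Čapek’s ‘precisely defined conditions’ concrete, here is a minimal numerical sketch (my own illustration, not part of the original argument; the event coordinates and observer velocities are made up). It classifies the separation of two events by the sign of the world interval from footnote 44 and then checks, via the Lorentz transformation, whether any sub-light-speed observer can see their temporal order reversed:

```python
import math

C = 3e5  # speed of light in km/s, as in footnote 44

def world_interval(s, dt):
    """World interval I = s^2 - c^2*(t2 - t1)^2; I > 0 means spacelike
    separation, I < 0 timelike, I = 0 lightlike."""
    return s**2 - (C * dt)**2

def boosted_dt(s, dt, v):
    """Temporal separation t2' - t1' as judged by an observer moving at
    velocity v along the line connecting the two events (Lorentz boost)."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C)**2)
    return gamma * (dt - v * s / C**2)

dt = 1.0  # one second between the events in the chosen rest frame
for s in (2e5, 6e5):  # spatial separations in km; compare with c*dt = 3e5 km
    kind = "spacelike" if world_interval(s, dt) > 0 else "timelike"
    # scan a few observer velocities for a reversal of temporal order
    flipped = any(boosted_dt(s, dt, v) < 0 for v in (-0.9 * C, 0.5 * C, 0.9 * C))
    print(f"s = {s:.0e} km ({kind}): temporal order can be reversed: {flipped}")
```

Only the spacelike case, with s greater than c·(t₂ − t₁), admits observers who see the events in reversed order; the sketch thereby also bears out the statement below that events with s ≤ c·(t₂ − t₁) retain their order of occurrence for all observers.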

Fig. 2-3: Minkowski space-time diagram. Although Minkowski assumes that the spacetime continuum should really have a total of four dimensions, for ease of depiction only one spatial dimension is shown. Each observer or event at ‘Here-Now’ can be affected by events within the past causal light cones. The cones are depicted with a 45º angle, because, by convention, the speed of light is here equivalent to 1 space unit per time unit. Therefore, a world line cannot exceed the angle of 45º.

When their separation in space is smaller than or equal to the temporal interval multiplied by the speed of light – or, in other words, when s ≤ c·(t₂ − t₁) – then both observers will see the events occurring in the same order. All events to be found within the causal light cone will always have the same order of occurrence for all ‘inner-cone’ observers – whether they be moving or not. To get a better understanding of how this actually works, let’s take a look at a Minkowski space-time diagram (Fig. 2-3). Within the causal light cone, events are seen to neatly queue in line so that they form ‘world lines’. According to Minkowski, these world lines are the static spatiotemporal threads of events that, in loose analogy to beads on a string, are stockpiled one after the other all across the block universe – from its earliest beginnings into the infinite future. In every frame of reference, any observer’s ‘here-now’ will precede all events belonging to the observer’s causal future, while it follows after all events found within the backward cone of the observer’s causal past. The thus obtained unidirectionality of events is sometimes referred to as the arrow of time.46 Despite its slightly misleading name, this metaphorical arrow does not move into the future, but is typically thought to point towards it, thereby indicating a manifest asymmetry in nature between past and future events:

“By convention, the arrow of time points toward the future. This does not imply, however, that the arrow is moving toward the future, any more than a compass needle pointing north indicates that the compass is traveling north. Both arrows symbolize an asymmetry, not a movement. The arrow of time denotes an asymmetry of the world in time, not an asymmetry or flux of time [italics added]. The labels ‘past’ and ‘future’ may legitimately be applied to temporal directions, just as ‘up’ and ‘down’ may be applied to spatial directions …” (Davies 2006, 9)

According to the static block universe interpretation, we should, at least theoretically, be able to find all past and future events queued up in the form of world lines when tracing down the direction of this arrow of time. Like this, all these world lines and their past and future events are seen to exist together at once in one eternal, statically frozen ‘timescape’, and will forever remain so (cf. Davies 2006, 8). Accordingly, the early events are queued up at the beginning, while the later, posterior events are stockpiled further down the line. This asymmetry between the past and future chunks of the block universe motivated Čapek to reason that, on close enough scrutiny, relativity theory actually suggests that everyone’s ‘local now’ ultimately has an absolute existence, with a strictly fixed order of what comes after and what comes before:

“Since this ‘before-after’ relation is invariant in all systems, it follows that in no frame of reference can my particular ‘here-now’ appear simultaneous with any event of my causal future or with any event in my causal past. This follows from the fact that the succession of the events constituting the world lines can never degenerate into simultaneity in any system: this obviously applies to the world line of my own body. In this sense my ‘now’ still remains absolute. It is not absolute in the classical Newtonian sense since it is confined to ‘here’ and does not spread instantaneously over the whole universe. Yet, it remains absolute in the sense that it is anterior to its own causal future in any frame of reference.” (Čapek 1976, 518-519)

46 See also (Weinert 2005, 184) for the link between causality (causal chain) and the before-after asymmetry.


At the end of the day, Čapek’s argument boils down to this: “On every individual world line, the ‘here-now’ moment separates unambiguously the past events from the unrealized potentialities of the future events, and this separation holds in all other possibly existing frames of reference. It certainly cannot be called arbitrary. In this precise sense each ‘here-now’ is absolute.” (520) This basically means that it should be impossible for any such absolute ‘here-now’ to simultaneously be part of someone else’s past, or, equivalently, it means that one observer’s future events cannot already be part of another observer’s present reality: “No event of my causal future can ever be contained in the causal past of any conceivably real observer. By ‘conceivably real observer’ we mean any frame of reference in any part of my causal past or anywhere in my present region of ‘elsewhere’. In a more ordinary language, no event which has not yet happened in my present ‘here-now’ system could possibly have happened in any other system ... Since the inclusion into the causal past of the observer is the necessary condition for the perceivability of events, it means that the postulated existence of future events is unobservable in principle. … [T]he virtualities of our future history which our earthly ‘now’ separates from our causal past remain potentialities for all contemporary observers in the universe. Something which did not yet happen for us [locally] could not have happened elsewhere in the universe.” (Čapek 1976, 519-521)

In an attempt to explain the implications of the Special Theory of Relativity in lay terms, however, renowned physicist Brian Greene claims that observers at, say, 10 billion light years from Earth would be able to make their ‘Now slice’ coincide with what we would call our remote future – just by leisurely moving towards us (Greene 2004, 134-138). This, according to him, is simply an inescapable consequence of the fact that we live in a static block universe. There is nonetheless good reason to doubt this conclusion. Namely, the argument for the static block universe seems to depend largely on what Whitehead called the fallacy of misplaced concreteness – the confusion of nature itself with our theoretical abstractions of it. In this case, the confusion is between a) geometric conceptions of space and time on the one hand, and the process of nature on the other, and b) point-events and point-observers on the one hand, and actual events and live observers on the other. On top of that, there’s even another level of complication which makes things even more mixed-up. That is, other theoretical abstractions in special relativity – such as ideal clocks, ideal measuring rods,47 four-dimensional coordinate systems R4, space-time diagrams, causal light cones, photons that basically serve as bits of information, etc. – are typically used in further interplay with one another, thus yielding a level of meta-abstraction which can then again be quite easily confused with the actual process of nature as well. For instance, Alfred A. Robb argued in 1914 that:

“The work of Minkowski is purely analytical and does not touch on the difficulties which lie in the application of measurement to time and space intervals and the introduction of a coordinate system. As regards such measurement; one cannot regard either clocks or measuring rods as satisfactory bases on which to build up a theoretical structure such as is required in this subject. One knows only too well the difficulty there is in getting clocks to agree with one another; while measuring rods expand or contract in a greater or lesser degree as compared with others. ... It is not sufficient to say that Einstein’s choices are ideal ones: for, before we are in a position of speaking of them as being ideal, it is necessary to have some clear conception as to how one could, at least theoretically, recognize ideal clocks and measuring rods in case one were ever sufficiently fortunate as to come across such things; and in case we have this clear conception, it is quite unnecessary, in our theoretical investigations, to introduce clocks or measuring rods at all.” (Robb 1914/2014, 13)

Notwithstanding criticism like this, Eddington’s solar eclipse experiment and other experiments following in its wake turned out to agree so well with the predictions of Einstein’s relativity theories that most people in the field sooner or later became dedicated believers.48 Subsequently, this was what encouraged physicists to take a serious look at Minkowski’s block universe interpretation as well.

47 In his Geometry of Time and Space, Alfred A. Robb speaks of “optical lines” (Robb 1914/2014, 13). Since measurement coordination is needed – i.e. the assignment of (1) a fixed standard rate by which ideal clocks should be expected to run, or (2) a fixed standard length for the span of an ideal measuring rod – measurement practice first requires a reliable theoretical account in which this standard measure is to be grounded. But, in turn, this theoretical account can only turn to measurement practice in order to get the data on which to base its theoretical inferences of how to arrive at a reliable context-independent standard measure. In other words, measurement coordination is needed to guarantee that all ideal clocks will operate at a universally identical, standard rate when moving at any speed anywhere in the cosmos.

48 As mentioned earlier, there have also been quite a few experiments that did not agree with the relativity theories. Examples are the experiments that led to the ‘bore hole anomaly’, the ‘Earth fly-by anomaly’, and, of course, the ‘dark matter and dark energy anomalies’. Instead of leading to doubt about the theories, however, these experiments are typically thought to be indications that the data are, in one way or the other, incomplete (cf. McCarthy 2006, 358).

What they then found was that Minkowski had linked the relativity of simultaneity with the tilted simultaneity planes in his geometry-based space-time diagrams that could be projected from the past causal light cones into the future-directed cones. But instead of interpreting this as an abstraction of nature, Minkowski held that it was in fact a faithful representation of nature. That is, he claimed that nature was actually a geometry-based 4-dimensional spacetime continuum and that therefore such tilted simultaneity planes did not only appear in his space-time diagrams, but that they really existed in nature – extending not only into the past, but also into the future. Accordingly, the belief developed that all of nature’s future events are already lying ‘out there’ in wait – ready to be a part of any observer’s ‘Now slice’.49

So, in retrospect, it seems that (1) the ‘discovery’ of relative simultaneity for spacelike separated events, and (2) the above-mentioned further mix-up of initial abstractions have together been so compelling to Minkowski, Einstein and many of their contemporaries that they were willing to drop every sense of temporality without all too much ado. Unfortunately, however, while doing so, they became so committed to their geometrical interpretation of nature that they entirely overlooked the lurking risk of the fallacy of misplaced concreteness. Then, because they were most likely so mesmerized by how well the block universe concept worked out, they went on to marginalize the fact that both the relativity of simultaneity and the relativity of succession of events do not apply under all conditions (cf. Čapek 1976, 508; see also Weinert 2005, 161-184). It is like this that the debatable idea of a static, timeless block universe has survived a century of physical research and has since its inception established itself as the standard interpretation of the Special Theory of Relativity. But, with all the criticism from Sections 2.5 to 2.5.3, we can now consider ourselves sufficiently equipped to conclude that the concept of a static, timeless block universe is at least premature, and most likely flawed.

Now, although this verdict seems to open up the possibility for a dynamic block universe (as suggested by Čapek, Weinert, Ellis and others), this alternative option is still susceptible to some of the earlier-mentioned objections (for instance, it still does not take into account that we, conscious observers, are seamlessly embedded endo-processes of the greater overarching omni-process which is nature). The dynamic block universe view still hinges on the same acts of abstraction that helped facilitate the static block universe.50 Like so, a dynamic block universe interpretation will not lower the risk of committing the fallacy of misplaced concreteness51 and will still lead to the undesirable bifurcation of nature (as it still reduces live human beings to insentient point-observers). In fact, the acts of abstraction that accompany the setting up of any representational model of nature are part and parcel of what Lee Smolin calls “doing physics in a box”. So, to get a better understanding of how our interpretations of nature actually get to be formed, let’s take a look at how our current way of “doing physics in a box” is actually made to work.

49 However, only in hindsight – i.e., only after the synchronization of clocks by two spatially separated, moving or non-moving observers – can two ‘inner-cone events’ be identified as lying on the same ‘simultaneity plane’ within that light cone (see Fig. 2-3). This strongly suggests that the synchronization events (i.e., light emission, reflection and reabsorption; see Fig. xxxx) must actually have occurred before that, and that they do not pre-exist in the future light cone.

50 It is already a misleading idealization to treat position, time, events, observers, clocks, measuring rods, and the like as if they were truly representative of a ‘real world out there’ and as if they can be successfully held in one’s thoughts separately from the process of nature itself.

51 As many generations of physicists before us have done, we could decide to just stick to the well-beaten path of geometry-based approaches which has been so carefully laid down by Galileo, Newton, Einstein and Minkowski. If we would indeed choose to do so, we would eventually have to commit ourselves to abstracting the process of nature into geometry-based spatial and temporal dimensions, point events, point observers, causal light cones, and so forth. When thinking of mathematics and geometry as tools (Smolin 2013, 34), rather than regarding them as parts of an eternal, perfect and exact language of nature, however, these geometry-based abstractions are more likely to turn out to be idealizing figures of speech, not representatives of concrete reality.


3. Doing physics in a box

3.1 The Newtonian Paradigm

In his provocative and stimulating book Time Reborn (2013), Lee Smolin argued that contemporary mainstream physics is still very much based on the Newtonian paradigm: “My argument starts with a simple observation: the success of scientific theories from Newton to the present day is based on their use of a particular framework of explanation invented by Newton. This framework views nature as consisting of nothing but particles with timeless properties whose motions and interactions are determined by timeless laws. The properties of the particles, such as their masses and electric charges, never change, and neither do the laws that act on them. This framework is ideally suited to describe small parts of the universe, but it falls apart when we attempt to apply it to the universe as a whole. All the major theories of physics are about parts of the universe – a radio, a ball in flight, a biological cell, the Earth, a galaxy. When we describe a part of the universe we leave ourselves and our measuring tools outside the system. We leave out our role in selecting or preparing the system we study. We leave out the references that serve to establish where the system is. Most crucially for our concern with the nature of time, we leave out the clocks by which we measure change in the system.” (Smolin 2013, xxiii)

When ignoring the limitations of the Newtonian paradigm, we may all too easily forget that the exclusion of clocks, measuring rods and ourselves is in fact an act of idealizing abstraction. By failing to question the actual validity of this abstraction, we may soon become convinced that our simple equations for local isolated systems can be extrapolated to the global universe without any trouble whatsoever. And that’s in fact exactly what happened after Galileo. That is, building from Galileo’s proportional relationships between distance and time, Newton was able to add force and mass into the mix, and in so doing developed his more advanced equations of motion. Like this, the equations dealing with the gravitation of ordinary objects on Earth could be unified with Kepler’s laws of planetary motion, thus leading to the famous ‘clockwork universe’ worldview in which everything is determined from beginning to end. This even motivated Pierre-Simon Laplace to claim that it should in principle be possible to calculate any future state for all of nature if only we were given “at a certain moment ... all forces that set nature in motion, and all positions of all items of which nature is composed” (Laplace 1814/1951, 4). And although, nowadays, we may no longer be committed to such a strict determinism, when push comes to shove we still seem to think of our physical equations as being essentially deterministic. Over the years mainstream physics has thus developed the belief that it should in principle be possible to predict any system’s temporal evolution with as good as perfect precision. If not yet today, then at least not too long from now, any system’s future state should thus be attainable from its initial conditions and the laws of nature that are thought to govern the system’s behavior (Smolin 2013, 94).52 Like this, a system of interest is typically singled out from its environment, then some convenient intermediate state is chosen to serve as its initial condition, after which the system is put through the wringer of natural law.

52 All this is typically expected to occur in an empirically adequate way, that is, with chronological, one-on-one empirical agreement between measurement data and data-reproducing algorithms, or, otherwise, by way of statistical goodness of fit – as is the case in quantum mechanics.
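To see this recipe – single out a system, fix an initial condition, apply a timeless law – in its most stripped-down form, consider the following minimal sketch (my own illustration; the numbers are arbitrary). It steps a falling object’s state along a geometrical timeline, Laplace-style, so that every later state follows mechanically from the initial one:

```python
def law_of_motion(state, dt, g=9.81):
    """The 'timeless law': one deterministic update of the system state
    (position s in m, velocity v in m/s) under constant gravity."""
    s, v = state
    return (s + v * dt, v + g * dt)  # explicit Euler step

state = (0.0, 0.0)       # initial condition, chosen by the physicist
for _ in range(100):     # the timeline, chopped into uniform steps
    state = law_of_motion(state, dt=0.01)

# After 1 s: v = 9.81 m/s and s ≈ 4.86 m, approaching the exact
# ½gt² ≈ 4.9 m as dt shrinks. Note what never appears in the model:
# the observer, the clocks, and the system's environment.
print(state)
```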

3.1.1 The exophysical aspect of the Newtonian paradigm

The first time that these key elements – natural system, initial conditions, and mathematically spelled-out lawful regularities – appeared in the method of physics was when Galileo wrote out his experimental findings on falling objects. In fact, in order to make all this possible, Galileo had to separate what he took to be the objective primary qualities of the natural world (e.g. shape, size, position, etc.) from any subjective secondary qualities thereof (e.g. color, sound, tactility, etc.). In his view, the latter amounted basically to no more than superficial name tagging as performed by an observer’s conscious mind:

“Now I say that whenever I conceive any material or corporeal substance, I immediately feel the need to think of it as bounded, and as having this or that shape; as being large or small in relation to other things, and in some specific place at any given time; as being in motion or at rest; as touching or not touching some other body; and as being one in number, or few, or many. From these conditions I cannot separate such a substance by any stretch of my imagination. But that it must be white or red, bitter or sweet, noisy or silent, and of sweet or foul odour, my mind does not feel compelled to bring in as necessary accompaniments. Without the senses as our guides, reason or imagination unaided would probably never arrive at qualities like these. Hence I think that tastes, odours and colours, and so on are no more than mere names as far as the object in which we place them is concerned, and that they reside only in the consciousness. Hence, if the living creature were removed, all these qualities would be wiped away and annihilated. But since we have imposed upon them special names, distinct from those of the other and real qualities mentioned previously, we wish to believe that they really exist as actually different from those.” (Galileo 1623/1957, 274)

In this way, Galileo introduced into physics the distinction between what he considered to be the physical side of nature and what, in his opinion, belonged to the mental side. By stripping physical objects of any subjective sensory qualities, at last it became possible to make an objective and completely quantitative comparison between physical properties. Particularly, this crucial distinction between the physical and the mental enabled Galileo to formulate what has later become known as his ‘Law of Fall’, s ∝ t², the lawful mathematical expression in which the distance s is directly proportional to time squared. It wasn’t until after Galileo that this proportional relation was actually converted into what is now sometimes called Galileo’s equation s = ½at² (with s standing for distance, a for the constant of acceleration, and t for time). (With a ≈ 9.8 m/s² for free fall near the Earth’s surface, for instance, an object falls 4.9 m in the first second, 19.6 m in two, and 44.1 m in three – the 1 : 4 : 9 pattern of s ∝ t².) Evidently, this posthumous tribute was given to honor Galileo’s pioneering contributions to the development of the physical equation. It is however far less obvious that it was Galileo’s act of dissecting the physical from the mental which truly ushered in the era of modern physical research. This was without doubt an enormously crucial step in the history of science (cf. Goff 2013) as it set the stage for all scientists that followed in Galileo’s footsteps and played an essential role in their attempts to one day achieve the full mathematization of nature (cf. Goff 2013; Dijksterhuis 1961; Smolin 2013, 107 and 245). Despite the great progress that had been made by embracing Galileo’s method of separating subjective qualities from measurable quantities, there was also a serious downside to this way of looking at nature:

“ ... Galileo, who said that the scientific method was to study this world as if there were no consciousness and no living creatures in it. Galileo made the statement that only quantifiable phenomena were admitted to the domain of science. Galileo said: ‘Whatever cannot be measured and quantified is not scientific’; and in post-Galilean science this came to mean: ‘What cannot be quantified is not real.’ This has been the most profound corruption from the Greek view of nature as physis, which is alive, always in transformation, and not divorced from us. Galileo’s program offers us a dead world: Out go sight, sound, taste, touch, and smell, and along with them have since gone esthetic and ethical sensibility, values, quality, soul, consciousness, spirit. Experience as such is cast out of the realm of scientific discourse. Hardly anything has changed our world more during the past four hundred years than Galileo’s audacious program. We had to destroy the world in theory before we could destroy it in practice.” (Laing 1980,53 as quoted in Capra 1988, 132)

In fact, physics has nowadays become so theoretical and so much centered around mathematics and its equation-based method of representation that many physicists seem to think of nature almost solely in terms of physical equations – even to the extent that these equations are often mistaken for the natural world they were actually meant to represent. In doing so, however, these physicists are actually forgetting that the whole enterprise of trying to grasp nature in terms of mathematics is only made possible by first stripping away everything that could not be converted into mathematics in the first place; the redness of red, the sweetness of sweets, and the silkiness of silk cannot be converted into objective and lawful mathematical equations since there is no way for anyone to have access to someone else’s sensory life. For this reason, the Newtonian paradigm – which basically boils down to the idea that it should indeed be possible in principle to one day grasp the entire whole of nature within the language of rigid mathematics – is ultimately based on an incomplete picture of nature and made possible only by the premature banishment of subjectivity (cf. Goff 2013).

But next to Laing’s and Goff’s objections, there seems to be yet another downside to Galileo’s separation of object and subject. That is, the universe as a whole can never be observed – let alone be measured with clocks and measuring rods – from some ideal, all-encompassing ‘view from nowhere’ (Nagel 1986), imagined as a perspective from outside our natural universe. This impossible ‘view from nowhere’ not only amounts to a kind of mythical outside perspective onto what is generally held to be an entirely physical world, but it also refers to the assumption that observation can be performed while totally neglecting any possible observer influence on the system-to-be-observed. That is, by acting as if the observer isn’t really present at all, as if a neutral and observerless view is actually all there is, the process of observation can be presented as if it were an assumption-free, objective, and non-participating act of information intake. However, quantum theory suggests that, even when the subject system’s presence is minimized like that, the very act of harvesting information from a quantum system will make it collapse into one definite state, although prior to observation the system is typically held to exist in a superposition of all quantum states that comply with its wave function. So, despite the fact that the observer’s presence is neglected in theory, it seems indispensable in practice. Notwithstanding this fundamental limitation, our highly esteemed present-day physics firmly sticks to its habit of ‘doing physics in a box’, with all the unfavorable consequences this may have.

53 This quote is Fritjof Capra’s rendition of a personal conversation with psychologist Ronald D. Laing at a 1980 conference on ‘Psycho-Therapy of the Future’, held in the Monasterio de Piedra Hotel near Zaragoza, Spain.
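A toy illustration of that last point (my own sketch; the amplitudes are arbitrary and the ‘system’ is just a two-state toy model): before measurement the state is a superposition, yet every act of harvesting information yields exactly one definite outcome, and the superposition survives only in the outcome statistics:

```python
import random

def measure(a, b):
    """Collapse a superposition a|0> + b|1> into one definite state, with
    Born-rule probabilities |a|^2 and |b|^2 (real amplitudes for simplicity)."""
    p0 = a**2 / (a**2 + b**2)   # normalized probability of outcome 0
    return 0 if random.random() < p0 else 1

a, b = 0.6, 0.8                  # |a|^2 = 0.36, |b|^2 = 0.64
outcomes = [measure(a, b) for _ in range(10_000)]
print(sum(outcomes) / len(outcomes))  # ≈ 0.64: each single run gives a
# definite state; only the ensemble still betrays the superposition
```

The observer never appears in this formalism, yet without an act of observation nothing definite ever happens – which is exactly the tension the above paragraph points at.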

3.1.2 The decompositional aspect of the Newtonian paradigm

As mentioned above, the essence of the Newtonian paradigm can be found in the belief that our physical equations should in principle be able to represent nature-as-a-whole as if it were simply the sum total of physical systems. The singling out of those physical systems, however, ultimately hinges on only a few elementary acts of decomposition. That is, these elementary decompositions are required to set the stage in which ‘the fine art of doing physics’ is to take place. In fact, the resemblance between the art of doing physics and the art of doing theater is probably what motivated physicist and well-known science popularizer Paul Davies to portray nature in terms of the basic elements of drama: “If nature could be compared to a great cosmic drama in which the contents of the universe – the various atoms of matter – were the cast, and space and time the stage, then scientists considered their job to be restricted entirely to working out the plot. Today, physicists would not regard the task as complete until they had given a good account of the whole thing: cast, stage and play. They would expect nothing short of a complete explanation for the existence and properties of all the particles of matter that make up the world, the nature of space and time, and the entire repertoire of activity in which these entities can engage.” (Davies 1995, 16)


Just like the average audience usually doesn’t pay too much attention to the stage-building preparatory work of the theater crew, most working physicists typically remain quite indifferent to the very elementary acts of decomposition that actually enable them to do physics in the first place. Although mostly taken for granted once they’ve become a done deal, these elementary acts of decomposition basically serve to reduce nature to a compact and well-defined universe of discourse within which physics can be made to work. In order to do so, most contemporary mainstream physicists – usually without even realizing that they are relying on several a priori, and thus at root hypothetical, nature-dissecting cuts – commit to the following elementary acts of decomposition (Van Dijk, forthcoming):

• the decomposition of nature into target side and complementary subject side;
• the decomposition of the subject side into the conscious observer’s ‘center of subjectivity’ and its observation-enabling support systems, measurement instruments, etc.;
• the decomposition of the target side into relevant system-to-be-observed and irrelevant system environment;
• the decomposition of the system-to-be-observed into its ‘constituent elements’.54

54 Please note that the system environment and observation facilities are themselves thought to be made up from their own individual system constituents as well. For instance, the observation-enabling support systems, amongst which: 1) the sensory system, 2) accessories, and 3) research facilities, may respectively be divided into: 1) the eyes, optic nerves, visual cortices, etc.; 2) engineering tools and research equipment such as wrenches, cloud chambers and photo-detectors; 3) lab buildings, cleanrooms, scientific libraries, and so on. In turn, all this is embedded in a greater embedding environment and set within a historically evolved context of sociocultural and scientific use (cf. van Dijk 2011, 77; van Dijk 2016). On the whole, however, all aforementioned systems are typically taken for granted, neglected or left out of scope. Depending on the focus of the investigation, as well as the personal preference and philosophical persuasion of the chief investigator, any of the supporting subsystems on the subject side may be handed over to the target side. Like this, a measuring instrument may itself become part of the system-to-be-observed and the subject side will have to rely on ‘the naked eye’ to gather its empirical data.

Last but not least, a good case can be made that these decompositions are to be accompanied by some further, more controversial ones. The controversy relates especially to the fact that our current mainstream physics holds that nature is entirely timeless and thus basically non-processual, whereas these remaining decompositions suggest that it is not nature that is non-processual, but contemporary mainstream physics itself. Following this suggestion, below we’ll start out with the process of nature. However, once our nature-dissecting gaze has partitioned the process of nature into its alleged constituent elements, it can be looked at as if there is no processuality left. That is, Galileo and Newton have basically set the stage for the expulsion of both time and processuality by the later Einsteinian-Minkowskian block universe interpretation. After all, by decomposing the process of nature into spatial and temporal dimensions, the following decompositions later enabled Einstein to fuse space and time together ‘again’ into one spacetime continuum. However, this fusing together of space and time, when looking at it from a process perspective, may just as well be explained as a not entirely successful attempt to glue together what should not have been taken apart in the first place – namely, the undivided process of nature. Nonetheless, the Einsteinian-Minkowskian block universe interpretation presented nature as one giant block universe with past, present and future frozen solid into one static whole devoid of any unique and exclusive present moment. Starting from the fundamental assumption that nature is inherently processual, this would then require the following acts of decomposition:

• the decomposition of the process of nature into ‘occupied space’ and the ‘passage of time’;
• the decomposition of the ‘passage of time’ into a ‘geometrical timeline’ and an external and unidirectional ‘present moment indicator’ moving at a uniform rate;55
• the decomposition of ‘occupied space’ into ‘empty space’ and its ‘content’.56
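How these stacked decompositions structure any concrete ‘physics in a box’ model can be made visible in a small schematic sketch (entirely my own illustration; the class names and the toy ‘law’ are hypothetical stand-ins for any target system and equation of state):

```python
from dataclasses import dataclass, field

@dataclass
class TargetSystem:        # target side: the system-to-be-observed...
    constituents: list     # ...itself decomposed into 'constituent elements'
    state: tuple           # its configuration at one instant

@dataclass
class Environment:         # remainder of the target side, declared irrelevant
    ignored: bool = True

@dataclass
class Observer:            # subject side: reduced to a detached register
    readings: list = field(default_factory=list)

def law(state, dt):
    """Stand-in for a time-driven equation of state (here: uniform rotation)."""
    angle, rate = state
    return (angle + rate * dt, rate)

timeline = [n * 0.01 for n in range(100)]  # the 'passage of time' as a line
system = TargetSystem(["bob", "rod"], (0.0, 1.0))
observer = Observer()

for t in timeline:                          # external present-moment pointer
    system.state = law(system.state, 0.01)  # the 'content' evolves lawfully
    observer.readings.append((t, system.state[0]))

# Note what the model leaves out by construction: the observer's own body
# and instruments, the environment, and the choice of timeline itself.
```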

Indeed, before Einstein’s arrival on the scene, it was commonly believed that space and time, as derived here, were absolute dimensions existing independently from the material-energetic contents within them. This was, of course, Newton’s interpretation of how nature should hang together. So, as such, the above decompositions may be called Newtonian decompositions.

55 This present moment indicator, or time pointer, is to move externally from the timeline at a uniform rate, or else it cannot provide the otherwise completely static timeline with any dynamicity, or a distinction between past and future (cf. Cahill et al. 2000).

56 Please note that, in line with Einstein’s famous equation E = mc², this content is typically thought of in a material-energetic sense. In the timeless interpretation of quantum physics, it is thought that the stationary wave function can specify all possible configurations of the universe’s material-energetic content that are compliant with the universe’s actual initial conditions: “In quantum mechanics, [the wave function] is all that does change. Forget any idea about the particles themselves moving. The space Q of possible configurations, or structures, is given once and for all: it is a timeless configuration space. ... [T]he probability density [of this configuration space Q] has a frozen value - it is independent of time (though its value generally changes over Q). Such a state is called a stationary state. … All true change in quantum mechanics comes from interference between stationary states with different energies. In a system described by a stationary state, no change takes place. ... The suggestion is that the universe as a whole is described by a single, stationary, indeed static state.” (Barbour 1999, 229-231)


Einstein, then, although famous for having get rid of Newton’s absolute space and time, actually built upon their skeletal framework to arrive at the idea of a unified spacetime continuum. That is, although he specifically aimed to reject Newton’s absolute space and time, he still elaborated on these notions by gluing them together into one spacetime construct. And despite the many successes that could be celebrated because of this idea of unifying space and time into one whole, there unfortunately seem to be some downsides as well. This is partly because it is often not such a good idea to try to glue together what should not have been taken apart in the first place. Just as a yoke and egg white will not make a whole egg – let alone a whole chick – when being put together again,57 the merging together of space and time into one spacetime continuum will not result in the universe becoming whole and undivided. Already in the process of separation there are things that will necessarily get lost. Galileo, for instance, didn’t hesitate to separate target and subject world from one another for the greater cause of being able to specify nature in terms of mathematics. Like this, he made it particularly clear that all unquantifiable subjective aspects of observation should be thrown overboard. Next to that, the overall organization of nature as a whole is something that necessarily falls out of reach when dissecting nature into any possible variety of constituent elements. Notwithstanding all these fundamental limitations, the exophysical-decompositional method did give rise to physical equations that have been able to reach a high degree of empirical adequacy and to bring us lots of practical applications. At the heart of this success were the ideas of ‘timeline’ and ‘system state’ which allowed the fruitful collaboration between law-like ‘physical equation’ and ‘configuration space’ – two essential ingredients of the Newtonian paradigm (cf. Smolin 2013, 49-50 and 71). Together, these concepts of ‘timeline’, and ‘system state’, turned out to lend themselves quite well to the expression of natural order. This ‘natural order’ is in fact an assumption that forms one of the basic pillars 57

[57] Admittedly, this is of course a crude caricature, but a telling one nonetheless. After all, just as the shell is a crucial part of the egg that will be lost in the process of separation, there are also various aspects of nature that will be lost in the above process of decomposition. First of all, everything that is related to the subject side – measurement instruments including clocks and measuring rods, as well as the conscious observer and all unquantifiable subjective aspects of observation – is separated from what is held to be the entirely physical ‘real world out there’. Also, space, time, and mass-energy are indeed first artificially decomposed from the undivided whole which is nature in the raw, before any attempt is made to glue them back together again. But because the initially unbroken ‘whole is more than the sum of its parts’, any act of a priori decomposition will cause something essential of nature to become lost. By the way, the well-known phrase ‘the whole is more than the sum of its parts’ can easily be misunderstood, because in its deepest essence nature does not contain any real ‘parts’. That is, every ‘part’ of nature is only a ‘part’ in the sense that it is subjectively singled out and linguistically labeled as such.


on which science rests. That is, it is an axiomatic principle of science to assume order in nature. Without this assumption, it wouldn’t even be possible to do science at all (R. Rosen 1991, 58). Now, when combining the concept of ‘timeline’ with that of ‘system state,’ this natural order could be expressed with the help of time-driven equations of state – physical equations that were held to specify different successive system states with each step in time. This actually worked so well that the many successes thus achieved caused this idea of natural order to be elevated to the maxim of ‘lawful order’ or even ‘Natural Law’ (cf. R. Rosen 1991, 58):

“ … the basic cornerstone on which our entire scientific enterprise rests is the belief that events are not arbitrary, but obey definite laws which can be discovered. The search for such laws is an expression of our faith in causality. Above all, the development of theoretical physics, from Newton and Maxwell through the present, represents simultaneously the deepest expression and the most persuasive vindication of this faith. Even in quantum mechanics, where the discovery of the Uncertainty Principle of Heisenberg precipitated a deep re-appraisal of causality, there is no abandonment of the notion that microphysical events obey definite laws; the only real novelty is that the quantum laws describe the statistics of classes of events rather than individual elements of such classes.” (R. Rosen 1985, 9)

Due to its many successes, this notion of a law-abiding natural world managed to totally overshadow the alternative narrative of emergent natural order through habit formation (cf. Peirce 1992, 277; Smolin 2013, 147). Because of this dominance, our way of doing physics has become largely geared towards thinking of nature in terms of deterministic, law-like physical equations. On top of it all, however, this dominance has prevented us from seriously questioning the validity of the above decompositions. That is, instead of treating these decompositions as pre-theoretical interpretations of nature – needed firstly to enable a universe of discourse for doing physics, and secondly to be able to set up physical equations for the expression of ‘lawful order’ – we came to think of them as irrefutable and irreplaceable evergreens that should always, in one way or the other, be included in our physics. This belief has caused us to focus exclusively on 1) post-theoretical interpretation of well-matured physical equations, 2) alternative, but equivalent reformulations of these


equations, and 3) enhancement of our measurement technologies as the prime areas where breakthroughs should be expected. The confusing multitude of different interpretations for the quantum mechanics formalism, for Einstein’s relativity theories, and for classical mechanics, for instance, can be seen as a tell-tale sign of the first point.[58] As for the second option, string theory would be a good example, but more and more people consider it an outdated project (cf. Smolin 2006; Woit 2006). At the end of the list, then, state-of-the-art measurement technology (such as the LHC-detector at CERN, Switzerland and the LIGO-detectors in the US)[59] can indeed help us uncover thus far unexplored domains, but will still operate within the Newtonian paradigm and thus leave several essential aspects of nature out of the picture. Spending our thoughts and energy mostly on such post-theoretical interpretation, reformulation, and measurement enhancement has kept many researchers from trying to tackle the more fundamental pre-theoretical interpretations. Pre-theoretical interpretation relates to the very ‘elementary’ acts of decomposition on which our way of doing physics is based. As such, it is often thought to belong to metaphysics and philosophy, which, in the eyes of many physicists, makes it an area that need not be looked into anymore since it is widely considered to have reached its mature end stage. Together with some other reasons, this has even caused most working physicists to turn away from post-theoretical interpretation as well. That is, the prevailing position nowadays seems to be that of trying to refrain from interpretation altogether by retreating into operationalism or instrumentalism (informally referred to as the ‘shut up and calculate!’ approach). Unfortunately for those with an aversion to interpretation, however, this is just as much an interpretation as all other interpretations, so what may be called the ‘interpretation-avoiding argument’ does not really hold. In fact, in comparison with post-theoretical interpretation, which deals primarily with the interpretation of an instrument-based phenomenology of nature, pre-theoretical interpretation seems to take us one step closer to how we make sense of nature. Therefore, if

[58] The framework of physical equations for each of those theories is subject to all sorts of different interpretations. It is of course widely known that there is a large number of different interpretations for quantum mechanics, amongst which we can find the Copenhagen interpretation, the Bohmian hidden-variable interpretation, Everett’s many-worlds interpretation, Einstein’s neorealist interpretation, von Neumann’s extension of the Copenhagen interpretation, and Heisenberg’s potentia-actuality interpretation (see Herbert 1985, 16-29). Furthermore, next to the block universe interpretation of the theory of special relativity, there is also a dynamic block universe interpretation, as well as a Lorentzian and neo-Lorentzian interpretation, to name a few. Even for the quite straightforward classical Newtonian mechanics there are at least four empirically equivalent interpretations: 1) the action-at-a-distance interpretation; 2) the gravitational field interpretation; 3) the curved space interpretation; and 4) the analytical-mechanistic interpretation (see Jones 1991).
[59] In full, these acronyms are read as: ‘Large Hadron Collider’ at the ‘European Organization [formerly: Council] for Nuclear Research’ and the ‘Laser Interferometer Gravitational-Wave Observatory’. Both research projects have undergone several technical updates to increase their sensitivity and measurement range.


we ever want to find out if and how we can do physics without first having to break the initially unlabeled and unbroken natural world into pieces, we likely stand a far better chance if we start rethinking our first acts of elementary decomposition.

From quantum wholeness to the subject-object split and non-decompositional decomposition

Given Niels Bohr’s argument of the inseparability of observed and observing system in quantum experiments (reference needed), contemporary mainstream physics is burdened with the task of reuniting what shouldn’t even have been separated in the first place. Although the first decomposition – i.e. the division of nature into target side and subject side – is necessary for physics to enable experimentation, the very application of this division in the experimental practice of quantum physics led Bohr to the conclusion that target and subject ultimately form one inseparable whole. That is, while the split between target and subject system is an epistemological necessity to enable the practice of doing physics, this same practice suggests that this split is, at the end of the day, an ontological irreality. Ultimately, therefore, the split can be no more than a mere figure of speech, convenient for didactic purposes within the context of physics, but a figure of speech nonetheless. In the words of John Stewart Bell: “Now nobody knows just where the boundary between the classical and quantum domain is situated. Most feel that experimental switch settings and pointer readings are on this side. But some would think the boundary nearer, others would think it farther, and many would prefer not to think about it. … A possibility is that we find exactly where the boundary lies. More plausible to me is that we will find that there is no boundary.” (Bell 1988, 29-30)

According to Bell, many other near-fundamental concepts in physics are equally dubious, because they are so intimately related to the target-subject split: “The concepts ‘system’, ‘apparatus’, ‘environment’, immediately imply an artificial division of the world, and an intention to neglect, or take only schematic account of, the interaction across the split. The notions of ‘microscopic’ and ‘macroscopic’ defy precise definition. So also do the notions of ‘reversible’ and ‘irreversible’. Einstein said that it is theory which decides what is ‘observable’. I think he was right – ‘observation’ is a complicated and theory-laden business. Then that notion should not appear in the formulation of fundamental theory. Information? Whose information? Information about what? On this list of bad words from


good books, the worst of all is ‘measurement’. It must have a section to itself.” (Bell 1990, 34)

So, the following question arises: could there be a way around all those feeble foundations and their ambiguous behavior in experimental practice? A first clue can be found by looking at how the original unbroken wholeness of target world to be observed and observing subject world is already torn apart in the pre-measurement stage. That is, in both classical and quantum physics, there is a well-established tradition of ‘apartheid’ that comes so naturally to physicists that they usually don’t even think about it once, let alone twice. This ‘apartheid’ starts with the Galilean cut, by separating the so-called ‘objective’ and ‘subjective’ aspects of observation from each other in order to enable the explicitly quantitative way of doing exophysical-decompositional physics. As mentioned in Section 3.1.1, Galileo (1623/1957, 274) paved the way for mathematical physics by throwing overboard all subjective and qualitative ‘secondary’ aspects of observation (such as the colorfulness of colors, the touch of textures and the smell of scents). Since these aspects cannot be quantified, tabulated, plotted against time, and turned into mathematical relations, Galileo decided they should belong to the realm of consciousness. Others before him had never felt the need to get rid of all these sensory qualities; Galileo was therefore the first to draw this cut between the ‘world of objectivity’ and that of ‘subjectivity’: “ … the supposition that material objects instantiate sensory qualities, such as colours, shapes and odours, is incompatible with their having an entirely mathematical nature. And hence it was necessary to strip physical objects of their sensory qualities in order to make it intelligible to suppose that the physical world could be completely captured in mathematics. … However, for all its virtues, physics has never been in the business of giving a complete description of reality. It aims to give a mathematical description of the fundamental causal workings of the natural world. The formal nature of such a description entails that it necessarily abstracts not only from the reality of consciousness, but from any other real, categorical nature that material entities might happen to have.” (Goff 2013)

Although the embargo on sensory qualities made sure that only quantifiable aspects (such as location, size and weight) were taken into account, all mathematical physics that followed Galileo basically got burdened with a built-in, hidden dualism. As long as we stick to


Galileo’s cut, all kinds of problems related to this hidden dualism will continue to plague physics. For instance, the necessity to set up a universe of discourse with a dedicated subject side is fundamentally at odds with any attempt to apply our physical equations to nature as a whole. For, in that case, although it is typically taken for granted in routine, small-scale measurement situations, the entire subject side (including measurement gear, such as clocks and measuring rods, and also the conscious observer) has to be located outside the natural universe, which is impossible (cf. Smolin 2013, 46 and 80). Nonetheless, in quantum physics, not only is the Galilean cut required to enable the mathematization of observed events, but ‘quantum particles’ also need to be ‘soaked loose’ from their embedding environment (cf. De Muynck 2002, 74-75, 83, 90-91, 94) before they can be submitted to measurement by using, say, a bubble chamber, a photographic plate, or some strategically positioned photodiodes. Were we to retrace our steps, by trying to ‘re-submerge’ those ‘quantum particles’ back into their embedding environment and undo the seminal Galilean cut, what kind of decomposition would still enable us to get a hold of nature? Would we be forced to fall back onto the naked eye – unprejudiced and unmediated by technological-mathematical tools? Or would this still not enable us to stop putting nature through the filter of our nature-dissecting intellect? To be sure, we are ourselves seamlessly part of the same process we’re trying to make sense of. We can therefore expect that our scientific reasoning will always have an element of subjectivity in it, no matter how hard we try to prevent this from happening. As Max Planck put it: “Science cannot solve the ultimate mystery of nature. And this is because, in the last analysis, we ourselves are part of nature and, therefore, part of the mystery that we are trying to solve.” (Planck 1932, 217)

For this reason, when setting out to solve this mystery, we first need to find out how we, seamlessly embedded observers, get to sculpt the world in which we live into ‘conscious information’ and become ‘knowers of knowledge’ – whatever that may turn out to mean exactly. Sections 4.3 and 4.3.1 will give a more detailed discussion of how this could work. For now, however, we’ll focus on the special kind of decomposition that seems to be necessary (but perhaps not sufficient) to pull off such a thing. It is special in the sense that it can be thought of as a form of ‘nondecompositional decomposition’ since it pertains to the


coming-to-the-fore of seamlessly embedded endo-processes from within a greater embedding ecosystem of background processuality:



• the decomposition of the initially unlabeled natural world into identifiable foreground signals and indiscriminate background noise;

So, instead of pre-theoretically decomposing nature into deeply contrasting target and subject sides and then trying to fuse both sides back together again through post-theoretical interpretation, why not try to do it the other way around? That is, it is probably more fruitful to try to model nature as an undivided process from the get-go, and then to allow for an inner selection process among fore- and background patterns to take place within this process model. How such a nondecompositional process model can be made to work will be the subject of Chapter 5, where the more subtle details of Process Physics will be discussed. For now, however, we’ll try to go a little bit deeper into the concept of information and how it should be reinterpreted to make it fit a nondecompositional way of modeling our initially unlabeled natural world.

3.2 Measurement and information theory

The unlabeled natural world can be referred to in many different ways. It can go by a wide variety of names, ranging from the Kantian ‘noumenal world’ or ‘nature-in-itself,’ Alfred North Whitehead’s ‘extensive continuum’ (reference needed), John Archibald Wheeler’s ‘pregeometry’ or ‘pre-space,’ David Bohm’s and Basil Hiley’s ‘holomovement’ and ‘implicate order,’ Bernard d’Espagnat’s ‘veiled reality,’ the ancient Greek ‘apeiron,’ John Stewart Bell’s world of pre-observational ‘beables’,[60] or ‘the vacuum state’ in quantum field theory. Each and every one of these terms seems to revolve around, or, at least, leave room for, the idea of an underlying world of potentiality. As such, these terms can go quite well with the concept of ‘pure data in the wild’, which can be thought of as embryonic ‘fractures in the fabric of Being’

[60] I.e. the imperceptible counterparts of ‘physical observables’.


(cf. Floridi 2011, 85-86) or as ‘ur-differences that make a difference’ (cf. Bateson 1972, 459; van Dijk 2011, 75).[61] This conception of information as a difference that makes a difference fits precisely in the scheme of the abovementioned embedded endo-processes emerging from within a greater ‘ocean of potential’ which is the embedding background processuality (cf. Section 5.2). In mainstream physics, however, quite another concept of information holds sway. Information is here typically used in the syntactical sense of numerically expressed empirical data. These empirical data, however, need a prestated alphabet of symbols (Kauffman 2013, 11) to be expressible in the first place. In nature, there is no such pre-available character set,[62] but classical information theory and mainstream physics readily take their symbolic alphabets for granted without even questioning their validity or thinking about how they were put together at all (cf. Shannon and Weaver 1949). In contemporary mainstream physics, information is what crosses the boundary between the target and subject side of a universe of discourse. It is, however, through pre-theoretical decomposition (cf. Section 3.1.2) that the universe of discourse for doing contemporary mainstream physics (see Fig. 3-1) can be put together at all. The main function of this universe of discourse is to divide nature into target and subject side, establish an information-exchanging relation between them, and, then, to convert any thus acquired raw measurement results into well-refined empirical data that should be compressible into concise mathematical algorithms (or, in other words, into ‘lawful’ physical equations).

Fig. 3-1: Simplified universe of discourse in the exophysical-decompositional paradigm

Analogous to physics, classical information theory requires a division of the world into an information-providing data source and an information-acquiring endpoint. Next to that, it

[61] These rudimentary activity patterns can thus be imagined to be emergent from an initially undifferentiated vastness (e.g. via something not unlike a phase transition). Hence, from very early on, nature can already be thought of in a mutually informative (hence, epistemic), as well as in an ontic sense.
[62] DNA, for instance, hasn’t been pre-available in the biosphere, but had to evolve within it. In the prebiotic universe, it becomes even harder to find something analogous to a symbol-based alphabet. In fact, the introduction of an alphabet of symbols can be seen as part of pre-theoretical interpretation.


draws on an explicitly quantitative method for analyzing the symbolic codes of transmitted and incoming messages. Such quantitative information is typically employed in the form of syntactical units of expression, for instance, as Morse code in telegraph communication, as binary bit strings consisting of digital ones and zeroes, or as the 26 letters of the English alphabet. In everyday life, once fully accustomed to such a symbol-based system of communication, it basically becomes second nature to think of incoming data mostly in terms of the meaning that we usually like to attach to it. As one becomes a more advanced member of the reading community, for instance, it will typically be quite difficult to suppress the tendency to search a completely randomized text for the presence of familiar words, abbreviations, or other sequences to which meaning can be attached. In fact, language is so natural to us that we almost automatically think of linguistic meaning being in the text at hand, or even in the digital coding behind the fonts on our computer screen. But binary bits are utterly meaningless to the computer itself, and so are the pixels, and the magnetic writings on the hard disk drive. In fact, a computer doesn’t need to understand what it’s doing in order to do its work in a – for us – meaningful way. Likewise, in classical information theory, information need not, and typically does not, have any meaning attached to it. Instead, it is primarily understood in a purely quantitative sense as purely passive data, the becoming available of which will reduce the recipient’s initial uncertainty about the until then unknown contents of the original message as it was released from the information source (cf. Shannon and Weaver 1949, 108-109; Berger and Calabrese 1975). Accordingly, information theorists have no use for any meaning to be awarded to communication signals; from their perspective it only matters how much of the original message is still intact when it arrives at its final destination. Any possible meaning that a symbol-interpreting recipient could perhaps attach to this message later on would thus remain completely irrelevant. In exophysical-decompositional physics the situation is much the same. That is, despite Thomas Kuhn’s argumentation that empirical data will always be theory-laden,[63] it is still implicitly assumed that empirical data can eventually be assessed in an exact, objective, and purely quantitative way and that any possible qualitative aspects are merely a matter of post-measurement interpretation. This implicit assumption entails that all earlier rounds of

[63] Necessarily, measurement outcomes will always have meaning-providing interpretation associated with them due to the measurement and background theories that give rise to the conversion of raw data into well-refined empirical data (Kuhn 1962/2012, 123).


preparation, measurement fine-tuning, and re-interpretation are typically taken for granted once a certain measurement practice has reached maturity (cf. Van Fraassen 2008, 138-139; Van Dijk 2016). Accordingly, the involvement of all kinds of subjective qualitative choices about competing measurement theories, background assumptions, initial conditions, etc., is thought to become largely irrelevant when the long-aspired ideal of picture-perfect agreement between empirical data and algorithm comes into sight. And although such a complete and absolute empirical fit may be impossible in practice, it is widely believed that increasing measurement precision will allow the observer to approach it in the limit, thus progressively approximating the perfectly isomorphic correlations that are anticipated in representational theory.

According to this representational view – which is basically inherent to the exophysical-decompositional paradigm – empirical data can be mined for regularity-exhibiting patterns that are thus held to be objectively informative about the lawful regularity of nature itself; all this regardless of any possible later-conceived interpretations. In line with Robert Rosen’s analysis (1991, 58-59), exophysical-decompositional physics is possible only by adherence to the following two premises:

• Firstly, there must be lawfulness to nature. That is, orderly causal relations are thought to hold between nature’s observable and even its unobservable events.

• Secondly, it is proposed that these causal relations can be communicated, at least partially if not entirely, by way of informational relations that can be established between ‘nature as observed’ and the current conscious observer in charge.

Along these lines, these two principles of Natural Law boil down to nature having an inherent orderliness associated with it. Arguably, then, this allegedly inherent orderliness “can be matched by, or put into correspondence with some equivalent orderliness within the self [i.e. the mind, or the observer’s ‘center of subjectivity’]” (R. Rosen 1991, 59). However successful it has been in the past, this notion of Natural Law comes with some not-to-be-underestimated limitations. In particular, it leads to a worldview in which nature is being assessed exclusively in terms of its effect on something else – for instance, on us, conscious observers, or on measurement equipment – rather than in terms of what it is in and to itself. Accordingly, physics will never be able to go beyond a phenomenology of nature since it can be no more


than an account based on the registered change in the subject system’s state (cf. Sections 2.1 and 3.1.2 to 3.2.2). In other words, the subject system’s grain-size of observation will automatically determine the lower limit of the observation range, thus leaving out of scope all that may fall below it. Moreover, this limitation relates directly to the nature of information. That is, in this phenomenology-based approach to physics, information can only be defined on the basis of the smallest distinction that can be made by the subject system at hand. As such, this phenomenology-based information will never be able to attain the status of Luciano Floridi’s ‘pure data in the wild’ which, arguably, should amount to fundamental ‘fractures in the fabric of Being’ (2011, 85). Instead, it is typically assumed that any such subphenomenal ‘ur-differences’ in the alleged ‘real world out there’ must somehow be capable of affecting the later-manifesting phenomena in the observable domain, thereby allowing conscious observers to interpret them according to some suitable context of use (Van Dijk 2016, n4). However, although the linear hierarchy of the exophysical-decompositional universe of discourse may perhaps imply quite strongly that there will indeed be measurement interaction, the question of how such measurement interaction should actually take place will necessarily remain unanswered.[64] Likewise, there will always be an indefinite amount of uncertainty regarding the extent to which our measurement phenomenologies can be thought of as representing the above-mentioned unobservable ‘ur-differences’.[65] So, in order to find a satisfactory way around this fundamental indeterminacy, physics eventually more or less settled for an instrumentalist solution. That is, the apparently unfeasible direct representational relation between ‘pure data in the wild’ and ‘well-refined empirical data’ was simply substituted by an indirect probabilistic relation holding among the many members of a statistical ensemble of possible measurement outcomes. As explained below, this statistical approach is a clear example of classical information theory put into practice.

[64] Although target side and subject side can arguably be decomposed into an arbitrary number of constituents, this cannot be done for the measurement interaction between those opposing sides. This is due to what may be called ‘the problem of the missing meta-observer’ which is inherent to the use of an epistemic cut between target and subject system. However, when allowing such a new meta-observer to examine the finer details of measurement interaction, the same problem will occur all over again, albeit this time between the newly introduced meta-observer and the initial measurement interaction that became the new target of investigation (cf. Von Neumann 1955, 352; Pattee 2001; van Dijk 2016).
[65] Depending on which metaphysical system is being used, these theoretically assumed ur-differences may indeed be referred to as noumena, be-ables (i.e. the counterparts of observables), actualities, existents, and so on. In each case, however, it should be noted that the use of the plural noun form already involves a tacit elementary act of decomposition which dissects the one undivided whole of nature into a multiplicity of constituent entities.


3.2.1 Looking at measurement in a purely quantitative, information-theoretical way

In mainstream physics, information is initially presented in a purely syntactical sense. Accordingly, the ‘becoming available’ of empirical data via the process of measurement is typically seen to reduce the observer’s earlier existing uncertainty (cf. Shannon and Weaver 1949, 108-109; Berger and Calabrese 1975; van Dijk 2011, 75) so that the reduction of the observer’s uncertainty can be treated in an entirely quantitative manner. That is, the increase in knowledge about some target system of choice is expressed exclusively in terms of the relative amount of data that the observer is capable of extracting from it. Like so, the actually obtained value state in each instance of measurement can be compared with the total number of potentially available value states, thereby leading to a purely quantitative expression:

“ … in classical information theory (Shannon and Weaver 1949) every single incoming datum will lessen the recipient’s uncertainty about a given amount of possible alternatives. Like this, information is a measure of the decrease in uncertainty about the occurrence of a specific event from a given set of possible events. This information-theoretic measure of uncertainty reduction is quantified as follows: The probability of occurrence for each specific member (e.g. characters, or system states)[66] of a given set of possibilities (e.g. an alphabet, or a fixed region in phase space) depends on the total amount of available options and the relative frequency of all these alternatives individually (cf. Fast 1968, 325-326). In other words, Shannon’s information-theoretic measure of uncertainty reduction indicates the relative decrease in ignorance about which options from a prespecified collection of possibilities get to be selected as the individual content values of the data signal under construction.” (Van Dijk 2011, 76-77)
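To make this purely quantitative reading of information concrete, here is a minimal sketch in Python (the helper function and the numbers are our own illustrative assumptions, not part of the cited texts). It computes Shannon’s entropy over a set of alternatives and expresses ‘information’ as the decrease in uncertainty once an incoming datum rules some of them out:

    import math

    def shannon_entropy(probabilities):
        # H = -sum(p * log2(p)) in bits, taken over all nonzero probabilities
        return -sum(p * math.log2(p) for p in probabilities if p > 0)

    # A priori: four equally likely alternatives (e.g. four candidate characters)
    h_prior = shannon_entropy([0.25, 0.25, 0.25, 0.25])   # 2.0 bits

    # A posteriori: an incoming datum rules out two of the four alternatives
    h_posterior = shannon_entropy([0.5, 0.5])             # 1.0 bit

    # Information, in this syntactical sense, is the reduction in uncertainty
    print(h_prior - h_posterior)                          # prints 1.0 (bit)

Note that nothing in this calculation refers to what the alternatives mean; only their number and relative frequencies matter.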

Jerome Rothstein, a physicist who wanted to analyze nature in an information-theoretic sense, thought that it should be possible for a physical system to predict, at least partially, its own future state and that of its external surroundings merely by performing operations on its environment and itself. By thinking of physical systems as information-processing automata, or “well-informed heat engines” (Rothstein 1964), he can be seen as having tried to reinterpret

[66] Please note that classical information theory allows various kinds of members – ‘elementary items of information’ such as symbols, signs, tokens, bits, byte-sized bit strings, syllables, or words – to be used interchangeably.


the Newtonian paradigm in a computer-like fashion. That is, by looking at natural systems as if they were in fact processing, storing, and responding to incoming information, he could treat them in a way entirely analogous to information theory.

Fig. 3-2: Rothstein’s analogy (between communication and measurement)

In this way, Rothstein claimed that the processes of measurement observation and communication should be seen as analogs – the former being completely equivalent to the latter. Accordingly, he decided to depict the measuring procedure in physics as if it conformed absolutely to a communication system from Shannon’s classical information theory (see Fig. 3-2). As a result, he addressed information in physics in the following way: “Let us now try to be more precise about what is meant by information in physics. Observation (measurement, experiment) is the only admissible means for obtaining valid information about the world. Measurement is a more quantitative variety of observation; e.g., we observe that [an object] is near the right side of a table, but we measure its position and orientation relative to two adjacent table edges [italics added]. When we make a measurement, we use some kind of procedure and apparatus providing an ensemble of possible results. For measurement of length, for example, this ensemble of a priori possible results might consist of: (a) too small to measure, (b) an integer multiple of a smallest perceptible interval, (c) too large to measure. It is usually assumed that cases (a) and (c) have been excluded by selection of instruments having a suitable range (on the basis of preliminary observation or prior knowledge). One can define an entropy [i.e. an information-theoretical measure of uncertainty] for this a priori ensemble, expressing how uncertain we are initially


about what the outcome of the measurement will be. The measurement is made, but because of experimental errors there is a whole ensemble of values, each of which could have given rise to the one observed. An entropy can also be defined for this a posteriori ensemble, expressing how much uncertainty is still left unresolved after the measurement. We can define the quantity of physical information obtained from the measurement as the difference between initial (a priori) and final (a posteriori) entropies. We can speak of position entropy, angular entropy, etc., and note that we now have a quantitative measure of the information yield of an experiment. A given measuring procedure provides a set of alternatives. Interaction between the object of interest and the measuring apparatus results in selection of a subset thereof. When the results of this process of selection become known to the observer, the measurement has been completed.” (Rothstein 1951, 172)
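Rothstein’s ‘quantity of physical information obtained from the measurement’ can be put in the same sketch-like terms (all numbers below are hypothetical): it is the entropy of the a priori ensemble of possible readings minus the entropy of the a posteriori ensemble of values that could still have produced the one observed reading:

    import math

    def uniform_ensemble_entropy(n):
        # entropy of n equally likely alternatives: log2(n) bits
        return math.log2(n)

    # A priori: the instrument resolves 1000 distinct length intervals,
    # all taken to be equally likely before the measurement is made
    h_apriori = uniform_ensemble_entropy(1000)    # ~9.97 bits

    # A posteriori: experimental error leaves about 4 adjacent intervals,
    # each of which could have given rise to the one observed reading
    h_aposteriori = uniform_ensemble_entropy(4)   # 2.0 bits

    # the information yield of the experiment, in Rothstein's sense
    print(round(h_apriori - h_aposteriori, 2))    # prints 7.97 (bits)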

3.2.2 The Modeling Relation: relating empirical data to data-reproducing algorithms

Although it is more or less common practice within physics to speak of physical equations as representing nature, this is rather a shorthand expression for a somewhat more complicated state of affairs. What is actually meant when physicists speak of ‘mathematical representations of nature’ is that the physical equations are algorithmic compressions of well-refined samples of empirical data that are then associated with something that is commonly known as a natural system. To be more precise: “In the exophysical-decompositional paradigm, samples of empirical data are typically superimposed onto the observed aspects of nature that we like to label ‘the natural system N.’ Like this, empirical data and data-reproducing algorithm are basically ‘forced’ to be synonymous with ‘their’ natural system N (cf. R. Rosen 2012, 71-75). Without this forced synonymy, physics as we know it would not be able to function at all.” (Van Dijk 2016, n9).

Now, let’s take a look at what mainstream physicists actually intend to say when they speak of ‘the mathematical representation of nature’. For the sake of convenience, this will be illustrated by taking Galileo’s inclined plane experiment as a case in point:

Firstly, a target system is singled out from its natural environment and then some particularly interesting aspect of this system is chosen to be specified in terms of its effect on the


measurement equipment at hand. For instance, a bronze ball with mass m could be picked out as the target system of choice, after which its position is selected as the main phenomenon of interest. When this ball is released on top of an inclined plane, it would roll down the ramp and cause the strategically positioned warning bells to call out their notification signal (see Section 2.1.2). It should be noted that the physical observables of current interest, distance and time, owe their status of quantifiable variables exclusively to a very specific fact. It was, after all, Galileo’s idea of putting together a linear, continuous scale of chained-together standard units for both distance and time that made it possible to measure length and duration simply by counting the number of elapsed markings. By introducing this method of determining the ratio between a standard unit of time and distance and their final count for each measurement run,[67] he basically ushered in the golden age of mathematical physics. In fact, to this day this proportional means of assigning numbers to what he took to be the physical characteristics of nature is very much at the heart of what mathematical physics is all about. This general methodology enables us to record what is being observed, and then to index it by when it is observed (R. Rosen 1991, 69). That is, by matching the bronze ball’s change in position with the simultaneously recorded change in time, Galileo could establish motion as a physically quantifiable phenomenon. What’s more, spelling out these physical characteristics of nature in terms of numbers made it possible to search for any mathematical regularity in the pattern of the thus harvested empirical data. Like this, well-formed samples of empirical data can be put together. That is, a more polished, mathematically related time series [s(t_n), t_n] (with n = 0, 1, 2, 3, …) can be derived from the initially raw empirical data to get a smoothly curved transition from initial to final conditions. For instance, from all temporally arranged number pairs [s(t_n), t_n] one pair may be chosen to serve as the starting entry. Then, after an arbitrarily short interval,[68] the next pair can be taken to serve as its immediate successor. As shown in Fig. 3-3a, the evolution of system states, from initial conditions [s(t_0), t_0] to the immediately succeeding follow-up conditions [s(t_1), t_1], and so on, is taken to be the result of some physical operation that is going on in nature.

[67] As apparent from the example of weighing scales, which had already been around in Ancient Greece, the basic method of proportional comparison was already available long before Galileo. He was the first, however, to systematically apply it to time and distance combined.
[68] Nowadays, this usually depends on the maximum frequency of measurement. In Galileo’s case, however, it depended on the shortest period that could practically be achieved for the standard interval of time.
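As a minimal illustration of the time series [s(t_n), t_n] just described (with hypothetical numbers; Galileo of course counted physical markings rather than running code), the following Python sketch records what is observed – the travelled distance s along the ramp – and indexes it by when it is observed:

    a = 2.0    # assumed constant acceleration along the incline (arbitrary units)
    dt = 0.5   # the standard interval of time (one elapsed marking)

    # record what is observed (distance s), indexed by when it is observed (t_n)
    time_series = []
    for n in range(6):
        t_n = n * dt
        s_n = 0.5 * a * t_n ** 2   # distance grows with the square of time
        time_series.append((s_n, t_n))

    print(time_series)
    # [(0.0, 0.0), (0.25, 0.5), (1.0, 1.0), (2.25, 1.5), (4.0, 2.0), (6.25, 2.5)]

The regularity in these number pairs – distance proportional to the square of elapsed time – is precisely the kind of pattern that invites compression into a concise, ‘lawful’ physical equation.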


Fig. 3-3: Steps towards Robert Rosen’s Modeling Relation.

Once the numerical values for the initial and all following conditions are recorded, they are inferred to relate to one another by a mathematical operation that closely matches this physical operation (see Fig. 3-3b). In this way, it is supposed that the empirical data extracted from the so-called ‘physical world’ is represented by the mathematical world. In turn, the mathematically calculated results will have to be verified against the next pair of numbers in the sample of empirical data. And if sufficient agreement between calculated and observed


values cannot be found, the initially applied mathematical operation for getting from one state to the next will have to be revised in order to achieve empirical congruence between natural and formal system. This basically amounts to a more detailed re-interpretation of the sample of raw empirical data by the newly applied mathematical operation. Now, this accords closely with what we’ve already seen in Sections 3.2 to 3.2.2; namely, that the putting together of such a mathematical representation is an act of abstraction. This means that any resulting abstract equation should be converted back again into the empirical data in order to confirm its agreement with measurement. After all, the raw empirical data, if not the ultimate primary source of nature’s information, should at least be seen as the first level of information that is algorithmically compressible. Accordingly, in the exophysical-decompositional paradigm, it’s actually those raw empirical data that make up the actual measurement phenomenology of the process under investigation. And since we cannot go beyond this phenomenology, we must realize that the thus achieved empirical agreement is between ‘phenomenal’ data and abstract algorithm, not between nature and mathematics. So, crucially, the processuality of nature (on the extreme left) can only be approximated or implied by mathematical inference. In other words, when using the method of physical equations, nature’s processuality can never be grasped in full.

Despite this implicational character of data-reproducing algorithms, our well-established physical equations are typically still thought of as having a representational relation with the target system whose data they are trying to replicate. In order to keep this representational interpretation afloat, the empirical agreement between data and algorithm should reach a level of at least near-perfection, if not beyond that. To be able to pull this off, there always needs to be a ‘post-dictive’[69] measurement encoding in which the recorded samples of raw empirical results are converted into more polished data that can then be mathematically compressed into a short physical equation. Indeed, this measurement encoding amounts to implicational mathematization. Subsequently, on the way back from abstract mathematical results to concrete phenomenal results, there also needs to be a predictive decoding that extrapolates from the thus achieved physical equation what the numerical values of future measurement outcomes

[69] Here, ‘post-dictive’ is used as the counterpart of ‘predictive’. In this way, it refers to the encoding of past empirical data into a potentially congruent data-reproducing algorithm (i.e. a candidate physical equation).


are expected to be (Fig. 3-3c).[70] This predictive decoding, then, amounts to imputational rephenomenalization, or in other words, the retranslation of abstract, algorithmically generated numbers into concrete measurement phenomena where the calculated numerics are imputed (i.e. attributed or assigned) to their matching measurement results. In so doing, it has even become common practice to treat these algorithmically generated numerical values as if they were in principle entirely synonymous with the natural system they are supposed to portray. In any case, together, these encoding and decoding encryptions serve to filter out any unwanted irregularities, data-contaminating noise, measurement errors, etc., from the original, raw measurement results (cf. R. Rosen 1991, 59-62; Van Dijk 2016). However, there can be no procedure by which these encodings and decodings can themselves be derived from the data or algorithms.[71] This is in fact directly related to the impossibility of having a watertight procedure for algorithm choice: “ … because there are no neutral criteria for choosing one algorithm over the other (Kuhn 1977), no one algorithm can be considered the ultimate candidate. For instance, since goodness of fit, consistency, broadness of scope, simplicity, [beauty] and fruitfulness may be at odds with each other, any choice for ranking these criteria according to their alleged importance, or for finding an optimal balance between them, can never be objective but is rather based on personal preference, intuition, educated guesses, and the like. So, together with the abovementioned encodings and decodings, all criteria for choosing between competing algorithms are external from the natural system N and the formal system F. Next to these already major externalities, there are several other external criteria, specifications, and decisions that play their part in setting up the relation between data and algorithm. For instance, decisions have to be made concerning a) the frequency of sampling; b) how many entries the data sample should have as a minimum [in order to qualify as a bona fide sample of empirical data]; c) which statistical format should be appropriate in which case; d) which background theories to apply in order to provide a more meaningful context for the in-itself meaningless foreground algorithm.” (Van Dijk 2016)

[70] It must be noted that both measurement encoding and predictive decoding will always require ample interpretation on the part of the experimentalist. As first mentioned by Thomas Kuhn (1969/2012, 123), all measurement interpretation occurs on the basis of the actually applied theory (including all relevant background theories).
[71] After all, this would only call forth the same problem all over again (i.e. which production rules to use for putting together the data-smoothening encoding and decoding encryptions) and thus lead to confusing circularity and/or infinite regress.


All in all, the act of encoding and decoding seems to depend to a great extent on external criteria, specifications, and decisions. Accordingly, as depicted in Fig. 3-3c, any physical equation that is meant to represent a given target system in the above-explained sense is actually the result of implication (from raw empirical data to sharply specified mathematical input value), mathematical inference (which proposes a mathematical relation between initial input value, or initial condition, and its subsequent output values) and imputation (i.e. attribution of the mathematically calculated end result back onto the raw empirical data). Now, when there is sufficient agreement between the raw measurement outcomes and the mathematical calculations, or, to be more precise, when the transitions from initial to follow-up conditions in both the natural system N and the formal system F can be seen to commute (i.e. are considered to be isomorphic within acceptable margins), then it is considered acceptable for physicists to speak of ‘representation’.
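This commutativity requirement can be sketched in a few lines of Python (all function names and numbers are our own illustrative assumptions): going ‘the natural way’ – simply reading off the next raw datum – should agree, within acceptable margins, with going ‘the formal way’ – encoding the current datum, applying the inferential rule of the formal system F, and decoding the result back into a predicted measurement value:

    def encode(raw_datum):
        # measurement encoding: polish a raw reading into a sharp input value
        return round(raw_datum, 2)

    def infer(state, a=2.0, dt=0.5):
        # inferential rule of the formal system F (here: uniform acceleration)
        s, t = state
        return (s + a * t * dt + 0.5 * a * dt ** 2, t + dt)

    def decode(state):
        # predictive decoding: impute the calculated number to a measurement value
        return state[0]

    raw_now, raw_next = 1.003, 2.247   # two consecutive raw observations
    predicted = decode(infer((encode(raw_now), 1.0)))

    # only if both routes commute within margins may one speak of 'representation'
    print(abs(predicted - raw_next) < 0.05)   # prints True

Note that the tolerance of 0.05, like the rounding in encode(), is an external choice imposed on the comparison – exactly the kind of subjective encoding and decoding decision discussed above.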

In line with all this, Robert Rosen (1985, 20 and 75) emphasized that causal linkages between preceding and succeeding system states can indeed only be implied, and not be confirmed as an objective fact of nature. Nonetheless, in mainstream physics the concept of causality has over the years gained a strong aftertaste of objectivity. As will be explained below, however, the concept of processuality is to be preferred over causality:

“In the original version of the Modeling Relation [cf. Fig. 3-3e], the regular pattern in the calculated outcomes of the formal system F is meant to comply with an equally regular causal pattern which is thought to be active among the phenomena associated with the empirical data of natural system N. However, causality – as it dissects nature into a causative and a therefrom ensuing effectuated side – is an exophysical-decompositional concept about nature, rather than an inherent aspect of nature itself. For this reason, Rosen’s original left-hand term ‘causality’ is here replaced by ‘process’ [see Fig. 3-3d to 3-3f], thus stressing the deeper-seated processuality … of nature.” (Van Dijk 2016)

Because the concept of causality is typically identified with the notion of system states and their transition from one to the next, its very formulation must depend on the same external encodings and decodings that are involved in putting together our familiar data-compressing physical equations. Moreover, causality can only be pinpointed by supposing that the theoretically assumed physical activity between two consecutive system states can be


synonymized with the inferred mathematical operation (see Fig. 3-3b to 3-3d). As mentioned above, such a straightforward synonymy is ultimately unwarranted. Even a less rigorous approximate isomorphism between data and algorithm will not be enough for causality to be judged a better alternative than processuality. That is, our concept of causality cannot reasonably be applied to any deeper, subphenomenal level of reality. After all, the exophysical-decompositional approach can only provide us with phenomenologies, i.e., results based on sense data and the readings of measurement instruments, rather than on the process of nature itself. Causality can thus deal only with patterns of relationship that fall within the measurement range of the observational system in use. All other, less prominent and finer-grained background activity and noisy external influences will necessarily fall outside the scope of investigation and will thus typically be labelled not only causally negligible, but also scientifically irrelevant. But because causality is obviously a less universal concept, it is here (in Fig. 3-3d to 3-3f) replaced by process – this not so much to dismiss the concept of causality, but rather to emphasize that processuality subsumes causality. After having added this nuance to the concept of causality, let’s see how the transition between system states can be presented in a more straightforward way. For the sake of simplicity, the second time slices, system states, or number pairs – i.e. the phenomenal and mathematical results in the lower boxes of Figs. 3-3b to 3-3d – may be placed on top of the one with the initial conditions. Like this, we end up with something that comes very close to Robert Rosen’s original Modeling Relation (see Fig. 3-3e). And what is now particularly interesting is that both the mathematization-enabling measurement encoding and the rephenomenalization-enabling predictive decoding cannot be derived from the data or the algorithm itself. So, analogously to the external present moment indicator in Section 2.1.3, encoding and decoding are in fact external manipulations; although they give shape to the supposedly ‘hard and exact’ physical equations, they ultimately rely on subjective choice. And furthermore, we can see the appearance of a geometrical timeline (Fig. 3-3f) as a result of the subjective choice of abstracting from the process of nature in terms of phenomenology-based consecutive states.


3.2.3 From information acquisition to info-computationalism

According to Rothstein’s analogy (Fig. 3-2), the numerical data that inform us about the natural world are harvested from target systems by means of a linear chain of information-exchanging and data-processing modules. In this way, data is shuttled from target system (i.e. the source process) to subject system where it is being processed – analogously to a computer that processes incoming information. Like this, the natural target system is basically treated as if it were merely an information-emitting black box, while, on the other hand, the end-observer’s mind-brain is ultimately thought to be nothing but an entirely matter-based, biological computer (Block 1995) – indeed, a naturally evolved one, but a computer nonetheless, with the brain as the hardware and the mental thought patterns being no more than neural signaling. With the rise of the electronic digital computer – the main principles of which were provided amongst others by Shannon’s information theory and Turing’s computational logic (Shannon and Weaver 1949; Turing and Ince 1992) – it has become quite a popular idea to look at nature from an info-computational point of view. That is, as the computer began to play a more and more prominent role within society, an entire belief system emerged in which the universe was thought to work as a giant computational system (cf. Lloyd 2010) in which natural systems behaved just like mere algorithm-executing conversion modules – turning inputs into outputs. Accordingly, in what is probably the most popular version of the computational theory of mind, the brain is considered to be a biological information-processing computer, while the mind is seen as the software that this bio-computer is running (Block 1995). In this way, the mind-brain may be interpreted in a device-like fashion, as electrochemically functioning circuitry by means of which sensory stimuli can be encoded into nerve pulses, and then channeled through the network infrastructure, thus enabling transmission, storage, and output of neural signals – just as in a computer: “[W]ith the development of computer technology and artificial intelligence, such cognitive processes as memory and perception were analyzed into specific functions performed by specialized processors, each of which received a certain input, performed a specific operation upon it, and transmitted a certain output. All this led to a picture of the brain as an elaborate biological computer, the ‘modular mind’ of Fodor (1983), a system of neural processors


shuttling raw sensory data around to make them into coherent pictures, much as a computer takes millions of binary bits to make texts and pictures.” (Pąchalska et al. 2007)

However, this info-computational view still leaves much unexplained. For instance, although sensory information typically makes a meaningful difference to the inner life of conscious organisms in the biological world, incoming digital data have no meaning whatsoever to information-processing computers. Also, this info-computational view suggests that the signals of neurons can be turned into mental imagery in roughly the same way as binary data can be used to make up the graphical code of a digital image file (such as a JPG-, BMP- or GIF-file) that can be shown on a computer screen. But contrary to a computer screen, which typically has a conscious user sitting behind it, the brain cannot be thought of as having within it such a dedicated user who is capable of observing, interpreting, and acting upon the data and pictures that are thus presented. After all, comparable to the earlier problem of the meta-observer (Section 2.3), we could ask how such an inbuilt center of subjectivity should work and then find that yet another homunculus-like center of subjectivity would have to be invoked in order to answer this question (for more neuroscientific context, see also: Edelman and Tononi 2000, 127 and 220-222). So, from this we can conclude that the brain is not some kind of cinematic theater in which lifelike scenes are shown to a first-person center of subjectivity capable of sensorimotor control. Nor is the brain a mere black box for converting inputs into outputs; it’s not a CPU-like center of subjectivity whose job it is to turn incoming stimuli into outgoing responses; nor should it be seen as a transmitter-receiver unit equipped specifically to pick up and pass on in- and outbound signal traffic. Rather, as we will see later on, from Sections 4.2.4 to 4.3.1, the mind-brain is a seamlessly embedded member of the ultimately indivisible organism-world system in which a highly complex culmination of mutually informative activity patterns facilitates the emergence of higher-order conscious experience. Hence, the mind-brain is certainly not a pre-wired data-processing switchbox in the sense of classical information theory. Unlike a CPU or switchbox, the mind-brain has a neuroplastic organization and cannot retain its function without its signal traffic or in isolation from its embedding ‘mother system’ which is the organism-world system as a whole (see Sections 4.2 to 4.3.1 for further details). So, even though it explicitly meant to avoid it, the info-computational approach, by setting up a computation-based monism, basically gives rise to a tacit form of dualism. Although it emphatically denies that there is a separate mental process in the brain, the


info-computational approach suffers from a problem similar to that of the Newtonian paradigm with which it is associated; namely, the problem of a potentially infinite regress of meta-observers. In fact, by imposing the linear hierarchy of Shannon’s Classical Information Theory onto the operation of the mind-brain, info-computationalism is adopting a modular-brain point of view that inevitably leads to such problematic complications: “ … the exclusive attention to specific subsystems of the mind-brain often causes a sort of theoretical myopia that prevents theorists from seeing that their models still presuppose that somewhere, conveniently hidden in the obscure ‘center’ of the mind-brain, there is a Cartesian Theatre, a place where ‘it all comes together’ and consciousness happens.” (Dennett 1991, 39)

So, when setting up a universe of discourse according to Rothstein’s information-theoretic analogy, any signal going from source to destination – from target to subject side – would always require an already conscious end-observer to interpret it. In other words, in order to put into effect its core hypothesis that consciousness is mere neuron-based information computation, info-computationalism has to tacitly presuppose consciousness in the first place. After all, in an info-computational universe of discourse, the communicated signals are by themselves utterly vacuous and completely meaningless to the signal-conveying components that reside on the subject side. Therefore, they must be made sense of through the conscious inner life of a data-interpreting end-observer in order to reach the status of a meaningful message. However, any confirmation of an ‘actual’ center of subjectivity within the brain of this observer will inevitably call forth the same problem all over again, requiring yet another such ‘center of subjectivity’ where the incoming information can finally become conscious (or so the promise goes), thus triggering a confusing infinite regress of meta-observers[72] (Von Neumann 1955, 352). The point is basically that the info-computational view can give us no insight into what it is for information to become conscious. It can give us no clue whatsoever as to what it feels like to have access to particular sense data. It cannot ever tell us what brings about the redness of red, the painfulness of pain, the silky feel of silk, or the sweetness of sweets. For all that info-computationalism could care, these subjective aspects are not even there to begin with, and if they were, they would be quite irrelevant or even illusory side effects of an organism’s processing of incoming sense data. But despite the suggestion of today’s ICT-, AI-

This infinite regress of meta-observers is analogous to the homunculus problem in cognitive science (cf. Edelman and Tononi 2000, 94 and 127).

86

and computer science communities that the subjective aspects of sense data are at the end of the day just fluky, redundant and pointless epiphenomena, they are actually an indispensable fact in our personal daily lives.
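To see why such signals cannot select their own meaning, consider a toy illustration of my own (not taken from the cited literature): one and the same byte string supports several incompatible ‘readings’, and nothing in the bytes themselves picks one out – that choice belongs to an already sense-making interpreter.

```python
# Toy illustration (not from the cited literature): one and the same byte
# string admits several incompatible 'readings'; the bytes themselves do
# not select among them -- an interpreting user has to do that.
data = bytes([72, 105, 33, 255])

as_text   = data[:3].decode("ascii")      # 'Hi!'        -- when read as ASCII text
as_pixels = [b / 255 for b in data]       # gray values  -- when read as an image row
as_number = int.from_bytes(data, "big")   # 1214849535   -- when read as one big-endian integer

print(as_text, as_pixels, as_number)
# Shannon-style information theory can quantify and transmit 'data' like
# this, but which reading counts as the message is settled only by an
# already sense-making end-observer.
```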

3.2.4 Information, quantum and psycho-physical parallelism

The info-computational view of how knowledge acquisition works, or, in other words, of how a conscious observer gets to make sense of nature, hinges on the oversimplified and therefore misleading concept of an information-theoretical universe of discourse. By sheer definition, a universe of discourse – irrespective of a classical, quantum, or relativistic context – brings along a split between ‘objective’ source and ‘subjective’ recipient of information – i.e. it separates the target from the subject side. The scientifically most fundamental example of such a nature-dissecting cut can be found in quantum physics, where it appears to produce the subjectivity-dependent ‘wave function collapse’73 – a phenomenon that still doesn’t have a commonly agreed-upon interpretation. In an attempt to wrap his head around what he thought to be an apparent (and not an actual) causal relation between conscious observation and ‘wave function collapse,’ John von Neumann proposed that there be a psycho-physical parallelism (1955, 418-421). This psycho-physical parallelism, introduced by the German physicist, philosopher and psychologist Gustav Theodor Fechner (1860), entails that events taking place in the physical world will always occur in tandem with the psychological contents of the mind. As such, it rejects the possibility of interaction between body and mind. Instead, it permits only a functional correlation to be present between the world of physical events and the world of subjectivity. According to psycho-physical parallelism, then, a mental state has a precise one-to-one correlation with a brain state (and, by the same token, with a therewith associated physiological state of the body, and a world state of the physical world).

73

Wave function collapse: the coming into actuality of one specific measurement outcome although the system-to-be-measured is thought to exist in a superposition of equally probable quantum states prior to the conscious measuring act. In absence of conscious observation the quantum states are believed to exist all-together-at-once, this in analogy to the different time slices in Minkowski’s block universe that are thought to exist all-together-at-once as well, thus leading to 1) the timeless view of the universe, and 2) the arguable claim that our experience of time is completely imaginary. This is why adherents of the block universe interpretation and relativity-inspired interpretations of quantum physics like to dismiss consciousness as irrelevant and illusory (cf. Smolin 2013, 59-64 and 80).


Fig. 3-4: Universe of discourse with von Neumann’s object-subject boundary (commonplace conception of measurement)

Von Neumann reasoned that psycho-physical parallelism would be applicable as follows: “First, it is inherently entirely correct that the measurement or the related process of the subjective perception is a new entity relative to the physical environment and is not reducible to the latter. Indeed, subjective perception leads us into the intellectual inner life of the individual, which is extra-observational by its very nature (since it must be taken for granted by any conceivable observation or experiment). Nevertheless, it is a fundamental requirement of the scientific viewpoint – the so-called principle of the psycho-physical parallelism – that it must be possible so to describe the extra-physical process of the subjective perception as if it were in reality in the physical world – i.e., to assign to its parts equivalent physical processes in the objective environment, in ordinary space. (Of course, in this correlating procedure there arises the frequent necessity of localizing some of these processes at points which lie within the portion of space occupied by our own bodies. But this does not alter the fact of their belonging to the “world about us,” the objective environment referred to above.) In a simple example, these concepts might be applied about as follows: We wish to measure a temperature [cf. Fig. 3-4a]. If we want, we can pursue this process numerically until we have the temperature of the environment of the mercury container of the thermometer, and then say: this temperature is measured by the thermometer [cf. Fig. 3-4b]. But we can carry the calculation further, and from the properties of the mercury, which can be explained in kinetic and molecular terms, we can calculate its heating, expansion, and the resultant length of the mercury column, and then say: this length is seen by the observer [cf. Fig. 3-4c]. Going still further, and taking the light source into consideration, we could find out the reflection of the light quanta on the opaque mercury column, and the path of the remaining light quanta into the eye of the observer, their refraction in the eye lens, and the formation of an image on the retina, and then we would say: this image is registered by the retina of the observer [cf. Fig. 3-4d]. And were our physiological knowledge more precise than it is today, we could go still further, tracing the chemical reactions which produce the impression of this image on the retina, in the optic nerve tract and in the brain, and then in the end say: these chemical changes of his brain cells are perceived by the observer. But in any case, no matter how far we calculate – to the mercury vessel, to the scale of the thermometer, to the retina, or into the brain, at some time we must say: and this is perceived by the observer [cf. Fig. 3-4e]. That is, we must always divide the world into two parts, the one being the observed system, the other the observer. In the former, we can follow up all physical processes (in principle at least) arbitrarily precisely. In the latter, this is meaningless. The boundary between the two is arbitrary to a very large extent. In particular we saw in the four different possibilities in the example above, that the observer in this sense needs not to become identified with the body of the actual observer: In one instance in the above example, we included even the thermometer in it, while in another instance, even the eyes and optic nerve tract were not included. That this boundary can be pushed arbitrarily deeply into the interior of the body of the actual observer is the content of the principle of the psycho-physical parallelism – but this does not change the fact that in each method of description the boundary must be put somewhere, if the method is not to proceed vacuously, i.e., if a comparison with experiment is to be possible. Indeed experience only makes statements of this type: an observer has made a certain (subjective) observation; and never any like this: a physical quantity has a certain value.” (Von Neumann 1955, 418-421)


In the proceedings of a 1938 conference in Warsaw (Poland), named New Theories in Physics, Von Neumann’s view on the object-subject boundary and psycho-physical parallelism was summarized by the editors. They gave the following rendition of his response to Bohr’s presentation on “The Causality Problem in Atomic Physics”:

“Professor von Neumann thought that there must always be an observer somewhere in a system: it was therefore necessary to establish a limit between the observed and the observer. But it was by no means necessary that this limit should coincide with the geometrical limits of the physical body of the individual who observes. We could quite well ‘contract’ the observer or ‘expand’ him: we could include all that passed within the eye of the observer in the ‘observed’ part of the system — which is described in a quantum manner. Then the ‘observer’ would begin behind the retina. Or we could include part of the apparatus which we used in the physical observation — a microscope for instance — in the ‘observer’. The principle of ‘psycho-physical parallelism’ expresses this exactly: that this limit may be displaced, in principle at least, as much as we wish inside the physical body of the individual who observes. There is thus no part of the system which is essentially the observer, but in order to formulate quantum theory, an observer must always be placed somewhere.” (Białobrzeski et al. 1939, 44)

During the era in which quantum mechanics was still in its infancy, the majority of physicists believed – as most of them still do – that the physical world is a causally closed realm. Although it was apparently still acceptable for information from the physical world to somehow appear in the subjective stream of the conscious observer (cf. von Neumann 1955, 418-421; Stapp 2007, 15), to materialistic physicists it was inconceivable that non-physical subjective thought should be able to have a causal effect in the physical world. This is why the idea arose that the world of subjective observation had to be thought of as extraphysical and without a causal linkage to events in the physical world. It was thus quite bothersome that the so-called ‘wave function collapse’ seemed to depend crucially on the becoming conscious of a measurement act, whereas, prior to an observer becoming aware of a measurement outcome, all possible quantum states were thought to exist all-together-at-once – in superposition with each other. This apparent(?) causal dependency of physical quantum states on conscious observation was quite a perplexing mystery to the physics community of the time. Therefore, von Neumann – after having picked up the idea from an earlier paper by Niels Bohr (1929) – decided to embrace psycho-physical parallelism as a suitable alternative for this, to physicists, unacceptable mental causation. After von Neumann had discussed the matter with Bohr, both of them came to believe that the parallelism between physical and mental world could be due to the principle of complementarity. As Bohr put it: “Complementarity: any given application of classical concepts precludes the simultaneous use of other classical concepts which in a different connection are equally necessary for the elucidation of the phenomena.” (Bohr 1934, 10)

The link between psycho-physical parallelism and Bohr’s complementarity principle may thus be imagined as follows: matter and mind, when thought of as aspects of nature whose details cannot be simultaneously studied, can perhaps be mutually exclusive during observation – just as the wave and particle aspects of light. As Werner Heisenberg (1958) mentioned, “we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.” So, accordingly, just as we observe waves when we probe a quantum system in one way and particles when we probe it in another (cf. Gribbin 1995, 186), von Neumann and Bohr supposed that matter and mind were two aspects of the natural world that, due to their complementarity, required different methods of assessment to come to the fore. Although their opinions seemed to meet with regard to the issue of complementarity, there was still a considerable difference in their approaches to the quantum measurement process. In Bohr’s view, quantum mechanics and classical physics can only give complementary coordinations of our experiences of quantum events and the observational system that measures those events: “[For Bohr, the] wholly incompatible conceptions of subject and object in measurement do not constitute incompatible characterizations of the ‘real things’ measuring and measured ... they are instead to be considered only as complementary coordinations of our experience of these things. ... Quantum and classical mechanics are thus relegated to the level of merely epistemically significant complementary coordinations of experience, and as such their incompatibility becomes unimportant.” (Epperson 2004, 37)

Accordingly, the quantum events and the measuring equipment should, in Bohr’s eyes, be described in a quantum mechanical and a classical way, respectively. In this view, nature is, at the end of the day, the ur-source of natural facts, whereas our empirical experience can only provide us with knowledge about those facts. As such, nature-in-itself is, according to Bohr, ultimately unknowable (Epperson 2004, 37). In Michael Epperson’s reading of von Neumann’s view, however, measurement responses are just as much part of nature’s facts as the events or actualities that bring about these responses. In fact, in von Neumann’s scheme, what is being measured (i.e. the target system S) and what is doing the measurement (i.e. the measuring system M) will together form a chain of ‘necessarily interrelated facts’ – the so-called ‘von Neumann chain’ in which S is measured by M (apparatus), then (S+M) can be measured by M’ (eye), after which (S+M+M’) can be measured by M’’ (visual cortex), and so on: “Early pioneers in the development of quantum mechanics like Niels Bohr (1958) assumed ... that the measurement devices behave according to the laws of classical mechanics, but von Neumann pointed out, quite correctly, that such devices also must satisfy the principles of quantum mechanics. Hence, the wavefunction describing this device becomes entangled with the wavefunction of the object that is being measured, and the superposition of these entangled wavefunctions continues to evolve in accordance with the equations of quantum mechanics. This analysis leads to the notorious von Neumann chain, where the measuring devices are left forever in an indefinite superposition of quantum states. It is postulated that this chain can be broken, ultimately, only by the mind of a conscious observer.” (Nauenberg 2011)
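Schematically – in modern textbook notation, which is an addition of mine rather than anything taken from the sources quoted above – the chain unfolds as a sequence of entanglements in which the superposition is never resolved:

$$\big(\alpha|s_1\rangle + \beta|s_2\rangle\big)\,|m_0\rangle\,|m'_0\rangle \;\longrightarrow\; \big(\alpha|s_1\rangle|m_1\rangle + \beta|s_2\rangle|m_2\rangle\big)\,|m'_0\rangle \;\longrightarrow\; \alpha|s_1\rangle|m_1\rangle|m'_1\rangle + \beta|s_2\rangle|m_2\rangle|m'_2\rangle$$

Here the apparatus states $|m_{1,2}\rangle$ and the eye states $|m'_{1,2}\rangle$ merely join the superposition of the target states $|s_{1,2}\rangle$; unitary evolution alone never singles out one branch, which is exactly why the chain was said to be breakable only at the observing subject.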

In order to make his scheme of psycho-physical parallelism work, von Neumann supposed that the higher brain centers were directly associated with consciousness. The main reason for this belief is that – as can be learned from first-person introspective experience – the ‘abstract ego’74 does not seem to allow the simultaneous co-existence of mental states. In other words, contrary to quantum states, mental states do not seem to exist in simultaneous superposition with one another.

74

Von Neumann used the term ‘abstract ego’ as a name for the ‘immaterial intellectual inner life’, the ‘conscious mind’, or the ‘center of subjectivity’.


3.2.5 From psycho-physical parallelism to measurement as a semiosic process

A good case can be made, however, that psycho-physical parallelism is ultimately just a pseudo-explanation for wave function collapse. That is, the formulation of von Neumann’s psycho-physical parallelism may well be possible only by committing what William James (reference needed) called the psychologist’s fallacy. The psychologist’s fallacy has been expressed in many ways, but for present purposes Anderson Weekes’ version is quite convenient: “ … the ‘psychologist’s fallacy’, which is to find in introspection only the objects that appear to thought rather than the whole of the thought to which objects appear … ” (Weekes 2006, 230).

When looking at nature from the perspective of psycho-physical parallelism, von Neumann seems to pay attention only to the foreground details (i.e. the ‘facts’ of measurement that appear in object-like fashion to one’s subjective ‘abstract ego’), while taking mostly for granted how the underlying process of subjective perception works to bring these ‘facts’ into conscious actuality. In doing so, von Neumann has to presume that these facts come into conscious actuality, but does not provide any account of exactly how – let alone why – this should occur. To put this more in the context of quantum information and info-computationalism: the ‘clicking’ or ‘non-clicking’ of a photocounter – usually a photodiode-based detector capable of producing an audible ‘click’ on detection of a photon – can be postulated to represent one bit of information. According to John Archibald Wheeler, this makes up a raw fact: “With polarizer over the distant source and analyzer of polarization over the photodetector, we ask the yes or no question, ‘Did the counter register a click during the specified second?’ If yes, we often say, ‘A photon did it.’ We know perfectly well that the photon existed neither before the emission nor after the detection. However, we also have to recognize that any talk of the photon ‘existing’ during the intermediate period is only a blown-up version of the raw fact, a count. The yes or no that is recorded constitutes an unsplitable bit of information.” (Wheeler 1989/1999, 311)


Since Claude Shannon had not yet put out his standard work on classical information theory, von Neumann probably didn’t adhere to such an explicit information-theoretical conception of measurement as Wheeler’s. However, his application of the object-subject split automatically gives rise to a universe of discourse that, in hindsight, has all the hallmarks of information theory. After all, the assumption of an extraphysical ‘abstract ego’ (or an ‘immaterial intellectual inner life’) seems to leave no other alternative than to suppose that the appearance of a Wheelerian ‘fact’ in one’s stream of consciousness is like the intake of information by the observer’s ‘center of subjectivity.’ The psychologist’s fallacy, then, is the result of presupposing – as von Neumann did – that consciousness involves an extraphysical center of subjectivity, rather than a within-nature, participatory process through which organisms gradually get to make sense of the natural world in which they live. That is, we, conscious living beings, should be seen as seamlessly embedded and radically participatory inhabitants of the same natural world we’re trying to make sense of. We become acquainted with nature by living through it, not by passively staring at it from an extraphysical viewpoint. We acquire knowledge about nature not by passively taking in information while residing on the subject side of a universe of discourse, but by living through an ongoing, radically participatory cyclic process of semiosis (Van Dijk 2016) through which experienced foreground signals, signal-designating symbol system, and symbol-interpreting self emerge as a triadic unity (see Sections 4.2.3 to 4.3.1 for further details). How conscious self and conscious world – thinker and thought – come into actuality as two aspects of the same process will be discussed in Sections 4.2.3 to 4.5. For now, however, we’ll briefly touch on how doing physics does not revolve around an information-theoretically inspired universe of discourse with a boxed-in target system and a subject side separated therefrom, but that doing ‘physics as we know it’ should instead be thought of as a semiosic process of formalization, preparation, and observation – the respective equivalents of symbol, referent, and symbol-interpreting user in semiotics. As such, as will become clear later on, our way of ‘doing physics in a box’ is basically a technological-mathematical extension of the semiosic process through which the emergence of conscious experience comes about.


Fig. 3-5: From background semiosic cycle of preparation, observation and formalization to foreground data and algorithm

After an empirically adequate physical equation has been found to commute (by going through Rosen’s modelling relation, cf. Section 3.2.2), and measurement practice is thought to have reached maturity, all support processes that were needed to give rise to the thus achieved level of sophistication are more or less made to leave the center stage. As already hinted at in Section 3.1.2, physicists who are using physical equations to track the temporal evolution of a natural system can often be likened to a theatre audience that forgets entirely about the preparatory activities of the stage building crew, casting agency, producer, director and scenario writer, each of whom is responsible for other background aspects of the theatre play: “Once … an empirically faithful algorithm has indeed been found, it is typically thought to reliably keep track of the target system’s behavior – if not in a direct chronological sense, then at least statistically. In any domain of experimentation, as soon as measurement practice reaches maturity the research interest starts shifting towards the aspired agreement between empirical data and algorithm (cf. Van Fraassen 2008, 138-139). Simultaneously, attention typically drifts away from the very process of measurement interaction through which this empirical agreement could be achieved in the first place. Together with all other preceding processes of system delineation, measurement refinement, data processing, algorithm selection, etc., it is basically evacuated to the backstage.” (Van Dijk 2016)

In quantum physics, the preparation process primarily pertains to how a ‘quantum particle’ is ‘soaked loose’ from its embedding environment. If it weren’t for this preparatory process and subsequent observation, a ‘quantum particle’ would lead its existence while remaining ‘submerged’ within the otherwise undivided process of nature-as-a-whole which is, at the quantum level, a giant cosmic sea of vacuum fluctuations: “What we usually call ‘particles’ are relatively stable and conserved excitations on top of this vacuum. Such particles will be registered at the large-scale level, where all apparatus is sensitive only to those features of the field that will last a long time, but not to those features that fluctuate rapidly. Thus the ‘vacuum’ will produce no visible effects at the large-scale level, since its fields will cancel themselves out on the average, and space will be effectively ‘empty’ for every large-scale process (e.g. as a perfect crystal lattice is effectively ‘empty’ for an electron in its lowest band, even though the space is full of atoms).” (Bohm 1980, 111)

Cyclotrons, laser guns, and all kinds of other pieces of preparatory equipment can be used to individuate a single quantum particle or photon out of what usually exists only as a collective whole of many such ‘particles’: “In the laboratory we see measurement arrangements that, although intended to perform measurements in the microscopic domain of quantum mechanics, are yet composed of macroscopic (although often very small) components. In these measurement arrangements one can often discern two fundamentally different parts, viz., the part having as an objective to prepare microscopic objects (like an electron emission grid, cyclotron, laser, etc.), and the part intended to register some phenomenon that can be interpreted as measurement result (like a photo diode, bubble chamber, spark chamber, etc.). The first part will be referred to as the preparing apparatus, the second one as the measuring instrument. The measuring instrument has as an essential part a macroscopic pointer ranging over a measurement scale from which the individual measurement result 𝑚 can be read off.” (De Muynck 2002, 74-75)


In classical physics, on the other hand, the process of preparation can be interpreted as the singling out, by the observer’s nature-dissecting gaze, of some interesting ‘physical’ aspect of nature, so that a target system can be put together based on this act of individuation (cf. van Dijk 2016, n7). Of course, the entire foregoing history of system delineation, measurement refinement, data processing, algorithm selection, pre-theoretical statistical analyses, and so on, can all be considered to be part of the background processes that enable the presentation of the foreground items, namely: the well-refined empirical data and their data-reproducing algorithms. These data-reproducing foreground algorithms can only come to the fore and reach the status of laws of nature through the intense level of cooperation of all participating background processes: “Collectively, these intimately entangled and mutually overlapping background processes can be grouped into three functionally distinct subprocesses, namely the preparation process, the observation process, and the formalization process – all embedded within the same meaning-providing context of use (Fig. 3-5). As such, they work together to form a trilateral universe of discourse that closely resembles the so-termed triadic relation in semiotic information theory … (cf. Zeman 1977; Nöth 1995, 85; Fiske 2002, 41-43). In semiotic information theory meaning gets to be established within a triangular relationship between a sign, its referent, and a sign-interpreting user,75 and as shown in Fig. 3-5, the three of them can be positively identified with the formalization process, the preparation process, and the observation process, respectively.

“In the thus formed semiotic triunity, the process of empirical science is given shape and meaning by passing through the different stages of preparation, observation and formalization. Here, the preparation process delineates areas of interest (target systems) from which informative signals can be extracted.76 The observation process then deals with the intake of informative signals and converts them into empirical data. Finally, the cycle is closed by the formalization process which imports these empirical data and compresses them into concise algorithms, whose calculated results are then hypothesized to reflect the target system’s informative signals. After this, the cycle can then be repeated to dig deeper into the targeted process of interest and to attain more empirical data and/or better goodness of fit between data and algorithm.” (van Dijk 2016)

75

According to semiotics, semantic as well as pragmatic information can thus be added to in-themselves meaningless data-signifying syntactical symbols. For reasons of simplicity, the possible difference between sign (a.k.a. sign function or signhood) and sign vehicle (a.k.a. token or signifier) is ignored here. Instead, the terms meaning and sign are used to denote the use of a certain token within a triadic sign relation. See (Nöth 1995, 79) for more details on the possible differences between sign and sign vehicle.

76

The preparation process may pertain to different activities in different theoretical contexts. In quantum physics, it primarily denotes the process through which a ‘quantum particle’ is ‘soaked loose’ from its embedding environment so that it can be submitted to observation further on down the line (cf. De Muynck 2002, 74-75, 83, 90-91, 94), and in classical physics it refers merely to the process through which some interesting ‘physical’ aspect of nature is ‘individuated’ into a target system.
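To make the cyclic structure of the quoted passage easier to survey, it can be caricatured as an iterated loop. The following schematic sketch is my own construction, not van Dijk’s formalism; the stand-in ‘target’ and the sample-mean ‘algorithm’ are deliberately trivial placeholders:

```python
import random

# Schematic sketch (my construction, not van Dijk's own formalism) of the
# preparation-observation-formalization cycle as an iterated loop.

def prepare():
    # Preparation: delineate a target system out of the ongoing process
    # (here: a stand-in signal source with a hidden value of 1.0).
    return lambda: 1.0

def observe(target, n=20):
    # Observation: extract the target's informative signals as noisy empirical data.
    return [target() + random.gauss(0.0, 0.1) for _ in range(n)]

def formalize(data):
    # Formalization: compress the data into a concise, data-reproducing
    # 'algorithm' (here: just the sample mean, a one-parameter model).
    return sum(data) / len(data)

empirical_record = []
for _ in range(5):                      # repeating the cycle improves goodness of fit
    empirical_record += observe(prepare())
    model = formalize(empirical_record)
print(round(model, 3))                  # only this foreground result remains on stage
```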

As will be explained further on (from Section 4.2.3 to 4.3.1), this semiosic process of pre-measurement target individuation, instrument-assisted observation, and algorithm selection can be seen as an extension of the conscious individuation, observation, and symbolization as performed by sentient organisms as they live through their ‘sensation-valuation-motor activation-world manipulation’ cycles, thus giving rise to the brain-mediated processes of perceptual categorization and concept formation. Although semiotics can be regarded as a part of information theory, it does not work along the lines of our classical information theory in which information acquisition takes place by one-way data traffic. Its mode of operation is based on habit-forming cyclic loops in which meaning gets to be established from within, rather than having to be bestowed from the outside as is required in classical information theory. Because of this it can be more easily brought into connection with non-representationalist theories of consciousness – especially Gerald Edelman’s theory of neuronal group selection. In Edelman’s view of consciousness (see Sections 4.3 and 4.3.1 for further details), it is the massive mutual informativeness among activity patterns within the thalamocortical region of the mind-brain that facilitates the emergence of our conscious experience. Because of this active, mutual informativeness – facilitated by a high level of recursive, participatory signaling – information does not need to be conveyed in the familiar way of classical information theory, computer technology and info-computationalism. Rather, organismically meaningful information is established by going through the mutually informative cycles that involve not only the activity patterns within the thalamocortical region, but the entire whole of perception-action cycles in which the organism is engaged. Like this, no real object-subject boundary can be drawn. Instead, I suggest it is better to think of subjectivity as a system property of nature that does not at all fit into the dualistic transmitter-receiver framework of classical information theory. As will be made more plausible in Sections 4.2.3 to 4.3.1, subjectivity is what is already tacitly present in the cyclic processuality of nature, but what can gradually intensify as organisms go through their multimodal, value-modulated perception-action loops.


So, instead of interpreting target and subject systems in terms of our conventional information theories, we should realize that the target process – whether it be a conventional natural system or even the ‘process of subjectivity’ associated with the observer’s own conscious self – can only become known in terms of the conscious information (i.e. percepts) that comes into actuality during this process of mutual informativeness. All in all, it can thus be concluded that the total arbitrariness in the choice of where to situate the epistemic cut between target and subject side is a major hint that something is seriously wrong with the conventional information-theoretical picture. At the very least, to understand how information works, we should replace the linear module-based hierarchy of conventional information theory with the well-nested processual holarchy of (bio)semiotic information theory. In semiotics, the coming into actuality of experiencing self and experienced world goes hand in hand with concept formation and symbolization. As will become apparent in the next chapter, the “sculpting process” of such biologically meaningful concepts and symbols occurs by way of adaptive, value-steered perception-action cycles.

3.3 From doing physics in a box to doing physics without a box

Throughout this present chapter, we found that the problems of ‘doing physics in a box’ were numerous and fundamental. It turns out that there are even more difficulties than Lee Smolin noted down in his 2013 book Time Reborn (e.g. the cosmological fallacy and the reasoning away of the passage of time) and it seems that the majority of these difficulties have something to do with the reduction of the live, conscious observer to an externalized, abstract and insentient intake unit of pre-coded data – one that readily meets the selection criteria of info-computationalism. It is because of the uncritical and premature adoption of the point-observer and its relatives – supposedly equipped with a purely info-computational inner-center of information intake – that the actual living experience on which empirical science itself is ultimately based could have been swept overboard. We have apparently become so used to the idealizing simplification of point-observers and the like, that we seem to totally forget about their downsides. For instance, we all too easily take it for granted that we, conscious observers, have to imagine ourselves as somehow outside of what is so often thought of as our entirely physical real-world-out-there. On top of that, just to be able to talk about such an external world at all (cf. Quine 1969, 1) it has to be cut into bite-size bits and pieces by our nature-dissecting gaze. From this, then, it is only a small step to start thinking of the natural world as if it truly existed as a mere collection of such ‘bits and pieces in external interaction.’ However, when indeed taking this step, we entirely forget that the involved bits and pieces (i.e. material objects, molecules, atoms, quarks, strings, and the like) are ultimately just idealizing figures of speech, capable of fitting quite nicely with phenomenal reality, but by no means giving us something like the ‘true look’ of nature-in-itself. Confusing our linguistic labels, such as atoms, electrons, quarks, and so on, with what goes on in the process of nature itself would amount to what Whitehead called the fallacy of misplaced concreteness – mistaking the abstract for the concrete, with all due undesirable consequences.

So, although the exophysical-decompositional methodology has served us so well over the last few hundred years, it still seems to be plagued by all kinds of fundamental difficulties. Because of our historically grown attachment to this well-tried and tested fractionating mode of understanding, it seems to be a worthwhile idea to see if we can get rid of its negative aspects while keeping its positive sides intact. What should escape unscathed from such a ‘cleansing attempt’ would definitely be its ability to break down the raw, tumultuous processuality of nature into understandable bits and pieces so that nature can be talked about, rather than just being looked at in mute awe. Judging from the halftime score of our present investigation, there seems to be more than enough reason to try to go beyond the exophysical-decompositional paradigm and replace it with something else. Due to its many past as well as recent successes, it does however seem a tall order to put the entire exophysical-decompositional approach outside with the trash. We’ve benefitted way too much from its achievements, after all, to take such a radical step. So, instead, what seems to be needed is that we try not to ditch the exophysical-decompositional approach altogether, but rather to supplement it with a nonexophysical-nondecompositional method – a method that does not replace it, but makes up for its weaknesses. In other words, when trying to make sense of the indivisible process which is ‘nature-in-the-raw’, we need to arm ourselves with ‘binocular vision’. That is, just to be able to talk about nature, we may indeed be forced to decompose it into smaller parts (cf. Quine 1969, 1), but when trying to grasp nature as one indivisible process, we should take into account its undivided and interconnective wholeness as well.

Up to now the preferred way to go about this has mainly been to add a holistic interpretation to the exophysical-decompositional formalisms of our physical sciences – quite literally as an aftermath measure. For instance, Niels Bohr’s ‘quantum wholeness’ – i.e. his interpretation that wave function collapse hints at the fundamental inseparability of quantum system and observer77 – is a case in point. Another example is David Bohm’s holistic process metaphysics which, despite its admirable and impressive effort to give an impression of the deeper processuality of nature, is still very much organized around the quantum formalism beyond which it is attempting to go. Both can be considered ‘aftermath interpretations’ in the sense that they followed after the formulation of the quantum wave function formalism. After Galileo, however, it has become common practice in physics to think of mathematics as the primary language of nature; any framework of interpretation is typically regarded as secondary, i.e. as less fundamental than the mathematical formalism that it aims to address. Because of this, then, we should not expect mere interpretation to provide us with a definite answer to the problems of physics in a box – especially since there are so many competing, but mathematically equivalent interpretations. To get rid of the aforementioned problems of the exophysical-decompositional paradigm, therefore, it seems that we need more than just such ‘aftermath re-interpretation’ to get the job done.

In fact, what our current physics seems to call for is a ‘significant other’ to stand by its side. What seems to be needed is a nonexophysical-nondecompositional physics – a way of doing physics without a box – that can serve as the ‘better half’ for its exophysical-decompositional counterpart of doing physics in a box. We need the best of both worlds to form a more comprehensive, binocular physics which should then be capable of opening up previously unseen vistas. As a prime characteristic, such a ‘binocular physics’ should not leave it absurd that we exist (Deacon 2012, cover text). In other words, our physics should not leave unexplained that we – physics-abiding conscious organisms – are actually inseparably part of the same natural world that it is trying to make sense of (cf. Planck 1932, 217). We, within-nature conscious organisms who actually developed physics as a tool to help us figure out what nature is all about, are in fact inseparably part of this tool’s target of investigation: the process of nature. Therefore, physics should take into account not only how nature works, but also how our experience of nature works (cf. Wolfram 2002, 547). Furthermore, honoring Whitehead, we should aspire to a physics that doesn’t reduce ‘nature alive’ to ‘lifeless nature’ (Desmet 2013, 87), or, what’s basically the same, a physics that doesn’t ‘objectify’ nature. That is, our physics shouldn’t rid its observers of their subjectivity, because that basically amounts to treating live conscious observers as being equivalent to lifeless and insentient objects – a point-observer being the most extreme example. Whenever the conscious observer becomes the ‘object’ of interest, we should realize that the thus presented observer can be no more than a ‘phantom observer’ – an ‘objectified observer’ whose lived subjectivity is systematically being left out of the picture. Like this, the very essence of being an observer is not really there anymore because it has simply been stripped away for the sake of doing physics in a box. So, in order to find out how subjectivity can be included within a nonexophysical-nondecompositional physics, let’s first take a closer look at what it is about consciousness that is so hard to get a grip on.

77

See Mara Beller’s 2003 article “Inevitability, Inseparability and Gedanken Measurement” for some more background information on how Bohr arrived at this interpretation.


4. Life and consciousness

Although we’ve already touched on the inability of info-computationalism to properly address ‘the becoming conscious’ of information (see Section 3.2.3), we have not yet delved into the issue of how best to describe consciousness itself. When trying to specify what consciousness is exactly, it will soon become apparent that the ‘system-to-be-specified’ – in this case, the process of consciousness – is in fact overlapping with the very system that is supposed to specify it – again, the process of consciousness. For the sake of simplicity, this situation may be compared with that of a cartographer who’s trying to include himself, in person, in the map of his own hometown just in order to finally complete it. In practice, though, instead of somehow having to step into the map themselves, map users typically just imagine a ‘now I am here’ sign while reading a map (cf. Van Fraassen 2008, 76-84). Because the map user cannot be fully included within the map itself, the user must be substituted by a virtual placeholder – an abstract idea of the user, rather than the user in person. In this way, the conscious user, as she can only be flagged by a virtuality, must herself remain absent from the map, and, by the same token, cannot be included in any other representational description of reality whatsoever. And although this oversimplified example may perhaps not directly reflect the actual situation in any consciousness-exploring research lab, it does still bring one particularly relevant aspect of empirical science to the fore. That is, even though all acts of measurement and information intake require consciousness to perform its role of so-called ‘center of subjectivity’, when nature is being looked at in a representational way, consciousness will always remain a virtuality. This would not even change if consciousness itself were to be chosen as the target system of interest. As already hinted at above, unlike the usual target systems in physics and chemistry, consciousness cannot be conveniently singled out by one’s conscious gaze since that would only lead to a strange loop involving the consciousness of consciousness – which, to the best of our knowledge, is the same as consciousness itself (cf. Edelman and Tononi 2000, 10-14). At the end of the day, any attempt to get to the bottom of consciousness by capturing it within a representation is doomed to failure:


Fig. 4-1: Conscious observer as an embedded endo-process within the greater embedding omni-process which is the participatory universe. The conscious observer gets to make sense of nature by living through his ongoing perception-action cycles, nutrient-waste cycles, O2-CO2 cycles, as well as all other constantly renewing, criticality-seeking nonequilibrium thermodynamic cycles in nature. The O2-CO2 cycle involves the inhalation of oxygen and the exhalation of CO2 to guarantee adequate oxygenation of the organism’s body cells and healthy levels of CO2 in the lungs and the blood. Under sufficiently optimized conditions, the body can aerobically metabolize ingested nutrients, thereby enabling its cells (e.g. through their mitochondria) to produce and store energy for doing work. The nutrient-waste cycle goes from mouth to posterior, turning nourishing foods into soil-fertilizing manure. Like this, the organism’s perception-action cycles (as depicted in Fig. 4-1d and 4-1e) are powered by metabolizing food into energy-rich glucose (and derivative substances) and the aerobic combustion of glucose which depends on the O2-CO2 respiration cycle (see Fig. 4-1f). Moreover, all these cycles extend well into the embedding, co-evolving environment of which the conscious organism is a seamlessly embedded, co-creative participant.


“No amount of description will ever be able to account fully for a subjective experience, no matter how accurate that description may be. [For example:] No scientific description of the neural mechanisms of color discrimination, even if it is perfectly satisfactory, will make you understand what it feels like to perceive a particular color. No amount of description or theorizing, scientific or otherwise, will allow a color-blind person to experience color.” (Edelman and Tononi 2000, 11)

On top of that, a color-sensitive photodiode, although it may indeed be responsive to a large spectrum of visible light, typically does not become aware of the colors it manages to detect (cf. Edelman and Tononi 2000, 17). Unlike a live conscious organism with color vision, a color-sensitive photodiode does not adaptively change its sensory circuitry and its physiology in response to incoming stimuli. Nor does it attach any body-related meaning to incoming stimuli or grow any adaptive perception-action repertoires that are sculpted by neuromodulatory value systems.78 It is along these lines – by way of an intimate, synergistic interplay between a) incoming stimuli, b) previously laid down action repertoires,79 and c) action-steering value systems – that experiencing self and experienced world can gradually come into actuality from the blooming, buzzing confusion (cf. James 1911, 50) of the initially unlabeled world of undifferentiated signals. It must be emphasized that this coming into actuality of self and world does not have anything to do with a homunculus-like center of subjectivity taking in a ‘mental duplicate’ of the alleged ‘real world out there’. Instead, it involves the coalescence of ‘world-related’ exteroceptive signals and ‘organism-related’ interoceptive signals into one multimodal stream of massively back-and-forth chattering thalamocortical activity patterns through which the conscious organism – as it continually lives through its own changing body states and its value-laden perception-action cycles – gradually gets to sculpt an experiencing self and experiential scenery as two complementary aspects of the same bound-in-one ‘conscious Now.’

78

Value systems: neuromodulatory, hormone-secreting systems that signal diffusely across the brain during biologically meaningful events, thereby fine-tuning nervous pathways that are simultaneously active (Edelman and Tononi 2000, 46-47). Because of value systems, the organism can become capable of strengthening successful activity patterns and weakening those that are of little to no use for survival.

79

Acquired, experientially fine-tuned action repertoires involve the establishment of action-triggering dispositional memory pathways that enable the organism to repeat an act when being confronted with similar exteroceptive and interoceptive stimuli (Edelman and Tononi 2000, 105). As such, they co-determine how an organism gets to live through its action-perception cycles.
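As a toy illustration of the value systems described in footnote 78 – a minimal sketch of my own, not Edelman’s actual model – a diffusely broadcast value signal can be pictured as gating the strengthening of whichever pathways happen to be co-active during a biologically meaningful event:

```python
# Minimal sketch (my construction, not Edelman's actual model): a diffusely
# broadcast value signal scales the strengthening of whichever pathways
# happen to be co-active during a biologically meaningful event.
def update_weights(weights, pre, post, value, lr=0.05):
    # Hebbian co-activity (pre * post), gated by the neuromodulatory value signal.
    return [w + lr * value * p * q for w, p, q in zip(weights, pre, post)]

w = [0.2, 0.2, 0.2]
w = update_weights(w, pre=[1, 0, 1], post=[1, 1, 1], value=+1.0)  # rewarding event
w = update_weights(w, pre=[0, 1, 0], post=[1, 1, 1], value=-0.5)  # aversive event
print(w)  # useful patterns strengthened, useless ones weakened: [0.25, 0.175, 0.25]
```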


In this way, thinker and thought are not to be looked at as some signal-decoding central processing unit trying to interpret incoming signal traffic. Because we are in fact seamlessly embedded parts of the same natural world we’re trying to make conscious sense of (cf. Fig. 4-1), thought and thinker should not be seen as being apart from, but rather as two aspects of, the same process (James 1890/2007, 401). Accordingly, without the need of any representational conception of our mental contents, thinker and thought should be seen as one dual-aspect process alternating between a) the organism’s inner-life, as it is sensed and felt from within, and b) its outward perspective onto what is so often (wrongfully) thought of as an entirely physical ‘real world out there’.

4.1 The evolution of the eye

As an introduction to the actual coming into actuality of thinker and thought, let’s first turn our attention to the emergence and evolutionary development of sight. In Fig. 4-2 we may recognize various developmental stages in the evolution of the eye. Each individual stage is here illustrated by the eye of a particular species of mollusk (e.g. snails, squid, octopi, etc.) that is thought to exemplify a distinctive earlier evolutionary stage of the octopus eye.80

Fig. 4-2: Subsequent stages in the evolution of the eye

80

Due to the close resemblance between the eyes of humans and octopi, the various evolutionary stages that are hypothesized to have preceded the current stage of the octopus eye are often thought to make up a good model for the evolutionary development that the human eye may have undergone.

When sunlight, or light from a bioluminescent source or a light bulb, finds its way through light-diffracting media such as air or water, it is absorbed, reflected, scattered, bent and diffracted by the various obstacles it comes across. As such, it typically turns into diffuse light with a non-monochromatic spectrum. Hence, at any spatial location where an organism may position its photosensitive receptor cells (such as its pigment spot, optic cup, or camera-type eye) the thus observed lighting conditions will inform the organism about what is going on in the immediate (or even more distant) environment:

“Imagine an environment illuminated by sunlight and therefore filled with rays of light traveling between surfaces. At any point, light will converge from all directions, and we can imagine the point surrounded by a sphere divided into tiny solid angles. The intensity and spectral composition of light will vary from one solid angle to another and this spatial pattern of light is the optic array. Light carries information because the structure of the optic array is determined by the nature and position of the surfaces from which it has been reflected.” (Bruce et al. 2003, 6)

This ambient optic array (Gibson 1966) is to be conceived of as follows: “The optic array is the three-dimensional bundle of light rays that impinge from all directions upon each point in an illuminated world. Objects in the world can be thought of as labelling specific rays, so producing a global pattern of light intensities. A retinal image provides access to only part of the optic array at any one time, but a stationary observer can sample different parts by eye movements and head rotations. By changing position, the observer can sample the different optic arrays impinging on neighbouring points in space. However, sampling in this case should not be thought of as a discrete process. Rather, as the observer gradually moves, so each ray gradually moves, thus producing the smooth transformation in the optic array that Gibson called the optic flow.” (Harris 1994, 308)
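In the spirit of what later vision scientists came to call the plenoptic function – a formalization in that tradition, and an addition of mine rather than Gibson’s own notation – the optic array can be summarized as a single function

$$P = P(x, y, z;\, \theta, \phi;\, \lambda;\, t),$$

giving the intensity of light arriving at the vantage point $(x, y, z)$ from the direction $(\theta, \phi)$, at wavelength $\lambda$ and time $t$. A stationary eye samples only part of this function at any moment, and an observer’s movement induces the smooth transformation of $P$ that Gibson called optic flow.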

With the introduction of the ambient optic array, J.J. (James) Gibson wanted to show that an organism’s photoreceptor cells are not so much in the business of making passive snapshots of the incoming light that hits them, but, rather, that organisms get to know their environments by tuning in on the information that is potentially available within a dynamically changing optic array: “For an animal at the centre of this optic array to detect any information at all, it must first have some kind of structure sensitive to light energy … Many biological molecules absorb electromagnetic radiation in the visible part of the spectrum, changing in chemical structure as they do so. Various biochemical mechanisms have evolved that couples such changes to other processes. One such mechanism is photosynthesis, in which absorption of light by chlorophyll molecules powers the biochemical synthesis of sugars by plants. Animals, on the other hand, have concentrated on harnessing the absorption of light by light-sensitive molecules to the mechanisms that make them move. “In single-celled animals, absorption of light can modulate processes of locomotion directly through biochemical pathways. Amoeba moves by a streaming motion of the cytoplasm to form extensions of the cell called pseudopods. If a pseudopod extends into bright light, streaming stops and is diverted in a different direction, so that the animal remains in dimly lit areas. Amoeba possesses no known pigment molecules specialized for light sensitivity, and presumably light has some direct effect on the enzymes involved in making the cytoplasm stream. Thus, the animal can avoid bright light despite having no specialized light-sensitive structures. “Other protozoans do have pigment molecules with the specific function of detecting light. One example is the ciliate Stensor Coerulius, which responds to an increase in light intensity by reversing the waves of beating of its cilia [i.e. the lengthy protrusions that grow from the cell membrane] that propel it through the water. Capture of light by a blue pigment causes a change in the membrane potential of the cell, which in turn affects movement of the cilia (Wood 1976). “Some [other] protozoans, such as the flagellate Euglena, have more elaborate lightsensitive structures in which pigment is concentrated into an eyespot, but Stensor illustrates the basic principles of transduction of light energy that operate in more complex animals. First, when a pigment molecule absorbs light, its chemical structure changes. This, in turn, is coupled to an alteration in the structure of the cell membrane so that the membrane permeability to ions is modified, which in turn leads to a change in the electrical potential across the membrane. “In a single cell, this change in membrane potential needs to travel only a short distance to influence processes that move the animal about. In a many-celled animal, however, some cells are specialized for generating movement and some for detection of light and other external energy. These are separated by distances too great for passive spread of a change in membrane potential and information is instead transmitted by neurons with long processes, or axons, along which action potentials are propagated.” (Bruce et al. 2003, 7-8)
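The behavioral logic described in this passage – staying in dimly lit areas simply by reversing course whenever the light gets brighter – is simple enough to caricature in a few lines (a toy sketch under assumptions of my own, not a biological model):

```python
# Toy sketch (assumptions mine; not a biological model): an agent that, like
# the Amoeba and Stensor of the quoted passage, simply reverses course
# whenever the light gets brighter, and thereby stays in dimly lit regions
# without forming any image of its surroundings.
def light(x):
    return max(0.0, 1.0 - abs(x - 5.0))   # one bright patch, centered at x = 5

x, step = 0.0, 0.5
for _ in range(100):
    before = light(x)
    x += step
    if light(x) > before:                 # brighter than a moment ago?
        step = -step                      # reverse the direction of travel
        x += 2 * step
print(round(x, 1))                        # the agent ends up far from the bright patch
```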


The take-home message, here, is that the physiology of organisms with photosensitivity changes when the organism is exposed to light. In this way, a strict dividing line between informative signal and signal-informed organismic system cannot really be drawn. Unlike a photodiode’s flip-of-a-switch kind of optical registration, which is reversible and does not change the device’s physical architecture in any relevant way,81 biological photocells, pigment layers, retinas (and, eventually, the entire organism) irreversibly change their internal physiology and their external response to their embedding environment. In fact, it is the very essence of organisms to be intimately and symbiotically engaged with the world of signals in which they lead their lives.

4.2 From info-computationally inspired neo-Darwinism to ‘lived-through subjectivity’ as a relevant factor in evolution

The signal-interpreting organism is so much entangled with the signal-conveying world through which it lives that this world not only plays an active part in the organism’s getting to know it, but also that there is on balance no clear and definite borderline to be drawn between organism and world. As it stands, we will not be able to find a sharp and unambiguous split between the organism’s sensorimotor circuitry and the signal traffic that it literally ‘gives way to’, nor should we think that this signal traffic and its initiatory ‘outer-world’ signal can ultimately be told apart in any exhaustive and well-justified sense. We have not been able to do so in physics (see Sections 3.1.2 and 3.2.5), nor will we be able to pull it off in anatomy, physiology, biology, ecology, or the like.

However unattainable such a strict dividing line between target world and subject system may be,82 this hasn’t stopped our nature-dissecting intellect from trying to put it into practice. As a result, contemporary biology has maneuvered itself in the wake of physics and adopted the same approach that physics has chosen to deal with the difficulty of not being able to know what happens in measurement interactions: instrumentalism (which doesn’t care too much about how measurement or observation works, but is concerned mainly with the fact that it works). Instrumentalism does indeed admit our ignorance about what goes on in measurement interaction, and accepts that there may be fundamental uncertainty about the exact location or even the actual existence of the split between target (i.e. source of information) and subject system (i.e. endpoint of information). With this in mind, however, instrumentalism-minded biologists may argue that there’s no objection whatsoever to still apply the split if it leads to empirical agreement between the empirical data and the data-reproducing physical equations describing, say, the in- and outgoing flows across a cell membrane. But it is then easily forgotten that we were only applying the split as a convenient figure of speech, and we all too often start to treat the resulting empirical data and their data-reproducing physical equations as if they were reality itself – thereby reducing the raw processuality of nature to well-refined mechanical-mathematical procedures.

81

A photodiode will typically alternate between two pre-set states that enable it to send out a binary signal, thus communicating the detection or non-detection of light. These states can be considered part and parcel of the physical architecture of the photodiode.

82

Cf. “causative” stimulus and “effectuated” nervous signaling; body and mind; the physical world and the mental world; Descartes’ res extensa and res cogitans, etc.

In the same vein, biology has moved more and more towards the idea that organisms may work in an info-computational manner, as if they were basically DNA-recombining biological machines geared exclusively towards following the instructions laid down in their genes. In fact, the modern synthesis in biology, also known as neo-Darwinism, rests on two pillars – Darwin’s theory of evolution through natural selection and Mendelian genetics. When combined, these two provide a picture of biological evolution that can be explained entirely in terms of genetic mechanisms:

“The term ‘evolutionary synthesis’ was introduced by Julian Huxley … to designate the general acceptance of two conclusions: gradual evolution can be explained in terms of small genetic changes (‘mutations’) and recombination, and the ordering of this genetic variation by natural selection; and the observed evolutionary phenomena, particularly macroevolutionary processes and speciation, can be explained in a manner that is consistent with the known genetic mechanisms.” (Mayr 1980, 1)

It can thus be argued that neo-Darwinism fits readily into the info-computational narrative. By embracing this two-legged neo-Darwinistic interpretation of evolution, biologists basically start looking at life as resulting entirely from DNA processing. DNA-sequences are thus treated as if they were basically no more than coded instructions for synthesizing proteins.83 Put informally, DNA-code is thus likened to program instructions in computer software, thereby interpreting the organism’s ‘genetic program’ in an info-computational sense. That is to say, in analogy to running a program which will then produce a certain computable output, the info-computational interpretation of genetics basically treats DNA as the set of instructions for ‘computing’ which proteins will be synthesized.
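
To make the computer-program metaphor concrete, here is a minimal Python sketch of DNA being ‘read’ as if it were a list of program instructions that deterministically ‘compute’ a protein. The three-codon table is a tiny illustrative excerpt of the 64-entry standard genetic code, and the input sequence is made up for the example:

# A minimal sketch of the info-computational reading of genetics: DNA as
# 'source code', translation as 'program execution'. Real protein synthesis
# involves transcription, ribosomes, folding, regulation, etc., all omitted.

CODON_TABLE = {          # tiny excerpt of the standard genetic code
    "ATG": "Met",        # methionine (also the start codon)
    "TGG": "Trp",        # tryptophan
    "GGC": "Gly",        # glycine
    "TAA": "STOP",       # stop codon
}

def translate(dna: str) -> list:
    """Read a DNA string codon by codon, as if executing coded instructions."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        amino_acid = CODON_TABLE.get(dna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

print(translate("ATGTGGGGCTAA"))   # -> ['Met', 'Trp', 'Gly']

The point of the sketch is precisely its poverty: on a strictly info-computational reading, everything relevant about the organism would have to be recoverable from such a lookup-and-execute procedure.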

All in all, the info-computational neo-Darwinistic narrative implies that organisms can be treated as if they were no more than code-converting automatons whose arrival on the scene can be explained entirely in terms of the accumulated changes in their genetic germ-line that gave them an unpremeditated competitive edge over other rivalling organisms with which they share their precarious living-environment. Put in a nutshell, the resulting neo-Darwinistic view can be summarized as follows:

“ ... order and harmony [in the Darwinistically evolving biological world] does not arise from higher-order laws destined for such effect, but can be justly attained only by letting individuals struggle for personal benefits, thereby allowing order to arise as an unplanned consequence of sorting among competitors. The Darwinism of the modern synthesis is, therefore, a one-level theory that identifies struggle among organisms within populations as the causal source of evolutionary change, and views all other styles and descriptions of change as consequences of this primary activity.” (Gould 2007, 224)

In this way, this info-computationally inspired neo-Darwinism describes biological evolution as a process of gradual, accumulative change. This process, moreover, is thought to have no predetermined direction: individual organisms, merely by realizing the unpremeditated ‘side-effect’ of their own survival and that of their offspring, give rise to their own species. At the end of the day, neo-Darwinism concludes that species outperform their competition merely by the otherwise blind optimization of their own processing of incoming information, nutritional matter, and energy.

83 Proteins are biomolecules that are absolutely vital to living organisms as they participate in a vast repertoire of biological activities, such as DNA replication, cell metabolism, biochemical signaling, and molecular transportation (cf. the blood’s O2-binding protein hemoglobin).


Although this is a very powerful and illuminating account which has brought us a long way in understanding how evolution works, it’s unfortunately not the complete story. First of all, it suffers from the same defect that plagues our contemporary mainstream physics: there is no place for actual, lived subjectivity. As a result, subjective consciousness is typically labeled as epiphenomenal – a non-essential, illusory side-effect – although, ironically, the subjective mating choice of the female animal can hardly be put aside as irrelevant (cf. Hunt 2014, 29-31). A second point, somewhat related to the first, is that neo-Darwinism generally does not address the relevance of how the organism lives through all this processing of incoming information, matter and energy. With the growing acceptance of epigenetics,84 neuroplasticity, and neural re-use (Anderson 2014), there has perhaps come more appreciation for aspects like this. That is, it is now a well-accepted idea that the pheno- and genotype of organisms can change not only due to the mutation and sexual recombination of DNA, but also via developmental and experiential selection that lead to acquired characteristics (cf. Edelman and Tononi 2000, 83-84) – in other words, traits picked up by going through life, instead of being strictly determined by genetic mutation and inheritance. Also, the preferences that an individual animal may acquire during life, as well as the cultural traditions that may be developed from such preferences within different social groups of animals and eventually even a species as a whole, may significantly contribute to the niche that this animal, social group or species may start to occupy and exploit (reference needed).

So, instead of looking at molecular changes in DNA and RNA as the sole relevant factors for biological evolution, we should turn our attention to other areas as well. In order to grasp natural evolution – and particularly its most relevant aspect, life itself – we need to go beyond the current info-computationally inspired approach of looking at organisms as if they were merely the result of computational processing. We need to do better than just looking at nature in terms of numerically specified inputs and outputs of otherwise unspecified black boxes.85 Instead of zooming in primarily on the quantitative specification of inputs and outputs that should thus inform us about all kinds of informational, material and energetic flows, we need to focus more on the meaningful difference that these inputs and outputs make to each other, the organism and its environment. To be more specific, we need to pay more attention to how the organism’s current lived experience of going through these informational, material and energetic flows will affect its future going-through life. Only in this way can we expect to give due credit to the subjective aspects of our own everyday lives. Only by giving heed to how an organism learns to make sense of its environment under precarious conditions (cf. Di Paolo 2009; Thompson 2014, 328-329) and to how it learns to anticipate possible future events – and even construct scenarios of never before experienced affairs – can we expect to overcome the defects that plague the exophysical-decompositional, info-computational, representational approach of our contemporary mainstream physical sciences, in particular physics, chemistry, molecular biology, and (neuro)biology.

84 Epigenetics is the study of how each organism’s life events can affect the expression of its genes, as some genes are left free to act while others are deactivated by methylation (cf. Phillips 2008).
85 Although it is typically suggested that the interior of these black boxes can be fully accounted for in a later meta-analysis, this follow-up analysis will then inevitably bring along the same problem all over again. In this way, just as in physics (see Section 3.3), only a pseudo-explanation is given, or another sub-plot, of how the processing of inputs into outputs should occur.

As will be discussed later on, these subjective acts of sense-making, creative learning, anticipation, and the co-evolutionary synergism between the organism and its ecological niche all involve the laying down of habitually grooved activity patterns. They all involve the strengthening and/or weakening of latent action dispositions. Among the things that are relevant in the formation of an organism’s dispositional perception-action repertoires we can find, for instance, muscle memory, perception-memory patterns, an organism’s inborn instincts, its commitments and preferences, neuroplasticity, (epi-)genetically laid down propensities, behavioral tendencies, and so on.

4.2.1 From the info-computational view to information as mutualistic processuality

We need to go beyond the exophysical-decompositional, info-computational approach and apply a concept of information that has the two aspects of subjectivity and objectivity already baked into it from the get-go, rather than sticking to the current convention of communication theory in which data signals have to cross the poorly defined boundary between target and subject side – from the source of the information to its eventual end-receiver. In the info-computational view of cognition, of which communication theory is one of the main inspirational influences, information has to first reach the subject side before it can become informative at all. But it is often forgotten that this information has to remain unlabeled before it ever arrives there. On top of that, there is no realistically attainable final center of subjectivity, arrival at which will enable any incoming information signal to become fully known. Therefore, a complete info-computational informativeness, although suggested by our conventional information and communication theories,86 will remain forever out of reach. In fact, when sticking to the info-computational mode of analysis we will never be able to find out what it is about living, conscious organisms that lifts them above the level of light-detecting photodiodes, information-processing computers, and so on. To be sure, as long as we remain firmly attached to the mechanical-mathematical, exophysical-decompositional approach of info-computationalism and do not make any allowance for a complementary alternative account of information, we will never be able to formulate a valid scientific answer to the question of what life and consciousness are all about.

So, in order to get a closer view on the first contours of a possible alternative conception of information, let’s first take a look at how primitive organisms get to make sense of their surroundings. In the biological world the communication of signals is not to be understood in the externalistic, data-exchanging sense of info-computationalism in which code signals are sent off on a one-way trip from source to destination. Instead, organism and world are actively engaged in a joint process of mutual informativeness in which everything within and without the organism can make a difference (however slight it may be) to the informative process as a whole.

4.2.2 From the non-equilibrium universe to the beginning of life as an autocatalytic cycle

Evolution is not driven purely by genetic mutations that may result in one kind of organism becoming well-adapted to its environment and another less so (thus leading to the ‘selection’ of winners and losers in the struggle for survival). Rather, evolution just as well depends on organisms giving shape to (both unintentionally and intentionally) and actively manipulating their environment in a way that affects their survival. Accordingly, biological evolution seems to involve more than just random genetics-based adaptation to precarious and unpredictably changing environments.

86 Whenever signal-distorting noise can be kept at a low enough level, Shannon’s information and communication theory holds that messages can be received without any data corruption.

It is, after all, a core characteristic of evolution that organism and environment are engaged in an intimate, symbiotic relationship. So much so, even, that when being pressed to precisely locate the actual dividing line between the two, we will sooner or later come to realize that there is no such sharp and absolute boundary to be found.87 So, instead of there being a truly objective divide between living organism and environment, the symbiotic process which is life is actually a natural extension of the process of nature – a local outgrowth from what was already there, rather than something entirely new, different and otherworldly.88 From this alternative perspective, the early universe, since it can be said to have had the potential for life within it from the very beginning, should better be referred to as being biocentric, rather than abiotic. In the words of biologist and complex systems researcher Stuart Kauffman:

“ ... the evolving universe since the Big Bang has yielded the formation of galactic and supragalactic structures on enormous scales. Those stellar structures and the nuclear processes within stars, which have generated the atoms and molecules from which life itself arose, are open systems, driven by nonequilibrium processes. We have only begun to understand the awesome creative powers of nonequilibrium processes in the unfolding universe. We are all – complex atoms, Jupiter, spiral galaxies, warthog, and frog – the logical progeny of that creative power.” (Kauffman 1995, 50-51)

The universe as a whole can thus best be thought of as a giant nonequilibrium process (Prigogine and Nicolis 1977; Jantsch 1980; Smolin 1997, 158-160; Chaisson 2001, 15 and 125-131), rather than just an enormous lifeless collection of externally interacting material ‘bits and pieces’ in which life came into being as a chance side-effect of entirely physical interactions. Although this outline of it is admittedly a crude simplification, the latter view tries to understand life and consciousness in terms of what is nonliving and nonconscious, which is impossible (reference needed). The former, on the contrary, opens up the possibility to see the universe as biocentric from its earliest of beginnings.

In fact, the beginning of life on Earth could only occur due to nature’s non-equilibrium processuality. The interplanetary gas clouds and dust particles that have come to form our planet Earth, as well as the chemical elements from which, later on, more complex molecules started to form, all originate from nucleosynthesis in stars and supernovae (cf. Arnett 1997; see also the second half of Section 2.5.1). All this eventually enabled the right conditions for more complex chemical reactions to occur. As Kauffman suggests, life is likely to have emerged spontaneously from a ‘primordial soup’ of such chemical substances. Under normal conditions, such a primordial soup will accommodate numerous chemical reactions among its different species of molecules. Most of these chemical reactions are relatively slow-going, because reactions that go at a fast rate quickly deplete their resources and will therefore typically fall into decline about as fast as they got going. However, such fast reactions do not have to mean the end of the system’s chemical reactivity. Each of the system’s different species has the potential to be a catalyst for multiple other reactions. In other words, each chemical may be able to speed up a chemical reaction that was already occurring within the system, although initially at a much slower rate. Now, as soon as the system reaches a critical diversity, those catalytically accelerated chemical reactions will no longer deplete their resources. Instead, they become part of a self-perpetuating non-equilibrium autocatalytic cycle in which the reaction product of one reaction becomes a resource chemical for the next reaction, and so on, thus establishing a closed chain with catalytic closure and a longer sustainable balance between the system’s production and consumption rates:

“... life is a natural property of complex chemical systems ... [W]hen the number of different kinds of molecules in a chemical soup passes a certain threshold, a self-sustaining network of reactions – an autocatalytic metabolism – will suddenly appear. Life emerged, I suggest, not simple, but complex and whole, and has remained complex and whole ever since – not because of a mysterious élan vital, but thanks to the simple, profound transformation of … molecules into an organization by which each molecule’s formation is catalyzed by some other molecule in the organization. The secret of life, the wellspring of reproduction, is not to be found in the beauty of Watson-Crick pairing, but in the achievement of collective catalytic closure. The roots are deeper than the double helix and are based on chemistry itself [and particularly the emergence of self-perpetuating chemical cycles]. So, in another sense, life – complex, whole, emergent – is simple after all, a natural outgrowth of the world in which we live.” (Kauffman 1995, 47-48)

87 This is actually quite analogous to the absence of a sharp and absolute borderline between target and subject side in physics (see Section 3.1.2).
88 From the perspective of the orthodox physicalist paradigm, life (as well as the related phenomenon of conscious experience) seems to be utterly otherworldly. That is, by prematurely characterizing the early universe as an entirely physical, mechanistic, and abiotic realm, the emergence of life automatically becomes a sudden and radical departure from the mechanistic status quo. As a result, reductionistic explanations have remained at a loss ever since – requiring all kinds of counter-productive measures, such as writing off conscious experience and the passage of time as illusory just in order to preserve the mechanistic, reductionistic worldview. Unfortunately, though, such measures create more problems than they solve and leave more things unexplained than they clarify. With that in mind, perhaps it’s about time to start questioning the mechanistic, reductionistic worldview, rather than conscious experience and the passage of time.

Furthermore: “Here, in a nutshell ... is what happens: as the diversity of molecules [in a primordial ‘soup’ of prebiotic chemical substances] increases, the ratio of reactions to chemicals ... becomes ever higher. ... As the ratio of reactions to chemicals increases, the number of reactions that are catalyzed by the molecules in the system increases [even more]. When the number of catalyzed reactions is about equal to the number of chemical [molecule species], a giant catalyzed reaction web forms, and a collectively autocatalytic system snaps into existence. A living metabolism crystallizes. Life emerges as a phase transition.” (Kauffman 1995, 62)
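
Kauffman’s threshold argument can be illustrated with a minimal Python sketch. The pairing rule (every pair of species yields one candidate reaction) and the per-species catalysis probability are illustrative assumptions of my own, not Kauffman’s actual model parameters; the sketch only shows how, as diversity grows, catalyzed reactions can come to outnumber molecule species – his heuristic condition for collective catalytic closure:

# A toy rendering of the diversity threshold described in the quote above:
# candidate reactions grow roughly quadratically with the number of molecule
# species, so the count of catalyzed reactions eventually overtakes the
# species count.

import random

def catalyzed_reaction_count(n_species: int, p_catalysis: float) -> int:
    """Count reactions catalyzed by at least one of the n_species molecules."""
    reactions = [(a, b) for a in range(n_species) for b in range(a, n_species)]
    count = 0
    for _ in reactions:
        if any(random.random() < p_catalysis for _ in range(n_species)):
            count += 1
    return count

random.seed(1)
for n in [5, 10, 20, 40, 80]:
    c = catalyzed_reaction_count(n, p_catalysis=0.001)
    marker = "  <-- catalytic closure plausible" if c >= n else ""
    print(f"{n:3d} species, {c:4d} catalyzed reactions{marker}")

With these (arbitrary) numbers the crossover happens somewhere between a few dozen and a hundred species: a crude stand-in for Kauffman’s ‘phase transition’ into a collectively autocatalytic web.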

Such a collectively autocatalytic network has no clear boundary separating it from its environment, other than the closed autocatalytic cycle in which it is engaged. That is, although the cycle of coupled chemical reactions remains largely the same with every iteration, the autocatalytic network as a whole is an open system and keeps itself going by drawing in energy and nutrients from its environment. In turn, this environment is then ‘enriched’ with the system’s waste products and excess heat. Whenever such an autocatalytic network manages to maintain its organizational integrity over longer periods of time (for instance, by developing a semi-permeable membrane)89 or when it even succeeds in attaining a higher level of complexity, it may start to develop more intricate autocatalytic cycles and sub-cycles nested within or running through itself, each with their own reaction products and their own specific impact on the system’s local-global organization. It is the going through these cycles that makes a (bio)chemically meaningful difference not only to the autocatalytic system as a whole, but to its environment as well.

89 When an autocatalytic network evolves a semi-permeable membrane, it is typically referred to as an autopoietic system – a system capable of maintaining and reproducing itself (Maturana and Varela 1973/1980).


In fact, since there is no clear boundary between system and environment, all non-equilibrium processuality90 that facilitates their symbiotic relationship should actually be considered as the relevant phenomenon of interest – not just the autocatalytic system by itself. Accordingly, a system’s sensitivity to its environment as well as its adaptivity should be seen as being part of the ‘biunitary whole’ of system-environment. Sensitivity and adaptivity should thus not be thought of as pure system properties belonging strictly to the autocatalytic network itself, but rather as aspects of the process as a whole. They are therefore inevitably dependent upon the same grand-environment from which the autocatalytic system had arisen in the first place. As such, there is an unmistakable co-dependency between the autocatalytic network and its environment,91 and sensitivity and adaptivity are just as well aspects of the environment as they are of the network in question. Even when such an autocatalytic network manages to grow a protective semi-permeable membrane, this co-dependency persists, as does the underlying ‘oneness’ of the network and its environment.

4.2.3 From environmental stimuli to early subjective experience

In another layer of interpretation, though, this membrane-packaged chemical reaction network may now be considered an ‘autopoietic unit’ (cf. Maturana and Varela 1973/1980) – an individual biological cell whose internal processes are not only capable of maintaining the cellular whole in which they are participating, but also of giving rise to new ‘infant cells’ with the same biochemical reaction repertoire as the original ‘parent cell.’ Different stimuli may trigger different biochemical chains of events within and between such primitive biological cells. Depending on the kind of cell, stimuli may trigger changes in metabolism (by changing the cell’s internal reaction pathways), outer shape (e.g. when an amoeba expels fluids by using its contractile vacuole as a protective mechanism against absorbing too much water), collective behavior (e.g. free-roaming slime mold cells that aggregate together when food becomes scarce), the ability to perform cell division, and so on.

90 Of course, all non-equilibrium processuality will involve not only the entire autocatalytic cycle, but also all of the (direct and indirect) in- and outgoing flows of energy, material and information.
91 An autocatalytic network and its environment have a co-dependent symbiotic relationship, albeit an asymmetrical one, in that the impact of an individual autocatalytic system on its environment is usually smaller than that of the environment on one of its local autocatalytic networks. For the simple reason that any autocatalytic network that exhausts its environment will rob itself of its future resources, thereby sealing its own fate, it is far more likely for an environment to grind down one of its in-house autocatalytic networks than it is for some autocatalytic network to deplete its own environment.


In multicellular organisms, environmental stimuli are farther removed from inner-organism processes so that direct stimulation can no longer be used as an effective means of signal transmission. To get from sensory stimulation to motor or homeostatic response, multicellular organisms typically rely on extracellular electro-chemical signaling cascades facilitated by lengthy nerve fibers (cf. Bruce et al. 2003, 7-8). Such a membrane-bounded organism, its entire embedding environment, as well as the environmental stimuli that may gradually come to guide the organism’s behavior, form a seamlessly merged ecosystemic whole. Somatosensory and sensorimotor activity patterns should therefore not be interpreted as happening exclusively within an organism (reference needed). Instead, these activity patterns transcend the organism as they loop through the organism-environment system as a whole, thus ending up as adaptive perception-action cycles that form the rudimentary basis of subjectivity.

In stark contrast to the above scenario, we often still resort to info-computationalism and the simplifying machine metaphor in which sensitivity and adaptivity are associated with receptor, processor and effector units within an organism. Accordingly, we typically like to ascribe the characteristics of ‘sensitivity’ and ‘adaptivity’ primarily to organisms, and not so much to the environment in which they live. But although an environment may usually indeed be less sensitive and adaptive to organism-induced changes than the other way around, the environment participates just as much in the cyclic process of co-sensitivity and co-adaptivity as does the organism. In fact, as will be further discussed below (in Section 4.2.4), it is the going through such cyclic processes that should be identified as the main essence of subjective experience.

4.2.4 From early photosensitivity to value-laden perception-action cycles

To understand how the first sensory modalities – particularly light-sensitivity – could show up in organisms, we should look at how any of the cycles that a primitive autocatalytic network (or biological cell) might be engaged in, could come under the influence of light. That is, we have to look at how autocatalytic cycles can tap into light energy to thus become adaptively oriented towards light in a way that contributes to the organizational integrity of the system as a whole.92 If light stimuli manage to initiate a chain of events that positively affects the wellbeing of the organism, then this may offer the organism a whole new means to cope with the many challenges of its precarious environment.

There are different scenarios that may lead to the development of such light-sensitivity. For instance, an autocatalytic network may manage to draw a photosensitive protein into one of its chemical cycles, or one of the many proteins that are taking part in the chemical reaction network of a primitive biological cell may start to switch to another state when being impacted by light. In both cases, their biochemical networks are apt to develop adaptive reaction pathways. Specifically, when this newly acquired photosensitivity turns out to facilitate the prolonged continuation of the cycles in which it is involved, thereby leading to an increased organizational integrity, it can be said to have ‘survival value’ for the system as a whole. For instance, non-UV light stimuli may trigger the unpremeditated production of reaction products with UV-protective characteristics, thus enabling the networked cycles to develop a defense against damaging UV radiation (reference needed). As an autocatalytic network develops an orientation towards light, this may thus lead to protection against environmental threats, increased access to nutrients and energy resources, proto-homeostatic regulation of the organism’s biochemistry,93 and other adaptive benefits. In this way, it becomes possible for the organism to keep its metabolism going, to carry on with regenerative maintenance, and to keep investing in renewing growth of its various life-supporting cycles. Like this, light stimuli can in fact acquire ‘somatic’ meaning and become valuative as, during life, they gradually get to be associated with repeatedly co-occurring favorable and/or unfavorable internal states of the autocatalytic system. For instance, when activation of a light-sensitive cycle consistently goes hand in hand with access to nutrients, this will obviously promote the network’s metabolic well-being. Hence, through their coupling to the organism’s well-being and mal-being, environmental stimuli are basically given body-related value and can thus become inherently meaningful to the organism. In the words of neurobiologist Gerald Edelman:

92 The organizational integrity involves more than just self-preservation. That is, this integrity not only refers to the capacity of an organism or ecosystem to maintain its organization, it also pertains to the capacity 1) to develop towards a higher level of complexity when conditions are favorable to do so; and 2) to withdraw into an earlier state when energy inflow is depleting or when resources are scarce (but still with the potential to return to the lost higher-order level of organization). Such growth towards increasing levels of complexity, which is characteristic of rich, healthy ecosystems, is called ‘ascendency’ (Ulanowicz 1997).
93 For instance, under the influence of the day-night cycle, the organism may develop early circadian rhythms that affect its inner biochemistry.


“I use the word value to refer to evolutionarily … derived constraints favoring behavior that fulfills homeostatic requirements or increases fitness in an individual species.” (Edelman 1989, 287-288)

In Consciousness: How Matter Becomes Imagination, the book he co-wrote with Giulio Tononi, it is put like this:

“We define values as phenotypic aspects of an organism that were selected during evolution and constrain somatic selective events, such as the synaptic changes that occur during brain development and experience.” (Edelman and Tononi 2000, 88)

Accordingly, Edelman defines value systems as those constraining, cycle-modulating parts of the organism on the basis of which it can: 1) carve up the world of signals into re-cognizable, re-livable, somatically relevant categories,94 and 2) develop adaptive action repertoires (cf. Edelman 2001, 43). Value constraints enable an organism to adaptively increase its fitness in the absence of pre-programmed, quantitatively specified goals. Unlike a boiler-heated or air-conditioned room whose change in temperature can be initiated by turning the thermostat’s temperature selection dial up or down, the adaptive change in organisms is not directed by such externally imposed set points. Whereas a thermostat 1) may just as well be located outside the room whose temperature it is trying to manipulate, and 2) is by itself just a largely passive signal-comparing switch box, value pervasively participates in, and actively contributes to, adaptive change in an organism’s organization.

Those inner-organism cycles that, during evolution, have come to serve as ‘salience indicators’ for other inner-organism cycles, may thus be called value systems. For instance, since it communicates information about environmental light conditions to other parts of the body, the synthesis and secretion of melatonin by the pineal gland plays a major role in the wake-sleep cycle of human beings and other mammals (reference needed). Another example can be found in the way that the evolved shape, muscularity and jointedness of a human hand leads to a certain repertoire of possible and impossible movements (cf. Edelman and Tononi 2000, 88). This, of course, allows these hand-equipped human beings to manipulate and take advantage of environmental opportunities in a specific way (cf. Gibson 1979, 113 and 224-225), which, in turn, may direct further evolutionary adaptation of hand morphology, the sensorimotor system, and, in the long run, the human species as a whole.

In the absence of any pre-available manual or coded instructions that tell the organism how to make sense of the world in which it lives and what to do in which situation in order to stay out of harm’s way, value systems are indispensable for growing-up organisms to learn survival skills and to increase their adaptivity in a precarious and ever-changing living-environment (reference needed). Taking all this into account, Edelman delineates value systems as follows:

“I define value systems as those parts of the organism (including special portions of the nervous system) that provide a constraining basis for categorization and action within a species. I say ‘within a species’ because it is through different value systems that evolutionary selection has provided a framework of constraints for those somatic selective events within the brain of each individual of a particular species that lead to adaptive behavior. Value systems can include many different bodily structures and functions (the so-called phenotype); perhaps the most remarkable examples in the brain are the noradrenergic, cholinergic, serotonergic, histaminergic, and dopaminergic ascending systems. During brain action, these systems are concerned with determining the salience of signals, setting thresholds, and regulating waking and sleeping states. Inasmuch as synaptic selection itself can provide no specific goal or purpose for an individual, the absence of inherited value systems would simply result in dithering, incoherent, or nonadaptive responses. Value constraints on neural dynamics are required for meaningful behavior and learning.” (Edelman 2001, 43-44)

Although, thus far, we’ve been focusing primarily on the role of value in giving shape to the selection-driven adaptivity of organisms, it must be emphasized that value does not only influence adaptivity. Most notably, it is part and parcel of the organism’s subjectivity, feeling, emotion, etc., as it directly pertains to what it means for the organism to go through its many coupled life-supporting cycles.

94 This idea of categories pertains to the organism’s capacity to “classify” environmental stimuli in terms of what they do (and thus: mean) to the organism and what they trigger the organism to do in response. As in Pavlovian conditioning (where, after repetitive trials, an initially neutral stimulus, such as the sound of a ringing bell, gradually becomes meaningful as it gets to be associated with the arrival of food), categorization enables an organism to “get to know” environmental stimuli in terms of its own ‘somatic status’ (cf. physiology, biochemistry, homeostasis, etc.) and adaptive motor responses. As the organism develops a habit to repeat adaptive categorical responses, it basically ‘sculpts’ its ability to discriminate salient foreground percepts from a less relevant background for adaptive purposes (cf. Edelman and Tononi 2000, 48).
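
As a rough illustration of how such value systems might constrain somatic selection, consider the following Python sketch of value-gated Hebbian learning. This is my own toy formulation under simplifying assumptions, not Edelman’s model: co-active connections between two small neuronal groups are strengthened only in proportion to a diffusely broadcast value signal, so that only behaviorally salient episodes leave a synaptic trace:

# A toy stand-in for a diffusely projecting value system: plain Hebbian
# co-activity changes a connection weight only when gated by a value signal.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=(4, 4))    # connections between two small groups

def value_gated_hebb(w, pre, post, value, lr=0.05):
    """Strengthen co-active connections in proportion to the value signal."""
    return w + lr * value * np.outer(post, pre)

pre = np.array([1.0, 0.0, 1.0, 0.0])     # activity in the 'sending' group
post = np.array([0.0, 1.0, 1.0, 0.0])    # activity in the 'receiving' group

w_salient = value_gated_hebb(w, pre, post, value=1.0)   # salient episode
w_neutral = value_gated_hebb(w, pre, post, value=0.0)   # neutral episode
print(np.allclose(w, w_neutral))          # True: no value signal, no trace

Note that the value signal here sets no target output and corrects no error; like Edelman’s value constraints, it merely biases which of the organism’s own activity patterns get selected and consolidated.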


Of all the life-supporting cycles that value is involved in, perception-action cycles are certainly among the earliest and most prominent.95 Given the intimate interplay between sensorimotor, somatosensory, and value cycles, it’s probably more instructive to refer to such perception-action cycles as sensation-valuation-motor activation-world manipulation cycles. It also needs to be remarked that the concept of ‘sensation’, as it is used here, can pertain to both interoception and exteroception: the inward-looping cycles of body-related ‘self signals’ and the outward-looping cycles of world-related ‘non-self signals’.

Furthermore, because of sensorimotor and somatosensory coupling (see Sections 4.2.3 to 4.3.1 for a more elaborate discussion), a light-sensitive cycle not only affects its own future course of development,96 but also that of the greater network of system-environment cycles as a whole. Indeed, in the course of evolution, a light-sensitive cycle is likely to have started out with an otherwise non-functional protein that, under the influence of light, started to operate as a bi-stable switch for triggering a whole cascade of other events. So, once the bi-stable operation of such a light-sensitive protein started to affect other processes elsewhere in the chemical reaction network, this could trigger them to change their dynamics in a way that would be favorable for the organism as a whole. In other words, the protein’s light-sensitivity gets to have survival value for the organism as a whole. Moreover, this is how, already at a primordial level, sensation-, metabolism- and action-related cycles could become intimately interwoven, thereby giving rise to a primitive form of sensorimotor and somatosensory coupling and the binding-into-one of the organism’s multiple cycles of experience.
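
The bi-stable-switch scenario can be caricatured in a few lines of Python. Everything here (the state rule, the harvest rates, the upkeep cost) is an illustrative assumption of my own; the sketch merely shows how a light-tracking protein state, once coupled into a slow resource-harvesting cycle, can make a survival-relevant difference to the system as a whole:

# A toy cell with one light-sensitive protein acting as a bi-stable switch
# that gates resource uptake; the numbers are arbitrary illustrations.

def step(protein_on: bool, light: bool, resources: float):
    """One update of the toy cell: switch state follows illumination."""
    protein_on = light                       # bi-stable switch tracks the light
    harvest = 1.0 if protein_on else 0.1     # the cascade it triggers boosts uptake
    return protein_on, resources + harvest - 0.3   # 0.3 = metabolic upkeep

on, resources = False, 5.0
for t, light in enumerate([False, False, True, True, True, False]):
    on, resources = step(on, light, resources)
    print(f"t={t} light={light} resources={resources:.1f}")

In the dark the toy cell slowly runs down its reserves; under illumination it builds them up – a minimal sense in which the protein’s light-sensitivity ‘gets to have survival value’ for the network that hosts it.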

95 Anyone wanting to distinguish between sensation (as mere ‘uninterpreted’ sensory stimulation and signal transduction) and perception (as the process of valuative-emotive interpretation of sensory signals, stereotypically to be performed by the brain) would probably prefer to use the term ‘sensation-action cycle’ instead of ‘perception-action cycle’ – especially when the investigative focus is on primitive cellular life. On the other hand, whenever one prefers to avoid such a possibly premature distinction between sensation and perception, perception could also be thought of as exhibiting various levels of sophistication – from the primitive, rudimentary level to the highly complex. We can, after all, not rule out beforehand that there may be an internal process of sense-making even in such primitive organisms. Because of the prominent role of value constraints, even already in early life, we cannot exclude this possibility too soon. Indeed, early ‘interpretative’ valuation cycles may at first sight not yet be functional as such, or be instead so rudimentary as to be negligible. However, even already in its prebiotic stage, the universe is – metaphorically speaking – filled to the brim with non-equilibrium cycles. Whenever any ‘individual’ non-equilibrium cycle gets to be absorbed into another one, or when an existing cycle evolves an inner sub-cycle such that the smaller, nested cycle becomes relevant in maintaining the whole of the greater, overarching one, then, to the best of our knowledge, we can consider the nested sub-cycle to be of organizational value to the whole. What’s more, they can in fact be considered mutually meaningful, and so can all other non-equilibrium activity patterns in the universe. Therefore, even such low-level valuative activity can be considered relevant enough – at least potentially – to take its early form of proto-sensitivity seriously (see Section 4.2.4 for further details).
96 Fitness, organizational integrity, metabolic rate, the morphology and functionality of habitually grooved biochemical pathways, etc., can all be considered possible aspects of the future course of development of light-sensitive cycles.

It is by continuously going through these cycles that this ecosystemic network as a whole enacts a phenomenal world for the organism to negotiate (cf. Maturana and Varela 1973). Although initially on an extremely rudimentary level, the organism gets to ‘make sense’ of what is otherwise left unlabeled by living through its various organism-environment cycles. And to the extent that no clear dividing line can be drawn between what belongs to the organism and what belongs to its environment, the value-ladenness should actually not be attributed solely to the organism, but to the organism-environment system as a whole. In fact, since the organism only has access to its world of experientially categorized signals and has no other way of making sense of its living-environment, the entire phenomenal scenery around it should ultimately not be considered as being apart from, but rather as a part of the process of experience (cf. Velmans 2009, 327-328). Accordingly, what we usually like to think of as being the physical ‘real world out there’ is actually part and parcel of our process of experience (Ibid.). So much so, even, that according to Max Velmans’s reflexive monism, we, as seamlessly embedded conscious organisms with our conscious view onto the greater embedding universe, are in fact participating in a reflexive process through which nature experiences itself (Ibid.). Much in line with Alfred North Whitehead’s panexperientialism, the experiencing organism and its experienced environment are thus two aspects of the same process of experience.

Also, in the tradition of biosemiotics, the outward- and inward-looping organism-environment cycles that the organism is going through as it tries to face the challenges of life, can together be said to make up a bundled biosemiosic cycle of mutual significance, or, as early biosemiotician Jakob von Uexküll (cf. von Uexküll and Uexküll 1940/2010; Koutroufinis 2016) would have it, the organism’s self-centered world consists of ‘carriers of significance’, thus forming a world of subjectively meaningful information beyond which there is nothing for the organism to make sense of.


Yet another way of looking at these perception-action cycles is in the context of Gestalt psychology. Gestalt cycles of experience (cf. Fig. 4-1) are thought to be an indispensable part of the cyclic self-regulation of all living organisms (Perls 1947/1969; Clarkson and Mackewn 1993, 48). Gestalt psychology starts out with the idealizing picture of a general state of balance for living organisms that basically allows them to stay at rest and be relaxed, almost without a care in the world, just by letting the self-regulatory cycle do its work. The main idea behind the Gestalt cycle, then, is that when its cycling is left unperturbed, an organism may simply go through it without having to actively take care of any business. But whenever an internal or external disturbance of the cycle occurs, this will prompt the organism to redirect the deviating course of the cycle in order to restore homeostatic balance, to maintain a healthy metabolism, or to satisfy needs, in short, to return to the desired situation of rest and balance:

“The person [or organism] organizes his experience – his sensations, images, energy, interest and activity – around the need until he has met it. Once the need is met the person feels satisfied – so that particular need loses its interest for him and recedes. The person is then in a state of withdrawal, rest or equilibrium, before a new need emerges and the cycle starts all over again. In a healthy individual this sequence is self-regulating, dynamic and cyclical. Selfregulation does not, of course, necessarily ensure the satisfaction of the needs of the person. If the environment is deficient in one of the needed items – water in the desert or affection in a family – the person will not be able to quench his thirst or satisfy his need for love. Selfregulation implies that the individual will do his best to regulate himself in the environment given the actual resources of that environment.” (Clarkson and Mackewn 1993, 49)

The various stages that the Gestalt cycle goes through during each full turn can be roughly described as follows (cf. Perls 1947/1969, 69):

1. the organism is in a state of rest;
2. the organism senses and becomes aware of a disturbance (which may be internal or external);
3. a Gestalt is being formed (i.e. a meaningful foreground pattern apparent from, yet seamlessly embedded within, its background patterning and intimately related with the entire whole and previous history of the organism-environment system);
4. the organism prepares for action and then follows up on it, thus taking directed action with the aim of
5. achieving a decrease in tension, which should then result in
6. the return to the desired organismic balance.
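
For what it’s worth, the cyclic character of this list can be rendered as a trivially simple Python state machine (a toy illustration only; the stage names follow the list above, and the fixed transition order is of course an idealization of healthy, unperturbed self-regulation):

# The six Gestalt stages as an endlessly repeating cycle.

from itertools import cycle

GESTALT_STAGES = [
    "rest",
    "sensing a disturbance",
    "Gestalt formation",
    "directed action",
    "decrease in tension",
    "return to organismic balance",
]

organism = cycle(GESTALT_STAGES)   # self-regulation as a repeating cycle
for _ in range(7):                 # one full turn, then the cycle starts over
    print(next(organism))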


During all this, there is an intimate interdependence between organism and environment, to the extent that they can be seen as an inseparable whole whose process of subjective experience does not take place exclusively within the organism. Instead, subjectivity involves the entire bound-in-one multiplicity of experiential organism-environment cycles – at least, that would be my understanding of it. Because the formation of a Gestalt (which can be loosely interpreted as an ‘experiential foreground pattern’, or ‘formed situation’) depends on the entire whole and history of the organism-environment system, it does not enable the organism to see the world as it is, but to experience it in terms of what may be called ‘motivational valences’:97

“Valences are opportunities to engage in actions that structure a motivated person’s perception of a situation and her subsequent actions. For a person who is motivated by hunger, a sandwich has valences that it does not have for a sated person, but only if it is reachable and does not belong to someone else. The key is that these valences appear in the environment as a function of the motivations of people, and vice versa. Valences are perceived forms that are a function of the person’s state and the environment’s characteristics.” (Käufer and Chemero 2015, 88)

Next to panexperientialism, biosemiotics and Gestalt psychology, other related theories of perception and conscious experience that should be mentioned here are: enactivism (Maturana and Varela 1973/1980), radical embodied cognitive science (Chemero 2009), Max Velmans’s reflexive monism, William James’s neutral monism (1890/2007), and, of course, Gerald Edelman’s and Giulio Tononi’s extended theory of neuronal group selection (2000) that drew heavily on James’s ‘specious present’ and his view on the conscious stream of experience. For now, however, I will not go further into their respective versions of the perception-action cycle. Instead we’ll focus on perceptual categorization, and what this means for a conscious being’s ‘sculpting into actuality’ of its sense of self and world.

97 Originally coined by Gestalt psychologist Kurt Lewin as ‘Aufforderungscharaktere,’ the concept of ‘valences’ later inspired J. J. Gibson to develop his theory of affordances (1979, 119-135).


4.3 Perceptual categorization, consciousness and mutual informativeness

As has already been briefly touched upon in Section 4.2.3, perceptual categorization is the process of carving up nature into categories, although nature itself does not contain any such categories at all (Edelman and Tononi 2000, 104). From early life onwards, it is by means of perceptual categorization that sentient organisms gradually get to differentiate salient, life-affecting foreground patterns from less relevant background patterns. This allows these organisms to chisel the world of initially uncoordinated signals into a multimodal, action-affording and somatically meaningful scene for adaptive purposes (cf. Edelman and Tononi 2000, 48-49).

This conscious scene should not be thought of as an inner-brain projection of a so-called ‘real world out there’, but rather as an unscripted, first-person live-drama scene in which sense of self and sense of world are two aspects of the conscious organism’s living-through its non-equilibrium body-brain-environment cycles. Although the term ‘scene’ probably reminds most people of audiovisual media, the formation of such a conscious scene involves the binding together of many sensorimotor and somatosensory streams, related not only to vision, but also to other sensory modalities, proprioception, interoception, value, and more.

For the sake of simplicity, though, we’ll first focus on the stereotypical example of visual perception. Incoming beams of light, originating from the optic array (Gibson 1966 and 1979, 58; see also Section 4.1) in the organism’s environment, pass through the eye’s light-refracting cornea and lens that project the beams onto the retina. As the light is absorbed by photosensitive proteins within the retinal photoreceptor cells, this prompts the generation of nerve impulses that run through the optic nerve to the thalamus, which can be thought of as the sensorimotor and somatosensory ‘integration and intercommunication center’ of the brain, located above the brain stem, near the center of the brain. From there, the thalamus connects to the primary, secondary and higher visual cortices:

“ ... we can construct a simplified neurophysiological scenario of what goes on in the brain when we perceive a given color. Various classes of neurons in the retina, lateral geniculate nucleus [i.e. the vision-dedicated part of the thalamus], primary visual cortex, and beyond, progressively analyze the incoming signals and contribute to the construction of new response properties in higher visual areas.” (Edelman and Tononi 2000, 161)


The selectively activated response signals in the visual cortices are sent back to the thalamus, where they are further distributed to various other areas of the brain. According to present neuroscientific understanding, the thalamus not only plays a role as a ‘relay center’ between different subcortical areas, such as the hippocampus, and the cerebral cortex; it also contributes to the establishment of sleeping patterns, circadian rhythmicity, and pineal melatonin production and secretion (Jan et al. 2009), and is involved in the facilitation of focal attention and memory access. Moreover, as neuroscientist Luiz Pessoa argues, the thalamus is not just a passive relay station: it also plays a big role in the integration of global signals, thus contributing to affective valuation as well as enabling integrative intercommunication among networks of brain regions – not just one-way signaling from one brain module to the next (which is the archetypical illustration of how the info-computational approach works), but system-wide, back-and-forth, and cross-hierarchic signaling within a highly interconnected, distributed network (Pessoa 2013). On top of that, the thalamus is considered a vital structure in the ‘firing up’ of consciousness because it seems to act as a gatekeeper that allows or disallows exteroceptive, interoceptive and hippocampal signals to be conveyed to, amongst others, their associated sensory cortices, the insular cortex, and the pre-motor and motor cortices.

As all this is going on, the thalamus also signals to the prefrontal cortex, which has evolved to perform the task of linking sensorimotor activity patterns with internal goals of the organism. This can occur, among other reasons, because dopamine-releasing reward systems in the subthalamic brain stem region project specifically to the prefrontal cortex. Other diffusely projecting value systems – neuromodulator-secreting nuclei such as the noradrenergic, serotoninergic and histaminergic cell nuclei that have become evolutionarily associated with events that are biologically relevant to the organism – also condition the firing patterns and newly developing connection patterns in the thalamocortical region. Because of this, an acquired history of organismically meaningful events can become memorially embedded within the thalamocortical network in a distributed way. This is achieved by the laying down of dispositional action repertoires that, given the occurrence of perceptually similar circumstances, enable the organism to prompt what may be called a ‘thoughtful act’ by reactivating a certain somatosensory-sensorimotor performance routine that has proven to be successful before. In this way, the therewith associated perception-action cycle is started up so that the organism can adaptively respond to the situation with which it is confronted.


This can happen because both the pre-motor and the motor cortex are affected by these value systems and can thus play their part in such ‘thoughtful acts’. But although they participate in the thalamocortically directed perception-action cycles, they do not signal back to the thalamus. Instead, they serve as outgoing ports as they are dedicated primarily to coordinating and smoothing the activity of motoneurons that set the musculoskeletal apparatus in motion (cf. Edelman and Tononi 2000, 180). Unlike the other cortical areas, which are typically engaged in intense back-and-forth signaling with the thalamus, the signals of the motor cortices basically take the detour of motor activation and musculoskeletal manipulation of the organism’s environment. Like this, the physical impact of these motor acts on the environment may then make a relevant enough difference to the optic array, so that any changes can be picked up by the organism’s visual apparatus, thus completing the organism’s sensation-valuation-motor action-world manipulation cycle.

Anyhow, all the other non-motor signals, as they criss-cross all over the thalamocortical region in a complexly reciprocating way, participate in what Edelman has called the ‘dynamic core’. This dynamic core, then, is the highly dynamic, constantly fluctuating ‘swarm’ of activity patterns that reverberates all over the thalamocortical region as its many contributing neuronal groups make a difference to each other’s firing dispositions (see Fig. 4-3 further down below; cf. Edelman and Tononi 2000, 143-154). Like this, the dynamic core is thought to facilitate the emergence of primary and higher-order consciousness through the activity of reentry – the widespread reciprocal signaling in the thalamocortical region of the brain:

“ ... reentry is a process of ongoing parallel and recursive signaling between separate brain maps along massively parallel anatomical connections, most of which are reciprocal. It alters and is altered by the activity of the target areas it interconnects.” … “The correlation of selective events across the various maps of the brain occurs as a result of the dynamic process of reentry. Reentry … leads to the synchronization of the activity of neuronal groups in different brain maps, binding them into circuits capable of temporally coherent output. Reentry is thus the central mechanism [or rather: process] by which the spatiotemporal coordination of diverse sensory and motor events takes place. … It is important to emphasize that reentry is not feedback. Feedback occurs along a single fixed loop made of reciprocal connections using previous instructionally derived information for control and correction such as an error signal. In contrast, reentry occurs in selectional systems across multiple parallel paths where information is not prespecified.” (Edelman and Tononi 2000, 105-106 and 85)

Reentry can thus be involved in the formation of local, nested cycles as well as in that of global cycles, thus giving shape to the local-global reentrant organization of associative thalamo-cortical, cortico-cortical, and thalamo-cortico-thalamic pathways. Accordingly, the process of reentry allows a sentient organism to carve up its initially unlabeled living-environment into conscious self and scenery without having to rely on a homunculus or data-processing computer program (cf. Edelman and Tononi 2000, 85). In a nutshell, reentry facilitates the culmination of exteroceptive, interoceptive and value signals into a multimodal, yet bound-in-one stream of experience, thus giving rise to a thinker of thoughts, a doer of deeds and a feeler of feelings (cf. Thompson 2015, 325) – all wrapped in one:

“This sculpting of a multimodal stream of experience … is facilitated by the extremely high degree of mutual informativeness within and between the mind-brain’s neuronal groups. Through reentry, neuronal groups will fire back and forth in response to each other’s in- and outgoing neuronal spike trains, neurosecretory signals, etc., thus giving rise to internally meaningful activity patterns (cf. Edelman and Tononi 2000, 127-131). That is, as ‘world-related’ exteroceptive signals are reentrantly associated with ‘organism-related’ interoceptive signals, this enables the realization of higher-order perceptual categorization. Accordingly, exteroceptive signals can acquire somatic meaning through their linkage with interoceptive signals … Hence, as ongoing perceptual categorization makes a difference to the mind-brain’s association patterns as well as to the organism’s overall physiology, it enables the conscious organism to re-cognize different outer-organism ‘world states’98 by living through the therewith associated inner-organism ‘body states’99 (Edelman 1989, 93-94; Pred 2005, 262-264).” (van Dijk 2016)
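
The contrast drawn in the quote – reciprocal signaling that ‘alters and is altered’, rather than a fixed error-correcting feedback loop – can be hinted at with a minimal NumPy sketch of my own (random toy ‘maps’, arbitrary coupling strengths): two activity vectors repeatedly update each other in parallel, and map A ends up in a different pattern than it would have reached on its own:

# Two toy 'brain maps' exchanging activity along reciprocal connections.
# There is no target output and no error signal; each map simply alters
# and is altered by the other.

import numpy as np

rng = np.random.default_rng(2)
a0, b0 = rng.normal(size=8), rng.normal(size=8)   # initial map activities
W_ab = rng.normal(0, 0.3, size=(8, 8))            # connections b -> a
W_ba = rng.normal(0, 0.3, size=(8, 8))            # connections a -> b

def run(coupled: bool, steps: int = 50):
    """Update both maps in parallel, with or without reentrant exchange."""
    a, b = a0.copy(), b0.copy()
    for _ in range(steps):
        a, b = (np.tanh(0.5 * a + (W_ab @ b if coupled else 0.0)),
                np.tanh(0.5 * b + (W_ba @ a if coupled else 0.0)))
    return a

drift = np.linalg.norm(run(True) - run(False))
print(f"difference reentrant input makes to map A's final pattern: {drift:.2f}")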

As they participate in perceptual categorization and the binding together of exteroceptive, interoceptive and value-laden signals, all these activity patterns collectively maneuver all through the brain as one ongoing flow process. Depending on shifts in attention and on which sensorimotor and somatosensory circuits are involved, this ‘swarming’ process constantly varies its local density and composition. In this way, it is continuously changing which parts of the thalamocortical network are participating in foreground signaling and which parts are instead engaged in background activity. In early life, all of the mind-brain’s thalamocortical activity patterns are still relatively undirected and unpolished, but the release of dopamine and other neuromodulators conditions the neural pathways that are active at that moment (for instance the motor circuits for grabbing and the visual circuits for optical focusing). In this way, the conscious human being can develop habitually grooved action repertoires with a high level of not only specialization, but also flexibility to changing circumstances.

98 I.e., different patterns of exteroceptive stimuli that are held to pertain to the ‘state’ of the outer-organism world, but also of proprioceptive signals pertaining to the organism’s musculoskeletal positions and movements within that world.
99 I.e., the totality of interoceptive patterns relating to the entire homeostatic and physiological condition of the organism’s body.

Fig. 4-3: Stationary and swarming cortical activity patterns in non-REM sleep and wakefulness. A healthy subject’s sensorimotor cortex is exposed to Transcranial Magnetic Stimulation (TMS) while brain activity is being recorded using electroencephalography (EEG). In order for the locations with maximum activity to light up, thresholding at 80% is applied to the density distribution of the recorded action potential voltages. With appropriately tuned stimulation parameters, the recorded activity patterns in Fig. 4-3A remain quite stationary during non-REM sleep as they linger slightly beneath the TMS coil. During wakefulness, however, the activity patterns ‘swarm’ across large portions of the cortex (Fig. 4-3B). In the case of non-REM, dreamless sleep (during which subjects are commonly thought to have no or negligible conscious experience) the stationary activity patterns indicate the absence of mutual informativeness among neuronal groups in the thalamocortical region. In the case of wakefulness, on the other hand, there is rich mutual informativeness, which typically enables the occurrence of avid swarming behavior. Although consciousness-facilitating mutual informativeness is thought to occur mainly within the thalamocortical region of the brain (Edelman and Tononi 2000, 139-154), the EEG images depicted here only show activity patterns whose signal intensity is in the top range (80-100%). (Images edited from: Massimini et al. 2007, 8499 - Fig. 4)


4.3.1 Integration, differentiation and the mind-brain’s mutual informativeness

According to Edelman’s Theory of Neuronal Group Selection, the brain facilitates the coming together of multiple somatosensory, sensorimotor and value-related activity patterns, so that a bound-in-one integrated experience can occur. This basically means that it’s impossible to experience individual aspects in one’s stream of consciousness separately from all the others. For instance, although we, sighted conscious organisms, are indeed able to distinguish color, shape and texture – each with its own possible neural correlates – the conscious experience in which these aspects come to the fore will always form a unified and integrated whole. Hence, despite the mind-brain’s capability to partition the world of signals along many different perceptual dimensions, none of these dimensions can be experienced in strict isolation from the others. It is through the extraordinarily high level of reentry-driven mutual informativeness between neuronal groups participating in the dynamic core that the binding of these different perceptual modalities is achieved. If this mutual informativeness is absent – as in dreamless, non-REM sleep, or during deep coma – conscious experience will not occur (see Fig. 4-3A). On the other hand, if mutual informativeness does occur (cf. Fig. 4-3B), it facilitates the coming into actuality of a unified ‘conscious Now’100 that enables the organism to distinguish between many different conscious events: “The ability to differentiate among a large repertoire of possibilities constitutes information, in the precise sense of ‘reduction of uncertainty’. Furthermore, conscious discrimination represents information that makes a difference, in the sense that the occurrence of a given conscious state can lead to consequences that are different, in terms of both thought and action, from those that might ensue from other conscious states.” (Edelman and Tononi 2000, 29-30)

Hence, when adopting an information-theoretical perspective, we may state that the emergence of each such conscious state (or ‘conscious Now’) rules out a vast range of other possibilities. The combinatorial potential of possible perceptual categorizations in each emergent moment of consciousness is practically infinite, and the coming into actuality of each culmination of such categorizations amounts to an enormous reduction of uncertainty, or, in other words, information (Edelman and Tononi 2000, 127-129; Tononi 2008, 217-218; van Dijk 2016).

100 This ‘conscious Now,’ which Gerald Edelman (1989) has dubbed a ‘remembered present,’ can be thought of as an ongoing conscious scene of self and world (see also: Edelman and Tononi 2000, 102-112). In higher-order organisms – capable of symbolic thought, language, and, hence, the construction of imaginary ‘storylines’ about possible futures – this remembered present can even be called an ‘anticipatory remembered present.’
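To make this information-theoretical reading concrete, here is a toy calculation in Python (with made-up numbers, not empirical data): settling into one out of N equally likely states yields log2(N) bits, and the ‘mutual informativeness’ between two neuronal groups can be quantified as the mutual information of their joint activity statistics.

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# 'Reduction of uncertainty': settling into one out of N equally likely
# conscious states yields log2(N) bits of information.
for n_states in (2, 256, 2**20):
    print(f"repertoire of {n_states} states -> {np.log2(n_states):.1f} bits")

# Toy 'mutual informativeness' between two neuronal groups A and B,
# modeled as binary variables with joint distribution p(a, b).
joint = np.array([[0.40, 0.10],
                  [0.10, 0.40]])   # rows: states of A, columns: states of B
p_a = joint.sum(axis=1)
p_b = joint.sum(axis=0)
mi = shannon_entropy(p_a) + shannon_entropy(p_b) - shannon_entropy(joint.ravel())
print(f"mutual information I(A;B) = {mi:.3f} bits")
```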

4.3.2 Self-organization and the noisy brain

In the biological brain, noisiness is an indispensable system property. The presence of noise can help weak somatosensory and sensorimotor input signals to overcome the activation thresholds of synapses. In the presence of an already available input signal, random neural signaling can add up to prompt further signal transmission along coupled response chains branching across the brain’s many levels of organization. Terrence Deacon mentions that, over the course of evolution, neurons have developed out of general-purpose cells that gradually came to function as long-distance signaling fibers (2012, 499) while still having to carry out many other tasks in service of the cell’s life support. Because of this, neurons are intrinsically noisy: “It would probably not be too far off the mark to estimate that of all the output activity that a neuron generates, a small percentage is precisely correlated with input, while at least as much is the result of essentially unpredictable molecular-metabolic noise; and the uncorrelated fraction might be a great deal more. Neurons are effectively poised at the edge of chaos, so to speak. They continually maintain an ionic potential across their surface by incessantly pumping positively charged ions ... outside their membrane. On this electrically unstable surface, hundreds of synapses from other neurons are tweaking the local function of these pumps, causing or preventing what amounts to ion leaks which destabilize the cell surface. As a result they are constantly generating output signals generated by their intrinsic instability and modified by these many inputs.” (Deacon 2012, 499-500)

Put a huge number of neurons together to make up a neural network and it is not so hard to imagine how the noisiness may start to dominate the network’s signaling patterns:


“... brains the size of average mammal brains are astronomically huge, highly interconnected, highly re-entrant networks. In such networks, noise can tend to get wildly amplified, and even very clean signal processing can produce unpredictable results; ‘dynamical chaos’, it is often called. But additionally, many of the most relevant parts of mammal brains for ‘higher’ cognitive functions include an overwhelmingly large number of excitatory connections – a perfect context for amplifying chaotic noisy activity. … Both self-organizing and evolutionary processes epitomize the way that lower-order, unorganized dynamics – the dynamical equivalent of noise – can under special circumstances produce orderliness and high levels of dynamical correlations. Although unpredictable in their details, globally these processes don’t produce messy results. This is the starting point for a very different way to link neuronal processes to mental processes.” (Deacon 2012, 501-502)

To see how this might work, let’s take a look at how nonlinear systems like the brain can streamline their performance under the influence of noise. In many systems, the occurrence of inner-system noise can facilitate significant improvement of signal quality through noise-driven signal amplification, a phenomenon that is also known as Stochastic Resonance (SR) (Gammaitoni et al. 1998; McDonnell & Abbott 2009). The occurrence of Stochastic Resonance is well established in neural networks, both experimentally and theoretically, with the help of externally added noise (Linkenkaer-Hansen 2002, 17), and has indeed been found to occur through endogenous neural noise as well (Emberson et al. 2007). In the latter case, system-induced noise facilitates ‘intrinsic Stochastic Resonance’, which appears to be essential to an organism’s optimal processing of somatosensory and sensorimotor signals (Linkenkaer-Hansen 2002, 17 and 27). It encourages system-wide distributed signaling, as it effectively makes it easier to overcome signal-constraining excitation thresholds between synapses, neurons, and neuronal groups.

Although constraints are, in our everyday language, often interpreted as some kind of stumbling block or a barricade blocking the shortest route towards some prospective destination, signal-constraining thresholds should definitely not be considered undesirable. That is, a systematic lack of both constraining thresholds and the value signals through which they can be modulated will cause neural traffic to follow, over and over again, many different neural pathways without preference. Unfortunately, however, this would automatically provoke arbitrary responses (e.g. uncoordinated motor action) without any means of error adjustment or gradual learning by practice. Such a lack of dynamical threshold functionality will result in the brain’s failure to enforce specific neuronal routes that would otherwise become preferred trajectories because of more frequent use. In this way, the automatically occurring selectional rivalry between more and more neural circuits will be undermined, which will lead to below-standard brain performance, dysfunctional learning capability, and impaired memory formation. On the other hand, when thresholds between neuronal groups are too high to be regularly overcome by excitatory signals, neural traffic tends to become ‘locked in’. This yields a situation of problematic ‘overstaticness’ in which there may still be quite some activity within stimulus-receiving neuronal groups, but very little communication between these groups.101 In fact, a delicate, close-to-critical balance between antagonistic effects102 seems essential for healthy development and functioning of the brain (Jung et al. 1998, 1098 and 1101).

101 In fact, this narrows down the range of possible neural patterns to a relatively small set, thus leading to invariability.
102 E.g. influx versus dissipation; system constraints versus system dynamics; excitatory versus inhibitory forces; synaptic growth versus decay.
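To make the stochastic resonance effect more tangible, here is a minimal sketch (in Python) of a noisy threshold detector: a subthreshold periodic input is reported best at an intermediate noise level, while too little noise leaves it undetected and too much noise drowns it out. The signal shape, threshold, and noise levels are arbitrary illustration values, not parameters taken from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(42)

def output_signal_correlation(noise_sd, n_steps=200_000, threshold=1.0):
    """Correlate a threshold detector's output with a subthreshold input.

    The input alone (amplitude 0.6) never crosses the threshold (1.0), so
    some noise is needed to lift it over; too much noise drowns it out.
    """
    t = np.arange(n_steps)
    signal = 0.6 * np.sin(2 * np.pi * t / 100)
    output = (signal + rng.normal(0.0, noise_sd, n_steps)) > threshold
    return np.corrcoef(signal, output.astype(float))[0, 1]

# The detector reports the signal best at an intermediate, 'resonant'
# noise level; the correlation drops off for weaker and stronger noise.
for sd in (0.15, 0.3, 0.6, 1.2, 2.5):
    print(f"noise sd = {sd:4.2f} -> correlation {output_signal_correlation(sd):.3f}")
```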

Fig. 4-4: Varying degrees of neuroanatomical complexity in a young, mature, and deteriorating brain

That is, in a close-to-critical brain, coupled dynamic subsystems can establish reentrant synchronization under the regime of each other’s stochastic side-effects, or, in other words, through widely distributed system-induced noise. In this way, self-organizing neuroselectionism along the lines of Edelman’s Theory of Neuronal Group Selection (with value-steered neuroplasticity and reentrant activity in the thalamocortical region) can eventually lead to neuronal networks in which brain signaling is optimized for adaptive goal- and task-directed performance, as well as pleasure-seeking, risk-avoiding, and crisis-managing behavior, and, not to be forgotten, long-term anticipatory behavior. An illustration of these three cases – that is, 1) the far-from-optimally connected, juvenile network, 2) the close-to-critical network, and 3) the network in decline – is given in Fig. 4-4. Here, Fig. 4-4A-c represents neuronal groups in the young, still unconditioned brain; Fig. 4-4A-b stands for the same set of neuronal groups in the healthy, matured brain; and Fig. 4-4A-a depicts these same neuronal groups in a deteriorating brain. Although in a normally functioning young brain, neuromodulators like dopamine will slowly but surely sculpt the cortical organization towards that of Fig. 4-4A-b, in the absence of threshold-adjusting neuromodulatory signals no further optimizing development of the neuroanatomy is to be expected. In a close-to-critical brain, reentrant synchronization of somatosensory and sensorimotor signals will enhance the development of the organism’s regulatory biochemistry (Edelman & Tononi 2000, 41 and 89-90), as well as the adaptivity and optimality of cognitive and behavioral performance (Edelman & Tononi 2000, 48-49 and 95-99; Aks 2008, 29; Newell et al. 2008; Kitzbichler et al. 2009), including motor control, perceptual categorization, language acquisition, etcetera. Sub- and supracritical brain dynamics, on the other hand, will instead frustrate these brain-facilitated competencies.

4.3.3 Self-Organized Criticality and action-potentiation networks

Next to the brain there are countless other natural systems and phenomena in which the birth and development of organized structure require such a close-to-critical balance between antagonistic forces. These systems can be ranked under the label of Self-Organized Criticality (SOC) whenever the development of their close-to-critical behavior occurs spontaneously, despite quite significant variations in any external control parameters (Bak et al. 1987; Bak 1996; Jensen 1998, 1-6). Self-organized criticality typically appears in slowly-driven non-equilibrium dissipative systems with many similar member elements and a steadily ongoing material, energetic, or informational influx. Moreover, SOC-systems typically exhibit widespread reciprocity in that local dynamics affect other local, as well as more distant and even global, system activity, and vice versa. In these slowly-driven systems, local changes can potentially ‘permeate’ the entire system through intricately linked ‘action-potentiation chains’ – dispositional patterns of connection that can effectively mobilize even distant system localities to come up to, and then surmount, their activation thresholds. This, in fact, is what makes the system ‘critical’. That is, it can be regarded ‘critical’ in the sense that it can maintain its global organizational integrity only when there’s a critical balance between the system’s formative driving force (e.g. a steady, unidirectional inflow of matter or energy) and its dispersive antagonistic dynamics (the internal interaction forces between the similar system elements, such as diffusion, dissipation, friction, etc.). Together, these opposing forces will lead to growth and decay, buildup and relaxation of constraints, fill-up and spillover of local pockets of potential (a.k.a. ‘potential wells’), and, in the case of nervous signaling in the brain, the hyper- and depolarization of a nerve fiber’s cell membrane. Another characteristic of such SOC-systems is that all constituent system elements and system events (e.g. neurons and their action potentials; granular particles and their avalanches; forest trees and forest fires) influence each other with correlations that decay algebraically (instead of exponentially) with distance (Jensen 1998, 3). In this way, the relatively adaptable action-potentiation chains that have gradually developed between the system elements facilitate system-wide interconnectedness. This, then, enables arbitrarily remote system regions to become engaged in mutualistic interaction, thus making a difference to each other’s activity patterns.

In general, SOC-systems are open non-equilibrium systems that can develop a relatively stable structural-functional architecture via the throughput of energy-, matter- and information-conveying elements. In their turn, these elements can serve as energy-providing nutrients or energy-absorbing buffers, as building material, or as an activation signal or catalyst of some kind. In this way, multiple thermodynamic cycles may emerge through the interplay between gradient-driven forces with an external origin (such as gravitation or electromagnetic stimulation) and ‘inter-member’ interaction forces manifesting within the system (such as friction or electrical resistance).

The standard educational example of a self-organized, criticality-seeking (SOC) system, introduced by founding fathers Per Bak, Chao Tang and Kurt Wiesenfeld (1987), is a sand or rice pile (situated on a table of arbitrary size) whose constituent sand or rice grains are being released one by one, at random locations above the base of the pile (thus making up the driving force).

Each sand grain will thus topple downwards until: 1) it settles in a gap or slight dip somewhere along the slope (thus amounting to a local threshold, held together by friction between contributing grains)103; 2) it triggers an avalanche of unpredictable size on its way down; or 3) the tumbling grain prompts a combination of 1) and 2), as it elicits a small avalanche whose member grains all get ‘absorbed’ behind local thresholds scattered along the slope of the pile. For educational reasons, it can indeed be helpful to think of self-organized criticality in terms of the sandpile example, but it must be stressed that SOC has been found to occur in countless other complex open systems as well, ranging from neural networks to stars (cf. solar flares and nucleosynthesis), tectonic plates (cf. earthquakes), forests (cf. incidence of fires), infectious diseases (cf. their spread among populations), and so on. For now, we’ll stick to general terminology so that any of the above possibilities will do as a case in point.

In open systems that are perturbed by some arbitrary driving force, explicit criticality is considered to occur only when the gradient-driven influx process impacts much more slowly on the system than the internal relaxation processes (Jensen 1998, 3). In this way, the force of impact will have to build up local potential capacity before it can overcome the system’s internal thresholds. Typically, this will take longer than the maximum time it takes for perturbed system elements to end up in a resting state, so that the impact of threshold-overflow events can potentially ‘permeate’ the entire system through intricately branched potentiation chains, without being prematurely ‘overwritten’ by novel overflow events (reference needed). While the driving force is in play, incoming energy gradually accumulates ‘behind’ inner-system thresholds, thus forming local pockets of potential, or ‘potential wells’. Sudden energy release (i.e., dissipation) then occurs when an arbitrary internal threshold is overcome so that the system gets perturbed (Jensen 1998, 4). For instance, due to the ongoing release of sand grains above a pile, an unstable hump on the slope may, at any arbitrary moment, get perturbed, thereby causing an avalanche of unpredictable size. The precise value of built-up potential energy that is required to trigger such a catastrophic event depends on the precise history and internal configuration of the entire open system and the exact context-dependent details of the external driving force.

103 As mentioned earlier, such a threshold can thus form a local pocket of potential (a.k.a. a potential well).
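As a concrete illustration of this drive-and-topple cycle, here is a minimal sketch of the Bak-Tang-Wiesenfeld sandpile in Python; the grid size, number of grains, and toppling threshold are arbitrary illustration values. Tallying the avalanche sizes also previews the scale-free, power-law statistics discussed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def drive_sandpile(n=30, n_grains=20_000, threshold=4):
    """Bak-Tang-Wiesenfeld sandpile: drop grains one by one, record avalanches."""
    grid = np.zeros((n, n), dtype=int)
    avalanche_sizes = []
    for _ in range(n_grains):
        x, y = rng.integers(n, size=2)      # random drop site (the driving force)
        grid[x, y] += 1
        topplings = 0
        while True:
            unstable = np.argwhere(grid >= threshold)
            if len(unstable) == 0:
                break
            for i, j in unstable:
                grid[i, j] -= 4             # local threshold overflow ...
                topplings += 1
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < n and 0 <= nj < n:
                        grid[ni, nj] += 1   # ... spills over to the neighbors
                                            # (grains at the edge dissipate)
        avalanche_sizes.append(topplings)
    return np.array(avalanche_sizes)

events = drive_sandpile()
events = events[events > 0]
# events of all scales: many small avalanches, only very few large ones
for s in (1, 10, 100, 1000):
    print(f"avalanches of size >= {s:4d}: {np.sum(events >= s):6d}")
```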

As a result of the antagonistic forces, the constituent elements of the system – in so far as these can be identified as such104 – will thus link up into branched action-potentiation chains of all possible sizes, which involve system elements occupying coupled ‘islands’ of relative instability (Buchanan 2000, 59). In this way, each impact of the external driving force may provoke responses that propagate across the system in the form of catastrophic system events, many of which will occur on a small scale, several on a medium-sized scale, and only very few on the largest of scales; the latter being capable of inducing system-wide changes in just one go (Christensen & Moloney 2005, 252). Hence, these system events are without any characteristic spatiotemporal scale, and their statistical distribution – which is used to describe event sizes and their frequency of occurrence – will follow power laws (Bak et al. 1987; Jensen 1998, 5-11). Accordingly, a slight increase in energy buildup behind thresholds may induce impact events that can lead to entirely unpredictable changes in the system’s configuration. There is no telling when, or in what size – small, medium, or large – these reconfiguring changes will occur:

“To predict the event, one would have to measure everything everywhere with absolute accuracy, which is impossible. Then one would have to perform an accurate computation based on this information, which is equally impossible.” (Bak 1996, 61)

What can be predicted, though, is that in SOC-systems, over and over again, reconfiguration events will enable any nearby subcritical structures to become members of the earlier-mentioned ‘islands of instability’ and approach local criticality. Accordingly, they will then link up in newly accessible branches of the action-potentiation network, thus in turn readying other linked structures to overcome local thresholds and cascade through the system. In the long run, action-potentiation chains will progressively permeate the entire system across structural hierarchies, thus forming what may be called a holarchy (Koestler 1967).

104 It must be emphasized that, although SOC-systems are typically thought of in terms of some kind of constituent elements (e.g. sand grains, neurons, carriers of disease, solar flares, etc.), these elements are basically singled out by our subjective nature-dissecting gaze. They should therefore not be considered truly atomistic elements of the system in question. Instead, they’d better be seen as relatively autonomous process-structures (Jantsch 1980, 21-24) that may perhaps be treated as individual constituents, but are ultimately seamlessly embedded endo-processes within the greater embedding process that our nature-dissecting gaze has labeled “the SOC-system.” These SOC-systems are not composed of some finite set of static, unchanging components, but of endo-processes that should be understood as relatively stable manifestations of nature’s processuality (for instance, a sand grain may appear to be atomistic, but has a deeper processuality within it). So, despite our learned habit of depicting processes in terms of interacting objects – which, historically, has proven to be of great didactic use – this mode of operation eventually results in a practically useful, object-oriented figure of speech that, despite appearances, has no absolute truth to it.


As a result, the action-potentiation chains will become highly correlated, while rearranging the system-wide network of local inner-system thresholds along the way. In other words, through the unpredictable ‘absorption-saturation-discharge cycles’ of the system’s thresholds, the system will develop its own internal dynamic constraints biasing the preferred path of its dynamics, thereby controlling how impact events propagate through the system (Linkenkaer-Hansen 2002, 8). Hence, the system may now be said to have grown a structurally embedded memory functionality in that its future evolution will increasingly depend on its entire foregoing history and the highly correlated, holarchically distributed system dynamics that have thus become established. The system has now effectively developed a global mutual informativeness,105 in that every locality within the system has grown to become intimately connected with the system as a whole, so that each change within the system makes a difference to all the rest, and vice versa. Some even go so far as to state that in SOC-systems each locality has become capable of ‘sensing’ the global system state based on local information (Hesse and Gross 2014, 10). In a similar vein, we may say that everywhere within the system there is a locally available, but globally distributed ‘knowing-by-doing’ of how to remain close to overall, system-wide criticality. As such, the system can even be thought of as having developed a primordial form of adaptive, self-preserving behavior under the pressure of precarious conditions, external impact events and/or the influx of matter, energy, and information.

105 This is in full agreement with the meaning of ‘mutual information’ in Sections 4.2.1, 4.3 and 4.3.1.


5. Process Physics: A biocentric way of doing physics without a box

As mentioned in Section 3.3, our contemporary mainstream physics needs a fellow physics at its side. Although our conventional way of ‘doing physics in a box’ has been hugely successful in mathematically spelling out the behavior of many natural systems within their respective domains of application, it comes up short in other departments. For instance, as Lee Smolin argued (2013, xxiii), it will inevitably fail whenever it tries to cover the whole of nature. Moreover, it is unable to deal properly with those aspects of nature that cannot be quantified, including all the aspects that we hold so dear because they make up the essence of our being: feeling, purpose, meaning, value, and the like. Other aspects – such as creativity, novelty, complexity, and that which cannot be mathematically predicted – also cannot be properly dealt with by doing physics in a box. In particular, everything related to the qualitative aspects of nature – our conscious inner lives, the ‘what-it-is-likeness’ of sensory experience, and, according to Stuart Kauffman (2013, 10-11), even the entire biosphere – cannot be drawn within the grasp of exophysical-decompositional physics. So, to compensate for these downsides, mainstream physics could well do with a nonexophysical-nondecompositional companion; one that can make up for the weaknesses of doing physics in a box without undermining its strengths.

5.1 Requirements for doing physics without a box

Thus far, numerous clues have come to the fore that suggest how this nonexophysical-nondecompositional physics should hang together and which requirements it should meet. On the wish list we can find the following points (in no particular order of importance):

1. A nonexophysical-nondecompositional physics should be biocentric and, as Terrence Deacon (2012) suggests, it should not leave it absurd that we exist;

2. Considering a) the widespread occurrence of Self-Organized Criticality and non-equilibrium thermodynamical systems106 throughout nature, and b) their defining characteristics of cyclicity and feedback loops (which play an especially crucial role in the emergence of life and consciousness; cf. Sections 4.2.3 to 4.3.1), any new way of doing physics should have recursive dynamics as an inherent feature;

3. In line with John Bell’s suggestion (1988, 29-30), there should be no true object-subject boundary altogether. This is basically equivalent with Whitehead’s recommendation to avoid the bifurcation of nature;

4. The universe is not a giant computer. In other words, nature does not work in an info-computational way, but rather in a process-informative or mutually informative way (cf. Sections 4.3.3, 4.3 and 4.2.1-4.2.2). Hence, any new way of doing physics should take this into account;

5. Additionally, a nonexophysical-nondecompositional physics should find a way around psycho-physical parallelism and externalistic representationalism (which both imply info-computationalism);107

106 Nature is not made up from quasi-isolated, equilibrium-seeking systems such as those portrayed by classical thermodynamics. Dissipative systems that behave according to non-equilibrium thermodynamics (NET) are the rule, rather than the exception (reference needed), and on many levels of organization these systems show signs of Self-Organized Criticality (Bak 1996, 5; Jensen 1998, 2).
107 By implementing an alternative for exophysical representationalism (ER) and psycho-physical parallelism (PPP), we can avoid the fallacy of misplaced concreteness as well as what I like to call the physicist’s fallacy (namely: “To suppose that the objects of thought, as found in introspection, must have their origin in independently existing external objects residing in the entirely physical ‘real world out there’, instead of being sculpted into actuality through a process of sense-making that takes place within the integral and inseparable whole which is the undivided organism-world system”). Last but not least, getting rid of PPP and ER may help not to take so literally the would-be fundamental concepts of ‘state’, ‘system’, ‘apparatus’, and ‘measurement’ that were criticized by Bell. Instead, it would become clear that these concepts are ultimately just figures of speech – convenient within a certain context of use, but meaningless without it.

Next to this list, there are also some of John Archibald Wheeler’s requirements (Wheeler 1989/1999, 313-315) that are worth mentioning:

6. No ‘tower of turtles,’ i.e., there should be no infinite regress of would-be elementary constituents;

7. No pre-existing space and no pre-existing time, but rather a pre-geometry;

8. No laws, but rather ‘law without law.’108

108 The most serious candidate for this ‘law without law’ criterion seems to be what Charles Sanders Peirce called a ‘tendency to take habits’ (1992, 277).

Wheeler – often referred to as the physicist who coined the term ‘black hole’ – put forth the following arguments in support of these requirements:


Ad 6: No ‘tower of turtles’ – “Existence is not a globe … supported by a turtle, supported by yet another turtle, and so on. In other words, [there should be] no infinite regress. No structure, no plan of organization, no framework of ideas underlaid by yet another level, by yet another, ad infinitum, down to a bottomless night. To endlessness no alternative is evident but loop, such a loop as this: Physics gives rise to observer-participancy; observer-participancy gives rise to information; and information gives rise to physics.” (Wheeler 1989)

Ad 7: No pre-existing space and no pre-existing time – “Heaven did not hand down the word “time”. Man invented it, perhaps positing hopefully as he did that ‘Time is Nature's way to keep everything from happening all at once’. If there are problems with the concept of time, they are of our own creation! As Leibniz tells us, ‘ ... time and space are not things, but orders of things... ;’ or as Einstein put it, ‘Time and space are modes by which we think, and not conditions in which we live.’ … We will not feed time into any deep-reaching account of existence. We must derive time … out of it. Likewise with space.” (Wheeler 1989)

Ad 8: No laws – “So far as we can see today, the laws of physics cannot have existed from everlasting to everlasting. They must have come into being at the big bang. There were no gears and pinions, no Swiss watch-makers to put things together, not even a pre-existing plan ... Only a principle of organization which is no organization at all would seem to offer itself. In all of mathematics, nothing of this kind more obviously offers itself than the principle that ‘the boundary of a boundary is zero.’[109] Moreover, all three great field theories of physics use this principle twice over ... This circumstance would seem to give us some reassurance that we are talking sense when we think of ... physics being as foundation-free as a logic loop, the closed circuit of ideas in a self-referential deductive axiomatic system.”[110] (Wheeler 1989)

Although implicit in some of the other criteria, there is one final requirement that should not be overlooked:

9. No lowest-level foundations, but rather ‘foundations without foundation.’

109 Here, Wheeler did not include an explanation of what ‘the boundary of a boundary is zero’ should mean. He probably meant to say that it is a fundamental assumption in physics that various conservation laws hold in every physical system that is properly isolated from its environment (cf. von Kitzinger 2001, 177).
110 This last remark – about physics having to be, in a sense, foundation-free – can be linked with the second and sixth requirements on the list. That is, if we are to avoid any logical paradoxes, impossibilities, infinite regresses, etc., we should stay away from using any hypothetical set of elementary building blocks as a foundation. Instead, what Wheeler calls ‘existence’ should keep itself ‘up and going’ through recursive loops capable of ‘bootstrapping’ themselves into actuality from an otherwise undifferentiated background (cf. Chew 1968; Cahill and Klinger 1996; Cahill, Klinger and Kitto 2000; Cahill and Klinger 2005).

This last point follows the same logic as Wheeler’s ‘law without law’ requirement. Just as there were no gears, no pinions, no engineers, and no building plans in the earliest beginnings of the universe, there were no true foundations in the hierarchical sense of the word. That is, a priori entities (such as so-called ‘elementary particles’, strings, knots, and so on) can never be fundamental to our modeling of nature. This is because these a priori entities are always preceded by pre-theoretical interpretation (see Section 3.1.2). Also, we can never be sure whether these a priori entities actually refer to nature’s lowest level of organization. After all, bearing in mind the considerable number of so-called elementary particles in the (as yet still incomplete) Standard Model of Particle Physics, none of them should be taken seriously as the one and only fundamental one.111 Next to that, this requirement of ‘foundations without foundation’ is also meant to save physics from regressing into an infinite downward spiral of supporting ‘turtles’ (cf. requirement nr. 6).

5.2 Process Physics as a possible candidate for doing physics without a box

Whereas doing physics in a box typically requires us to get involved in pre-theoretical interpretation, nature-dissecting acts of decomposition, and the like, Process Physics basically enables us to avoid much of all this by doing physics without a box. It does so by setting up a model that manages to give rise to its own foundation-free foundations, so to speak. Accordingly, the model starts out with an as good as patternless homogeneity that can perhaps best be likened to what in quantum field theory is called ‘the vacuum state’ or ‘quantum vacuum’. Despite its name, this vacuum state is usually not thought of as an entirely empty void, but rather as a fiercely fluctuating ocean of virtual energy potential which contains all of existence in latent form (cf. Dewitt 1999, 178). From this vacuum-like stage, then, the initial uniformity in the Process Physics model should get its internal pattern formation ‘up and going’ through recursive loops that ‘bootstrap’ themselves into actuality from their otherwise undifferentiated background (cf. Cahill and Klinger 2005, 109).

111 As a possible solution for the foundation problem, it seems desirable to rethink Geoffrey Chew’s bootstrapping procedure (1968), which was later used by early string theory pioneers such as Veneziano (1968) to formulate string theory – see also (Cushing 1990) for a historical overview. It should be noted, however, that this updated version should not be one in which the same foundational problem is invoked all over again by introducing strings, elementary particles, or other a priori entities that have to be bootstrapped into existence.

To get this internal pattern formation going, Process Physics depends on only a few general nondecompositional preconditions, namely: universal interconnectedness; holarchic instead of hierarchic organization; self-reference; and initial lawlessness. In our conventional way of doing physics in a box, we typically rely on well-trusted assumptions that we have become so used to that we tend to forget about their actual status as metaphors, approximations, and idealizations. That is, prior to writing down the physical equations for specifying the behavior of electrons, photons, electromagnetic fields, and all other phenomena in nature, physicists do not think too much about all their pre-theoretical interpretation, acts of decomposition, and the like. Instead, they basically start out by assuming that all these things actually already exist as such (cf. Chown 2000, 25). To be fair, however, physicists most often do not literally assume that things like electrons really exist before they formulate physical equations about them. Instead, they typically like to think of ‘elementary particles’ like the electron as transient fluctuations, manifesting from underlying quantum fields into the classical world. By assuming these fields, however, the same problem arises all over again, thus leading to something quite akin to Wheeler’s ‘tower of turtles’ problem. The physicist’s solution seems to be to just choose a particular level of description, postulate it as fundamental, and then proceed from there onwards. So, in this way, it is still a valid diagnosis to state that physics as we know it typically presumes the existence of what it is trying to describe. This, however, confronts us with a foundational problem: if not by mere postulation, how can we actually be sure that a physical equation does indeed pertain uniquely to its intended referent?112 On top of that, taking into consideration that, just after the ‘Big Bang’, many of these referents had not even come into existence yet, we might ask ourselves what the most elementary initial conditions of these physical equations should be, and why this should be so. Lee Smolin, who was particularly worried by these issues, addressed them as follows:

“We, in our time, are led by our faith in the Newtonian paradigm to two simple questions that no theory based on that paradigm will ever be able to answer: [Firstly:] Why these laws? Why is the universe governed by a particular set of laws? What selected the actual laws from other laws that might have governed the world? [Secondly:] The universe starts off at the Big Bang with a particular set of initial conditions. Why these initial conditions? Once we fix the laws, there are still an infinite number of initial conditions the universe might have begun with. What mechanism selected the actual initial conditions out of the infinite set of possibilities? The Newtonian paradigm cannot even begin to answer these two enormous questions, because the laws and initial conditions are inputs to it. If physics ultimately is formulated within the Newtonian paradigm, these big questions will remain mysteries forever.” (Smolin 2013, 97-98)

112 Among these referents may be found the earlier-mentioned electrons, photons, and electromagnetic fields, but also all so-called elementary particles of the Standard Model of particle physics. To the best of our current knowledge, there doesn’t seem to be any explanation for the physical equations that we use to specify the behavior of these entities.

Process Physics, on the other hand, since it is not rooted in the Newtonian paradigm of doing physics in a box, does not have these problems that seem to be so inevitably associated with the use of physical equations. That is, Process Physics simply does not avail itself of any formal system of law-like mathematical equations. In stark contrast with the math-based models of mainstream physics, Process Physics introduces a non-formal, self-organizing modeling of nature – based on a stochastic iteration routine that reflects the Peircean principle of precedence (1992, 277), rather than law-like physical equations. Thanks to its intrinsic stochastic recursiveness, the Process Physics model eventually manages to evolve many features that we also find in our own natural world: emergent three-dimensionality; emergent relativistic and gravitational effects; non-locality; emergent pseudo-deterministic classical behavior; creative novelty; habit formation; an internal sense of (proto)subjectivity made possible by its mutual informativeness; an intrinsic present moment effect with open-ended evolution;113 and more. Without having discussed how Process Physics actually works, however, it’s of course still way too early to label it the final cure-all for the main problems in today’s physical sciences. So, therefore, let’s get down to the finer details and explore whether Process Physics can really be taken seriously enough as a way of doing physics without a box to have it join forces with our conventional way of doing physics in a box.

113 An intrinsic present moment effect causes the external present moment indicator to become redundant (see Section 2.1.3 for more details on the external present moment indicator).

5.3 Process Physics: going into the details

Process Physics is a neurobiologically inspired way of doing physics without a box that is derived from the Global Colour Model in quantum field theory (see Section 5.3.2). Because it aims to model nature practically from scratch, Process Physics doesn’t rely on pre-theoretical interpretation in the way that mainstream physics does. In mainstream physics, i.e. physics in a box, we first need to postulate how ‘the box’ should be put together and which basic entities are to inhabit it (cf. Sections 3.1.2 and 3.1.1). In this way, however, we’re already presupposing what we’re trying to make sense of. That is, through pre-theoretical interpretation we’re actually filling in beforehand what it is that our physical equations are trying to come to grips with (cf. Chown 2000, 25). In other words, we are prematurely identifying what the referents of our physical equations should be, thereby synonymizing that which is found in observation with what is thought to constitute the system under investigation itself. This, then, amounts to what Whitehead called the undesirable ‘fallacy of misplaced concreteness’, which is, arguably, the most prevalent fallacy in contemporary mainstream physics. However, the map is not the territory and we should not pretend that it is. There is no inventory of landscape elements that can exhaustively sum up all features of the landscape being mapped. Likewise, there is no shortlist of ultimate physical constituents of nature, whether they be elementary particles, strings, knots, or any other such entity, that can exhaustively cover the whole of our natural world. None of these alleged ‘primitives’ can ever be truly fundamental, since their explanation and interpretation necessarily has to lie outside the system being modelled (Cahill 2003, 19), just as the inventory of landscape elements is external to both the map and its landscape.114 After all, even when being engaged in our deepest-probing science – particle physics – there is always:

114 Trying to model nature with the help of such supposedly fundamental physical constituents necessarily has to rely on pre-theoretical interpretation. And in the Cartesian-Newtonian paradigm the first task to be performed during pre-theoretical interpretation is to draw the Galilean cut which slices away any subjective aspects of the phenomena under investigation. However, as has been emphasized throughout this paper, it is a mistake to think that this would successfully divide nature into, on the one hand, ‘entirely physical constituents’ and, on the other hand, our ‘entirely subjective experiences’ of those constituents. This would amount to the undesirable bifurcation of nature, which, once having been put into effect, cannot be undone. That is, ‘nature in the raw’ cannot be cut into bits and pieces and still be kept intact, i.e., in conformity with ‘naked fact.’

“ … a subjective element in the description of atomic events, since the measuring device has been constructed by the observer, and we have to remember that what we observe is not nature in itself but nature exposed to our method of questioning.” (Heisenberg 1958, 58)

Following roughly the same line of reasoning, fellow physicist Bernard d’Espagnat put it like this: “The wider our knowledge expands, the greater grows the part of it which bears on ourselves – on our structures as human beings – at least as much as on some hypothetical ‘external world’ or ‘eternal truth’.” (d’Espagnat 1983, 17)

In the Cartesian-Newtonian paradigm, however, it is all too easily forgotten that our target of interest is not nature in the raw, but a combination of 1) nature as framed by our nature-dissecting intellect, and 2) nature in interaction with our measurement equipment. So, because the aspect of subjectivity will thus always be implicit in our linguistic, analytical and quantitative labeling of what is being submitted to observation, it is, logically speaking, impossible to find physical equations that can pertain to any potential ‘deepest’ level of nature in the raw.115 In fact, most working physicists have chosen to take an instrumentalist position of abstaining from any interpretation of physical equations (reference needed). As such, they have given up on formulating any hypothesis of how and why physical equations should work, and have become focused only on the fact that they work (cf. van Dijk 2011, 78; 2016). In so doing, however, they are often unconsciously falling back onto the straightforward Cartesian-Newtonian idea that there is indeed really an entirely physical ‘real world out there’ for their mathematical equations to pertain to. By taking this easy way out, though, they are in fact already presuming what they are trying to explain, thus basically making it impossible to reach any deeper level of understanding.

115 Please note that the word ‘deepest’ implies a layered hierarchy of lower- and higher-order levels of organization. However, this use of language should be considered metaphorical rather than true to nature; in reality, it makes more sense to think of nature in a holarchic way – with each part being a seamlessly integrated member of the whole in which it participates, and, in turn, with each whole itself being interpretable as such a seamlessly integrated part as well (cf. Koestler 1967). All this is characteristic of self-similar fractal organization.


5.3.1 Foundationless foundations, noisiness, mutual informativeness, and lawlessness

So, in order to avoid these problematic inconveniences of doing physics in a box, Process Physics resorts to the earlier-mentioned idea of ‘foundations without foundation’ (see Section 5.1). In order to achieve such ‘foundationless foundations’, Process Physics starts out not with a number of would-be fundamental constituents, but with a uniform, featureless network of initially homogeneous, dispositional relations. As will be discussed in more detail below (see Section 5.3.3), this relational network, whose inner-system connection strengths are indexed by a connection matrix, is driven by a stochastic iteration routine that gives rise to internal pattern formation. As such, Process Physics basically uses a noisy connectivity matrix to index from scratch how connection patterns relate to each other in an initially unlabeled, featureless universe.116 This noise, then, basically ‘blankets’ the entire network with each cycle of the stochastic iteration routine. Like this, it actually enables the initially uniform, low-level processuality in the Process Physics model to self-organize into mutually informative fore- and background patterns. So, remarkably, the same kind of mutual informativeness that turned out to be such a characteristic aspect of SOC-systems (see Section 4.3.3) and that played such a crucial role in the emergence of higher-order consciousness (see Sections 4.3.1 and 4.3.2), ends up being of the greatest essence to Process Physics as well(!)

116 As already mentioned in Section 3.1.2, there are many different ways to refer to the initially unlabeled natural world. A wide variety of names can be used, all of which have their own context of use and are the result of a specific set of beliefs on how nature works. Although these terms – the Kantian ‘noumenal world’ or ‘nature-in-itself,’ Alfred North Whitehead’s ‘extensive continuum’ (1929/1978, 66-67), John Archibald Wheeler’s ‘pre-geometric quantum foam’ or ‘pre-space,’ David Bohm and Basil Hiley’s ‘holomovement’ and ‘implicate order,’ Bernard d’Espagnat’s ‘veiled reality,’ the ancient Greek ‘apeiron,’ John Stewart Bell’s world of pre-observational ‘beables,’ or other words, like ‘vacuum’, ‘void’, or the Buddhist ‘plenary void’ – can all be used to refer to this primordial stage of nature, no one of them can be crowned as the ultimate candidate.
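The actual update routine is only presented in Section 5.3.3, but as a rough preview, the following minimal Python sketch implements the kind of noisy, self-referential matrix iteration described above. It assumes the iterator B → B − α(B + B⁻¹) + w reported in Cahill and Klinger’s papers, with w the self-referential noise; the network size, α, and noise level are arbitrary illustration values, not the model’s actual parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

def iterate(n_nodes=40, n_steps=200, alpha=0.05, noise_sd=0.01):
    """Noisy, self-referential update of an antisymmetric connection matrix B.

    Sketch of the iterator B -> B - alpha * (B + B^-1) + w reported by
    Cahill and Klinger; w is the 'self-referential noise' that blankets the
    whole network on every cycle. All parameter values are arbitrary.
    """
    # start all but featureless: tiny antisymmetric random connections
    B = rng.normal(0.0, 1e-6, (n_nodes, n_nodes))
    B = B - B.T
    for _ in range(n_steps):
        w = rng.normal(0.0, noise_sd, (n_nodes, n_nodes))
        w = w - w.T                          # keep the noise antisymmetric too
        B = B - alpha * (B + np.linalg.inv(B)) + w
    return B

B = iterate()
strengths = np.abs(B[np.triu_indices_from(B, k=1)])
print(f"connection strengths: min {strengths.min():.3f}, "
      f"median {np.median(strengths):.3f}, max {strengths.max():.3f}")
```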

Although, in Classical Information Theory and in electrical engineering, noise is typically thought of as an irregular, residual distortion signal, in Process Physics it is an expression of the inherent lawlessness of nature. That is, while our long-standing tradition of doing physics in a box dictates that we try to capture natural systems in terms of algorithmically expressed laws of nature, there is a lot in nature that seems unfit to be specified like that. For instance, although complex systems and biological systems may indeed seem regular and predictable when looked at during short enough time spans or in between phase transitions, their behavior eventually cannot be compressed into concise algorithmic expressions capable of faithfully reproducing the empirical data extracted from these systems (cf. Kauffman 2013, 9-22).

In the words of physicist Joe Rosen:117

“In our effort to understand, we first search for order among the reproducible phenomena of nature, and then attempt to formulate laws that fit the collected data and predict new results. Such laws of nature are expressions of order, of simplicity. They condense all existing data, as well as any amount of potential data, into compact expressions. Thus, they are abstractions from the sets of data from which they are derived, and are unifying, descriptive devices for their relevant classes of natural phenomena. … [Then again,] we do not claim that nature is predictable in all its aspects. But any unpredictable aspects it might possess lie outside the domain of science by the definition of science that informs our present investigation.” (J. Rosen 2010, 40 and 36)

So, in Joe Rosen’s view, science, by definition, is not meant to deal with irreproducible and unpredictable phenomena.118 The empirical data extracted from any such phenomena cannot be compressed into a smaller data-reproducing algorithm. Therefore, no empirically adequate physical equation can be put together that may deserve the label of ‘law of nature’. Such phenomena, whether they are too random to find any regularity within their data, or too unique to be reproduced, can therefore be called ‘lawless’. Under this banner we can not only gather complex systems like the mind-brain (which only exhibits reproducibility in a restricted sense), but also nature as a whole. After all, the universe at large, because it cannot be compared to any other and exceeds the reach of any algorithmic compression, is utterly irreproducible:

117 As far as I know, Joe Rosen is of no family relation to theoretical biologist and biophysicist Robert Rosen (1934-1998).
118 Earlier on, Joe Rosen defines science as our attempt to understand the reproducible and predictable aspects of nature as objectively as possible (2010, 30). Like this, by the authority of this definition, he excludes from science all phenomena that are not reproducible and/or predictable. In his view, science is not meant to deal with such phenomena. Although this seems to give us quite a clear and well-defined description of what science is, it does not point out that the reproducibility and predictability of empirical data can often be established only by allowing margins of error. In other words, because of these margins of error, neglect of noisy deviations, application of statistical meta-rules, etc., we may just as well conclude that absolute reproducibility and predictability is in fact never possible; it is always reproducibility and predictability under certain pre-theoretical restrictions.

“When we push matters to their extreme and consider the whole universe, we have clearly and irretrievably lost the last vestige of reproducibility; the universe as a whole is a unique phenomenon and as such is intrinsically irreproducible.” (J. Rosen 2010, 72)

Process ecologist Robert Ulanowicz, then, drove home a similar point in his very engaging book The Third Window: Natural Life beyond Newton and Darwin:

“As most readers are probably aware, Kurt Gödel (1931), working with number theory, demonstrated how any formal, self-consistent, recursive axiomatic system cannot encompass some true propositions. In other words, some truths will always remain outside the ken of the formal system. The suggestion by analogy is that the known system of physical laws is incapable of encompassing all real events. Some events perforce remain outside the realm of law, and we label them chance. Of course, analogy is not proof, but I regard Gödel’s treatise on logic to be so congruent with how we reason about nature that I find it hard to envision how our construct of physical laws can possibly escape the same judgment that Gödel pronounced upon number theory.” (Ulanowicz 2009, 121-122)

Drawing on the work of physicist Walter Elsasser (1969; 1981), Ulanowicz calls such complex chance events ‘aleatoric’ (2009, 119-122). On this aleatoric account, these complex chance events are irreproducible and unpredictable since they involve a unique coincidence of nature’s locally-globally actualizing processes. By Joe Rosen’s definition this makes them ‘lawless’. Moreover, due to the nature-wide abundance of these aleatoric events, particularly in non-equilibrium systems, lawlessness should actually be considered the rule, rather than the exception. In line with Ulanowicz’s above quote on number theory and physical law, Algorithmic Information Theory (AIT; Solomonoff 1960; Chaitin 1987; Kolmogorov 1987), a mathematical discipline interested in the lossless compression of data into data-reproducing algorithms, can also be applied to empirical data:

“ … in AIT the amount of data compression that can be achieved in reproducing empirical data can be taken as a measure of the level of scientific understanding about the associated target system (Solomonoff 1960; Chaitin 2007, 35). The main argument can be stated as follows: the more compression, the better the understanding about the system’s recorded behavior (Chaitin 2007, 227 and 286). In this way, the extent of knowledge about a natural system is thought to peak as the algorithm for reproducing its empirical data approaches its minimum size.” (van Dijk 2011, 77)
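This compression-based notion of understanding can be illustrated with an ordinary general-purpose compressor – a crude stand-in for AIT’s idealized minimal programs: regular, ‘law-like’ data compresses dramatically, whereas random data does not.

```python
import os
import zlib

# Regular, 'law-like' data compresses dramatically; algorithmically random
# data does not (an everyday stand-in for AIT's idealized minimal programs).
regular = b"0123456789" * 10_000
random_bytes = os.urandom(100_000)

for label, data in (("regular", regular), ("random", random_bytes)):
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"{label:8s}: compressed to {100 * ratio:5.1f}% of original size")
```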

Moreover, any numerical data that cannot be compressed into an algorithm, or any data whose algorithmic expression has the same number of digits as (or even more than) the data sequence itself, cannot be algorithmically compressed and is therefore defined as being ‘algorithmically random’. In this context, a random truth is a data string that cannot be encoded into any algorithm since its sequence of digits is entirely unpredictable. Analogously, noise – the equivalent of an algorithmically random truth – can be seen to stand for everything that cannot be caught by a physical equation. In other words, each activity pattern that has no mathematically compressible regularity to it can be seen as a random fact with entirely unpredictable micro-fluctuations.

Altogether, Process Physics purports that the process of nature is routine-driven, or, in other words, habit-based, rather than governed by fixed and eternal laws of nature. Counter to the currently prevailing view of a law-abiding natural world, Process Physics suggests that the universe in its earliest stage came into actuality from an initially undifferentiated and structureless kind of pre-space. Reflecting the fact that all of nature’s activity patterns are ultimately seamlessly interconnected and must therefore be seen to make up a complex, random, and thus fundamentally irreproducible and unpredictable whole, the Process Physics model is driven by a noisy (hence lawless) iterative update routine. In this way, it forms a self-organizing, habit-establishing, and internally meaningful whole in which all affects, and actively makes a difference to, all else, and vice versa. This stands in stark contrast with mainstream physics, which conceives of nature as if it were ultimately no more than a collection of mechanistically interacting physical contents governed by externally imposed laws of nature. But as Smolin (2013, 97-98) already argued in Section 5.2, finding the actual reason for these ‘laws’ will be a hopeless cause if we stubbornly continue to hang on to the Newtonian paradigm. In fact, physical equations that – implicitly, or even explicitly – presume that nature consists of some kind of mechanistically interacting physical contents can never really explain how nature works. Rather, they can only offer pseudo-explanations which are themselves equally in need of an explanation.119

119 Although physical equations are, when combined with their post-theoretical interpretations, often thought to provide an explanation of how nature works, they do not really do so. Just as there can be no neutral algorithm for the choice of physical equations – i.e. for deciding which physical equation best describes a given set of empirical data (Kuhn 2012; in the postscript 1969) – there can also be no finite and fairly balanced procedure for finding the best interpretation of equation-based theories like quantum theory or Einstein’s relativity theories. Therefore, an interpretation merely confirms the context of use within which a given physical equation reached its mature form (cf. van Dijk 2016). This, then, is the reason that no conclusive final answer can be found as to which interpretation should be the best one.

5.3.2 Process Physics and its roots in quantum field theory

Despite all this talk about Process Physics being a nonexophysical-nondecompositional way of doing physics without a box, based on natural routine rather than natural law, we still haven’t discussed its relation with mainstream physics. Despite all the criticism of mainstream physics that we’ve come across so far, we are still badly in need of its exophysical-decompositional methodology; if only to compare any newly proposed physics with our established physical theories and interpretations, for instance, by subjecting it to quantitative analysis. Also, to make sure that Process Physics – or any other new way of doing physics – is compatible with everything that science has hitherto been able to teach us, it makes good sense to see if such a new physics can be derived from our familiar and well-respected way of doing physics in a box. So, for this purpose, let’s take a look at how the Process Physics model can be extracted from quantum field theory. Quantum field theory is the deepest-seated successful theory of present-day mainstream physics. Entirely in line with the post-geometric Cartesian-Newtonian paradigm, it gives an abstract mathematical account of the behavior of ‘elementary particles’ in the background of a fixed spacetime construct. The most explicit and revealing formalism of quantum field theory is the functional integral formalism. This formalism is used in the Global Colour Model of quark physics (Fritzsch et al. 1973) that grew from the seminal work of Dirac and Feynman, and approximates low-energy hadronic behavior from the underlying quark-gluon quantum field theory (for a review paper, see Cahill and Gunner 1998).

However, it turned out that the functional integral formalism isn’t necessarily the ultimate climax in quantum field theory. That is, by introducing a stochastic formalism in which randomness was artificially added, Parisi and Wu (1981) demonstrated an even lower level of description. In their formalism, the added stochastic iterative procedure facilitates the random sampling of all possible system configurations. Originally, this formalism was meant


only to provide a better way of computing properties of particles within quantum field theory. Accordingly, its stochasticity was interpreted not as an actually existing property of nature, but rather as a mere computational aid which permitted the computations to explore various configurations (Cahill 2005b, 9). However, since Parisi and Wu’s method eventually leads to the same results as the functional integral formalism, it’s not at all too outrageous to suppose that their stochastic quantization procedure involves more than just a convenient computational aid. After all, functional integrals can be thought of as arising as ensemble averages of Wiener processes. These are normally associated with Brownian-type motions in which random processes are used in modelling many-body dynamical systems. But instead of considering the randomness as an uninteresting side-effect, it can be argued that random processes actually underlie the emergent hadronic structures of nature, thus reflecting the random facts of Section 5.3.1.

All these considerations inspired Reg Cahill and his main collaborator Chris Klinger to put together Process Physics, which is based on a stripped-down version of the stochastic quantization procedure. That is, by removing all elements from Parisi and Wu’s formalism that were associated with externally postulated, non-emergent qualities (particularly those indicative of a presupposed spacetime metric), Cahill and Klinger could isolate the terms that, by their expectation, would be responsible for emergent pattern formation. In fact, as explained in Section 5.3.3, the remaining terms involve only iterative stochastic dynamics; see also (Cahill 2003, 22) for technical details.

Stripping away these redundant elements enables Process Physics to model the universe as an all-encompassing Prigoginean dissipative structure capable of renewing itself through the activity of Self-Referential Noise. As an added bonus, this Self-Referential Noise will spontaneously establish a regime of Self-Organized Criticality (Cahill & Klinger 2000a). This, in turn, automatically leads to ‘universality’ – i.e. the occurrence of self-similar events at all scales within the system as a whole. This basically means that any small perturbation can trigger events of all possible sizes; from many small ones (that do not seem to lead to any explicit activity other than low-level noisy fluctuations) to very rare giant ones (that can shake up the entire action-potentiation network, thus drastically renewing it in one go). In the famous sand pile systems, for instance, the characteristic events are avalanches of all sizes; from many small cascades to rare large sand slides and even none at all (e.g. when a single grain falls directly into a local ‘pocket of potential’ – see Section 4.3.3). Likewise, in the Process Physics model, each noisy perturbation of the network as a whole can trigger a) the emergence of many small, low-level phenomena (i.e. ‘events,’ ‘actualities’ or ‘nodes’)


with weak or negligible connectivity, b) less frequent medium-size phenomena with more robust connectivity, and c) even much rarer phenomena with proliferating higher-order connectivity. In this way, universality (or, in other words, the occurrence of system-characteristic phenomena at all scales) can cause the system’s low-grade starting level to eventually become ‘hidden from plain view’ as the higher-order phenomena cascade all across the emergent action-potentiation network. Like this, the Self-Organized Criticality in the Process Physics model is what actually facilitates the possibility of ‘foundations without foundation’ – one of the earlier-listed requirements for doing physics without a box (see Section 5.1).

5.3.3 Process Physics and its stochastic, iterative update routine

To take into account that we are trying to model the in itself unlabeled natural world in a purely relational way, Process Physics sets up an initially uniform and structureless network of what may be called dispositional relations. In this dispositional network of relationships, which is indexed with the help of a ‘connectivity matrix’ (see Table 5-1), the start-up nodes 𝑖 and 𝑗 are held to have (1) no internal connectivity worthy of mention, and (2) no explicit actuality relative to the network as a whole. In order to meet these preconditions, (1) anti-symmetry 𝐵𝑖𝑗 = −𝐵𝑗𝑖 has to be applicable within the network matrix so that self-connections 𝐵𝑖𝑖 will always be zero, and (2) all nodes within the system have to start off with close-to-zero connection strength to model the absence of initial order (Cahill and Klinger 2005, 109). As a result, these start-up nodes 𝑖 and 𝑗 can be seen as mere indexical labeling for something that is not really present (yet). Moreover, this indexing of the nodes within the connectivity matrix does not relate these nodes to anything external like a reference frame, coordinate axes, or timeline. In fact, the iterative indexing activity, as it is engaged in ‘weaving’ an intricate network of connection strengths, relates everything within the network to everything else, thus giving rise to a sense of where everything is with respect to everything else without the need of any external number-tagging.
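As a minimal illustration (with purely illustrative values), the following sketch builds such a start-up matrix and checks both preconditions:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 6                                  # matching the six nodes of Table 5-1
W = rng.normal(0.0, 1e-6, (n, n))      # close-to-zero strengths: no initial order
B = (W - W.T) / 2.0                    # impose anti-symmetry: B_ij = -B_ji

assert np.allclose(B, -B.T)            # precondition (1): anti-symmetric, ...
assert np.allclose(np.diag(B), 0.0)    # ... so all self-connections B_ii are zero
```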


Table 5-1: The indexical relation matrix – When nodes 𝑖 and 𝑗 are connected, they will be indexed as having a non-zero connection strength 𝐵𝑖𝑗. Anti-symmetry (in the original table indicated by matching background colors of the matrix cells) guarantees that the strength of any self-connection (𝐵𝑖𝑖) will always be zero. Positive or negative signs of the actual 𝐵𝑖𝑗 values depend on the direction of the arrows between nodes 𝑖 and 𝑗 (see Fig. 5-1). Rows are indexed by 𝑖, columns by 𝑗.

node  1              2              3              4              5              6
1     𝟎              𝑩𝟏𝟐 (= −𝑩𝟐𝟏)   𝑩𝟏𝟑 (= −𝑩𝟑𝟏)   𝑩𝟏𝟒 (= −𝑩𝟒𝟏)   𝑩𝟏𝟓 (= −𝑩𝟓𝟏)   𝑩𝟏𝟔 (= −𝑩𝟔𝟏)
2     𝑩𝟐𝟏 (= −𝑩𝟏𝟐)   𝟎              𝑩𝟐𝟑 (= −𝑩𝟑𝟐)   𝑩𝟐𝟒 (= −𝑩𝟒𝟐)   𝑩𝟐𝟓 (= −𝑩𝟓𝟐)   𝑩𝟐𝟔 (= −𝑩𝟔𝟐)
3     𝑩𝟑𝟏 (= −𝑩𝟏𝟑)   𝑩𝟑𝟐 (= −𝑩𝟐𝟑)   𝟎              𝑩𝟑𝟒 (= −𝑩𝟒𝟑)   𝑩𝟑𝟓 (= −𝑩𝟓𝟑)   𝑩𝟑𝟔 (= −𝑩𝟔𝟑)
4     𝑩𝟒𝟏 (= −𝑩𝟏𝟒)   𝑩𝟒𝟐 (= −𝑩𝟐𝟒)   𝑩𝟒𝟑 (= −𝑩𝟑𝟒)   𝟎              𝑩𝟒𝟓 (= −𝑩𝟓𝟒)   𝑩𝟒𝟔 (= −𝑩𝟔𝟒)
5     𝑩𝟓𝟏 (= −𝑩𝟏𝟓)   𝑩𝟓𝟐 (= −𝑩𝟐𝟓)   𝑩𝟓𝟑 (= −𝑩𝟑𝟓)   𝑩𝟓𝟒 (= −𝑩𝟒𝟓)   𝟎              𝑩𝟓𝟔 (= −𝑩𝟔𝟓)
6     𝑩𝟔𝟏 (= −𝑩𝟏𝟔)   𝑩𝟔𝟐 (= −𝑩𝟐𝟔)   𝑩𝟔𝟑 (= −𝑩𝟑𝟔)   𝑩𝟔𝟒 (= −𝑩𝟒𝟔)   𝑩𝟔𝟓 (= −𝑩𝟓𝟔)   𝟎

In line with Wheeler’s ‘law without law’ (1980), the earlier-mentioned ‘foundations without foundation,’ and the fact that we are trying to model our initially unlabeled natural universe, we may refer to this indexical labeling as ‘labeling without labeling.’ Accordingly, in the Process Physics model, these ‘(un)labeled’ start-up nodes (or, in other words, ‘events’, ‘sub-actual start-up nodes’, ‘sub-actualities’ or ‘pseudo-objects’) are used as temporary scaffolding to enable connectivity among them (cf. Cahill et al. 2000).120 So, although they facilitate patterns of relationship among them, the nodes themselves remain ‘pseudo-actualities.’ Once the network starts to evolve any higher-order activity patterns, the level of start-up nodes gets hidden from plain view by way of Self-Organized Criticality (cf. Section 4.3.3).

120

Think, for instance, of a temporary wooden support on which to rest the building bricks when constructing an archway. Although its semicircular shape indicates where the bricks should be placed, once the arch is completed the support is no longer needed and can be conveniently removed.


Figure 5-1: Schematic representation of interconnecting nodes – Connections between nodes 𝑖 and 𝑗 with arrows indicating non-zero connection strengths 𝐵𝑖𝑗. The direction of the arrows determines the sign of the connection strengths; when nodes are thought to be (as yet) unconnected, the arrows are absent, indicating a connection strength 𝐵𝑖𝑗 = 0. Connection strengths are indicated by ‘darkness’ (with black arrows denoting high-strength connections and lighter-colored arrows implying weaker connectivity).

In matrix notation, the connectivity matrix can be written as an 𝑖 × 𝑗 matrix (i.e. a matrix with 𝑖 rows and 𝑗 columns):

$$B_{ij} = \begin{bmatrix} b_{11} & b_{12} & b_{13} & \cdots & b_{1j} \\ b_{21} & b_{22} & b_{23} & \cdots & b_{2j} \\ b_{31} & b_{32} & b_{33} & \cdots & b_{3j} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{i1} & b_{i2} & b_{i3} & \cdots & b_{ij} \end{bmatrix}, \quad \text{with } i, j = 1, 2, 3, \ldots, 2M \text{ and } M \to \infty.$$

Anti-symmetry then gives:

$$B_{ij} = \begin{bmatrix} 0 & -b_{21} & -b_{31} & \cdots & -b_{i1} \\ -b_{12} & 0 & -b_{32} & \cdots & -b_{i2} \\ -b_{13} & -b_{23} & 0 & \cdots & -b_{i3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -b_{1j} & -b_{2j} & -b_{3j} & \cdots & 0 \end{bmatrix}$$

Or:

$$B_{ij} = \begin{bmatrix} 0 & b_{12} & b_{13} & \cdots & b_{1j} \\ -b_{12} & 0 & b_{23} & \cdots & b_{2j} \\ -b_{13} & -b_{23} & 0 & \cdots & b_{3j} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -b_{1j} & -b_{2j} & -b_{3j} & \cdots & 0 \end{bmatrix} = \begin{bmatrix} 0 & -b_{21} & -b_{31} & \cdots & -b_{i1} \\ b_{21} & 0 & -b_{32} & \cdots & -b_{i2} \\ b_{31} & b_{32} & 0 & \cdots & -b_{i3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ b_{i1} & b_{i2} & b_{i3} & \cdots & 0 \end{bmatrix}$$

Process Physics uses its connectivity matrix to model the gradually evolving connection strengths of emergent activity patterns within the initially uniform and orderless universe. In order to meet Wheeler’s requirement of ‘law without law’ (1980; see also Section 5.1), an iterative update routine is used to enrich the network of connection strengths with system-wide connectivity combined with a layer of system-renewing noise. In this way, as the system continuously keeps going through its stochastic iteration cycles, with each such loop being indexed by the relation matrix, slowly but surely, higher-order patterns of connectivity will emerge.121 The iteration routine in question is in fact derived from the bilocal field representation that is used in Quantum Electrodynamics – hence the use of the symbol 𝐵 in Eq. 5.1, as it refers to ‘bilocal.’ By stripping away all terms that refer to any presupposed geometrical aspects, the following update routine is achieved:

$$B_{ij} \to B_{ij} - \alpha(B + B^{-1})_{ij} + w_{ij}, \quad \text{with } i, j = 1, 2, 3, \ldots, 2M \text{ and } M \to \infty. \tag{5.1}$$

Cahill has summarized his stochastic iteration routine in the following way: “The iteration system has the form 𝐵𝑖𝑗 → 𝐵𝑖𝑗 − 𝛼(𝐵 + 𝐵⁻¹)𝑖𝑗 + 𝑤𝑖𝑗. Here 𝐵𝑖𝑗 is a square array of real numbers giving some relational link between nodes 𝑖 and 𝑗. Here 𝐵⁻¹ is the inverse of this array: to compute this all the values 𝐵𝑖𝑗 are needed: in this sense the system is totally self-referential. As well at each iteration step, in which the current values of 𝐵𝑖𝑗 are replaced by the values computed on the right-hand side, the random numbers 𝑤𝑖𝑗 are included: this enables the iteration process to model all aspects of time. These random numbers are called self-referential noise (SRN) as they limit the precision of the self-referencing. Without the SRN the system is deterministic and reversible, and loses all the experiential properties of time.” (Cahill 2005b, 12)

121

The noise-driven update routine has an effect that is quite similar to that of neuromodulation (which enables brain plasticity in the initially unconditioned, newly developing fetal brain). Analogous to Self-Referential Noise in the Process Physics model, neural noise and reentry play an indispensable role in neuromodulation, neuroplasticity, the optimization of motor control, and the like (cf. Sections 4.3 and 4.3.1). In the case of the Process Physics model, however, there is no explicit, pre-developed substructure like a prewired brain.


To recap, the first term 𝐵𝑖𝑗 embodies the network’s entire acquired past (up to the immediately preceding iteration), as it holds the iteratively built-up connection strengths among connection pairs 𝑖 and 𝑗. As such, it may be called the precedence term – entirely in line with the Peircean ‘principle of precedence’ (cf. Peirce 1992, 277; Smolin 2013, 47). The second term, −𝛼(𝐵 + 𝐵⁻¹), which may be referred to as the cross-linkage or binding term, facilitates universal interconnectedness by hooking up the matrix 𝐵 with its inverse counterpart 𝐵⁻¹. Like this, something close to a holarchic feedback loop becomes active within the system. Next to that, this setup requires anti-symmetry 𝐵𝑖𝑗 = −𝐵𝑗𝑖. This is needed to ensure that self-connections 𝐵𝑖𝑖 will always be zero, in conformity with the above requirement that there is no internal subnetwork connectivity to the start-up nodes themselves. Furthermore, the parameter 𝛼 within this second term is comparable to a tuning parameter in sand and rice pile models; its precise value may indeed affect the self-organizing dynamics of the system – for instance, in sand pile systems it determines the narrow region of near-critical angles in which avalanches will tumble down the slope – but the parameter itself is non-critical, since it can vary widely without frustrating the occurrence of Self-Organized Criticality.122 Last but not least, in every new iteration and for each ‘connection pair’ 𝑖 and 𝑗, the noise term 𝑤𝑖𝑗 = −𝑤𝑗𝑖 is randomly chosen from a probability distribution relevant to quantum field theory (see Cahill and Klinger 2000b for more detailed information).

122

In fact, in rice and sand pile systems such a tuning parameter can ‘gather under its umbrella’ the effects of various phenomena: 1) stickiness between grains; 2) the average mass of grains; 3) the precise magnitude of the gravitational constant (which may vary with the latitude at which the experiment is performed); 4) the average downward velocity of the grains being dropped; 5) possible wind shear … and so on. Accordingly, such a tuning parameter may influence the self-organizing dynamics of the sand or rice pile system in question. In particular, it will set the angle (or, better put, the small range of near-critical angles) at which avalanches will be able to tumble down the slope. The occurrence of Self-Organized Criticality itself, however, will remain unaffected. By the same token, all such details can likewise be covered by one generic parameter 𝛼 in the Process Physics model. In both cases, the precise features of all contributing micro-factors and subnetwork activities do not matter too much, just as long as Self-Organized Criticality is achieved. And just as avalanches can occur at a wide range of different angles in rice and sand pile models, many different values of the tuning parameter 𝛼 may be used in the Process Physics model without affecting the ongoing self-organized coming-into-actuality of ‘foreground cells’ of activity patterns (i.e. connection nodes) from a background of activity patterns with lower-order connectivity.
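Since Eq. 5.1, together with the anti-symmetry and noise requirements, fully specifies one update cycle, the routine can be sketched in a few lines of code. The following toy run is only illustrative: the network size, iteration count, value of 𝛼, and the Gaussian stand-in for the QFT-derived noise distribution are assumptions made for demonstration purposes, not the settings of Cahill and Klinger’s actual simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

M = 24                      # the network has 2M nodes; an even dimension keeps
N = 2 * M                   # the antisymmetric matrix generically invertible
alpha = 0.05                # illustrative, non-critical tuning parameter

def antisymmetric_noise(scale):
    """Draw a fresh antisymmetric array (w_ij = -w_ji, hence w_ii = 0)."""
    w = rng.normal(0.0, scale, size=(N, N))
    return (w - w.T) / 2.0

# Start-up: close-to-zero connection strengths, modeling the absence of order
B = antisymmetric_noise(1e-3)

for _ in range(1000):
    # Eq. 5.1: precedence term B, binding term -alpha*(B + B^-1), noise term w
    B = B - alpha * (B + np.linalg.inv(B)) + antisymmetric_noise(1e-3)

# Rare large-valued |B_ij| mark emergent 'islands' of elevated connectivity
print(np.abs(B).max(), np.median(np.abs(B)))
```

Note how the inverse couples every entry to every other entry: computing 𝐵⁻¹ requires all current values 𝐵𝑖𝑗, which is precisely the sense in which the iteration is ‘totally self-referential.’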


Fig. 5-2: Artistic visualization of the stochastic iteration routine – a) The noise-driven iteration routine of Eq. 5.1 can be subdivided into a precedence term, a binding term, and a noise term; b) At a coarse-grained level, the 𝐵𝑖𝑗 form a smooth and homogeneous ‘indexing landscape’ – this in line with the absence of connectivity. However, when zooming in onto a finer-grained level, the indexing landscape gives a much rougher and spikier impression, characteristic of randomness; c) The precedence, binding, and noise terms are visualized as ‘indexing landscapes’, thus forming a map of connection strengths.123 Going through the iterations again and again will eventually lead to the formation of higher-order connectivity in a small region of the total indexing landscape. (original images: © Paul Bourke 1997)

The connection strengths between all these ‘connection pairs’ 𝑖 and 𝑗 are not themselves visible features, so the above visualizations can only serve as instructive metaphors and should not be thought of as images of nature itself. In order to avoid the fallacy of misplaced concreteness, after all, we should realize that the ‘connectivity landscape’ depicted here pertains to an indexical mapping of connection strengths. Since the connection strengths are held to be initially practically zero – reflecting the absence of connectivity – there is no initial ‘meaning’ to the indexical network. In fact, meaning only gets established later on, as the network starts to give shape to itself through its mutually informative processuality. That is, the indexicality, or, in other words, the relational mapping of connection strengths, offers a means to ‘inform’ each

123

The images depicted here are white-noise frequency spectra (Bourke 1997), which are used for educational and aesthetic reasons only.


local ‘island of connectivity’124 about how it relates to everything else within the network. In contrast with classical information theory, this does not take place via the transmission and reception of symbolically expressed numerical data, but through mutual informativeness – a.k.a. process-informativeness or process-information (Corbeil 2006; van Dijk 2011 and 2016). That is, all events125 actively make a difference to each other through their mutualistic, diaphoric processuality,126 so that the network as a whole gradually becomes internally meaningful and habit-establishing. The Process Physics model gives rise to more than just internal meaningfulness and habit formation, though. As will be shown in the next section, it also facilitates the emergence of three-dimensionality and enables the network to become organized in a quantum-foam-like way.

5.3.4 From pre-geometry to the emergence of three-dimensionality

In the Process Physics model, or, to be more specific, in the connectivity matrix which facilitates the indexical mapping of connection strengths, there are no a priori elementary constituents whose behaviors are held to be governed by any pre-available ‘laws of nature’. Instead, there’s only a lawless, initially nondescript background of iterative, noise-driven activity patterns. Rare though they are, more stable and relatively isolated branching structures will start to emerge from the noisy background activity when the system goes through enough update iterations. This is because the noise term 𝑤𝑖𝑗 not only enriches the system with random novelty, but also gives rise to rare large-valued connection strengths 𝐵𝑖𝑗. In comparison to the smaller-valued connection strengths 𝐵𝑖𝑗 in their background vicinity, these large-valued 𝐵𝑖𝑗 can more easily persist under the regime of the system-renewing iterations (Cahill and Klinger 2000b).

124

When talking about ‘islands of connectivity’, the terms ‘branching structures’, ‘connected nodes’, ‘(sub)actualities’, ‘events’, etc., can all be used interchangeably. For the sake of clarity, the term ‘monads’ or ‘pseudo-objects’ is used to refer to the start-up level of the connectivity network (or a subnetwork). At this start-up level, internal connectivity is typically thought of as non-explicit, because, due to universality (i.e. scale-free phenomena), any higher-order structure can be used interchangeably as the low-level start-up activity of yet another, higher level of organization. In any case, all these terms are ultimately just educational figures of speech. That is, nature in itself is ultimately unlabeled, which means that what happens in nature can never be fully synonymized with our linguistic tags. The connectivity patterns themselves, however, become meaningful to each other, despite being unsuited to be named or, in other words, to be given external meaning in any unambiguous way.

125

Events: a.k.a. ‘actualities,’ ‘nodes,’ or Whiteheadian actual occasions. Using less short and snappy language, they can also be described as ‘local-global (i.e. holarchic) centers of connectivity.’

126

‘Diaphoric’ means ‘difference-making.’


In the long run, those specific linkages that are strong and durable enough to survive the system’s noisy iterations will then hook up to form tree-graph-shaped connectivity patterns (see Fig. 5-3). This is because short-distance connections between neighboring monads are by far the most probable ones. As a logical consequence, the majority of those rare large-valued connections will thus be established between nearest neighbors, while an already significantly smaller portion links to second-nearest neighbors, and an even tinier fraction manages to attach to more distant neighbors (see the right-hand column of Table 5-2).

Fig. 5-3: Tree-graphs of large-valued connections 𝑩𝒊𝒋 and their connection distances 𝑫𝒙

Table 5-2: The amount of connections arranged by distance and connection strength

                    low connection          medium connection       high connection
                    strength                strength                strength
short-distance      overwhelming majority   few                     scarce
medium-distance     few                     scarce                  very, very scarce
long-distance       scarce                  very, very scarce       extremely scarce

As can be shown through numerical analysis of the indexed connection strengths, these tree-graph-shaped branching structures of elevated connectivity become organized in such a way that they have a natural embedding within a 3-dimensional hypersphere. To be more specific, their topology approximates the geometry of a 3-dimensional hypersphere 𝑆³ (cf. Fig. 5-4). For this numerical analysis to be performed, we first need to filter out which nodes are participating in branching structures of elevated connectivity. This can be done by introducing a lower threshold of connection strength and then sieving out only those nodes that have larger connection strengths.
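As a rough sketch of this sieving step – assuming an evolved connectivity matrix B, such as the one produced by the Eq. 5.1 sketch above, and an arbitrary illustrative threshold – the ‘smallest number of links’ distance measure described in the quote further below can be implemented with a simple breadth-first search:

```python
import numpy as np
from collections import deque

def distance_distribution(B, threshold):
    """Sieve out the strongly connected nodes and, from one reference node,
    count D_k: the number of nodes at link-distance k."""
    A = np.abs(B) > threshold              # adjacency of the elevated links
    np.fill_diagonal(A, False)
    seeds = np.flatnonzero(A.any(axis=1))  # nodes participating in any island
    if seeds.size == 0:
        return []
    dist = {seeds[0]: 0}
    queue = deque([seeds[0]])
    while queue:                           # breadth-first search: every step
        u = queue.popleft()                # outward adds one link of distance
        for v in np.flatnonzero(A[u]):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    L = max(dist.values())
    return [sum(1 for d in dist.values() if d == k) for k in range(L + 1)]
```

If the resulting counts initially grow roughly as 𝑘², that is the signature of three-dimensional embeddability discussed in the remainder of this section.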


Fig. 5-4: Emergent 3D-embeddability with ‘islands’ of strong connectivity – With increasing iterations (number of iterations indicated by 𝑘 ranging from 0 to 145), the connectivity nodes take on a distribution in which they are embeddable within a hyperspherical geometry 𝑆³. To allow plotting, the fourth coordinate has been suppressed. The sphere-within-sphere embedding can best be observed in Fig. 5-4f. Figure and text edited from: Fig. 7.53 in (Klinger 2010, 281).

After having applied this lower threshold, we can then see isolated islands of elevated connectivity in the indexical matrix. These are the ones that may be depicted as tree-graphs of connected monads (see Fig. 5-3). Although the tree-graphs are made up from monads 𝑖, 𝑗, 𝑘, 𝑙, …, etc., whose respective ‘starting positions’ are given by the row and column numbers in the indexical connectivity matrix, the self-organizing regime of noisy update iterations effectively ‘neutralizes’ this pre-imposed hierarchy. That is to say, the initial ordering of the 𝐵𝑖𝑗 is irrelevant: just as people’s home addresses do not tell us anything about which members of the community are closest to them, the cell-coordinates of the connection strengths 𝐵𝑖𝑗 within the matrix, as well as the row and column numbers of individual 𝐵𝑖𝑗 within the tree-graphs, are of no actual significance, just as long as the connections with their neighbors, their neighbors’ neighbors, etc., are being catalogued by the iteratively built-up connectivity index:

“Consider the connectivity from the point of view of one monad [also nameable as ‘event,’ ‘actuality’ or ‘node of connectivity’], call it monad 𝑖. Monad 𝑖 is connected via these large 𝐵𝑖𝑗


to a number of other monads, and the whole set of connected monads forms a tree-graph relationship. This is because the large links are very improbable, and a tree-graph relationship is much more probable than a similar graph involving the same monads but with additional links. The set of all large valued 𝐵𝑖𝑗 then form tree-graphs disconnected from one-another; [see Fig. 5-3]. In any one tree-graph the simplest ‘distance’ measure for any two nodes within a graph is the smallest number of links connecting them. Indeed this distance measure arises naturally using matrix multiplications when the connectivity of a graph is encoded in a connectivity or adjacency matrix. Let 𝐷₁, 𝐷₂, …, 𝐷𝐿 be the number of nodes of distance 1, 2, ..., 𝐿 from node 𝑖 (define 𝐷₀ = 1 for convenience), where 𝐿 is the largest distance from 𝑖 in a particular tree-graph, and let 𝑁 be the total number of nodes in the tree. Then we have the constraint $\sum_{k=0}^{L} D_k = N$.” (Cahill and Klinger 2000a)

With all this in place, we can now start to count the number 𝒩(𝐷, 𝑁) of different 𝑁-node trees that, seen from the perspective of reference node 𝑖, have the same distance distribution {𝐷𝑘}. With all possible linkage patterns included, this gives:

$$\mathcal{N}(D, N) = \frac{(M-1)!}{(M-N-2)!} \, \frac{D_1^{D_2} D_2^{D_3} \cdots D_{L-1}^{D_L}}{D_1! \, D_2! \cdots D_L!} \tag{5.2}$$

After having specified this number 𝒩(𝐷, 𝑁), Cahill and Klinger proceed:

“Here $D_k^{D_{k+1}}$ is the number of different possible linkage patterns between level $k$ and level $k+1$, and $(M-1)!/(M-N-2)!$ is the number of different possible choices for the monads, with $i$ fixed. The denominator accounts for those permutations which have already been accounted for by the $D_k^{D_{k+1}}$ factors. We compute the most likely tree-graph structure by maximising $\ln \mathcal{N}(D, N) + \mu\left(\sum_{k=0}^{L} D_k - N\right)$, where $\mu$ is a Lagrange multiplier for the constraint. Using Stirling’s approximation for $D_k!$ we obtain

$$D_{k+1} = D_k \ln\frac{D_k}{D_{k-1}} - \mu D_k + \tfrac{1}{2}, \tag{5.3}$$

which can be solved numerically. [Fig. 5-5] shows a typical result obtained by starting [equation (5.3)] with $D_1 = 2$, $D_2 = 5$ and $\mu = 0.9$, and giving $L = 16$, $N = 253$. Also shown is an approximate analytic solution $D_k \sim \sin^2(\pi k / L)$ found by Nagels. These results imply that


the most likely tree-graph structure to which a monad can belong has a distance distribution $\{D_k\}$ which indicates that the tree-graph is embeddable in a 3-dimensional hypersphere, $S^3$. Most importantly monad $i$ has a 3-dimensional connectivity to its neighbours, since $D_k \sim k^2$ for small $(\pi k / L)$. We call these tree-graph B-sets gebits [because they seem to act as bits of emergent geometry – geometry bits].” (Cahill and Klinger 2000a)
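The quoted recurrence is easy to iterate numerically. One caveat: with the sign of the multiplier term as printed and 𝜇 = 0.9, the seeds 𝐷₁ = 2, 𝐷₂ = 5 collapse immediately; taking the multiplier term with the opposite sign (equivalently, 𝜇 = −0.9 in the printed convention) reproduces the quoted 𝐿 = 16 and 𝑁 ≈ 253, so that reading is assumed in this sketch:

```python
import math

mu = 0.9
D = [1.0, 2.0, 5.0]      # D_0 = 1 by convention; D_1, D_2 from the quote

while D[-1] >= 0.1:      # iterate until the outermost shell is nearly empty
    D.append(D[-1] * math.log(D[-1] / D[-2]) + mu * D[-1] + 0.5)

L = len(D) - 1           # largest distance from the reference node: 16
print(L, round(sum(D)))  # prints 16 and ~258; the quote reports N = 253
                         # (stopping/rounding conventions differ slightly)
A = max(D)               # compare with Nagels' analytic form
for k, d in enumerate(D):
    print(k, round(d, 1), round(A * math.sin(math.pi * k / L) ** 2, 1))
```

The 𝐷𝑘 values rise roughly as 𝑘², peak near 𝑘 = 𝐿/2, and fall off again – the sin²(𝜋𝑘/𝐿) profile that, as the quote concludes, signals embeddability in 𝑆³.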

In more informal language, we may say that the connectivity nodes of the tree-graph-like branching structures have practically the same distance distribution as uniformly arranged points in a three-dimensional space: “The trees branch randomly, but if you take one pseudo-object [a.k.a. connectivity node] and count its nearest neighbours in the tree, second nearest neighbours, and so on, the numbers go up in proportion to the square of the number of steps away. This is exactly what you would get for points arranged uniformly throughout three-dimensional space.” (Chown 2000, 27)

Fig. 5-5: [𝐷𝑘–𝑘]-diagram: “Data points show numerical solution of $D_{k+1} = D_k \ln(D_k / D_{k-1}) - \mu D_k + \tfrac{1}{2}$ for distance distribution $D_k$ for a most probable tree-graph with $L = 16$. Curve shows fit of approximate analytic form $D_k \sim \sin^2(\pi k / L)$ to numerical solution, indicating weak but natural embeddability in a hypersphere $S^3$.” (Cahill et al. 2000, 193-194)

Emergent branching structures that manage to persist under ongoing iterations can hook up with each other to form yet another, higher-order level of branching structures. In this way, the emerging 3-dimensionality starts to spread across wider and wider ranges of the network, thereby giving rise to a fractal, quantum-foam-like web of connectivity. Consistent with the


universality (i.e. scale-free phenomena) that is so characteristic of self-organized critical systems, 3D-embeddable nested subnetworks of all sizes can be found to occur in this quantum-foam-like network (see Fig. 5-6 for an artistic impression of such a quantum foam network).

Fig. 5-6: Fractal (self-similar) dynamical 3-space – Artistic impression of fractal (i.e. holarchic) dynamical 3-space, a.k.a. three-dimensional process-space. A comparison with Fig. 5-1 and Table 5-1 can be made to better understand the linkage with the iteration routine (5.1). Original image on the lower right hand side (the other images have been edited); retrieved from (Cahill et al. 2000).

From further analytical and numerical study, it can be concluded that this network of fractal, cell-like bits of geometry behaves as a Prigoginean dissipative structure. That is, the ‘cells’ arise from a noisy, initially uniform background, much like Bénard convection cells, or the emergent cellular reaction patterns in certain Gray-Scott reaction-diffusion systems (see Fig. 5-7). Through the combined effect of the binding and noise terms in Eq. 5.1, the network will act as an order-disorder system in which these fractal (i.e. holarchic) cell-like process-structures come and go as they are engaged in slower- and faster-going growth-decay cycles – depending on the local-global context and their internal reactivity (cf. Cahill and Klinger 2000a and 2000b).


Fig. 5-7: Gray-Scott reaction-diffusion model – Subsequent steps in a reaction-diffusion model as it evolves from low-level random noise. The reaction-diffusion system consists of two arbitrary chemical species 𝑈 and 𝑉. The variables 𝑢 and 𝑣 represent their concentrations at each point in space. At the start of the simulation, the concentration of these two chemical species varies randomly. As the simulation progresses, the chemical species react with each other and diffuse through the available medium, thus yielding dynamically varying concentration levels and the therewith associated pattern formation at any given location. Depending on the parameters used, the two chemical reactions take on different rates at each point within the medium. The reactions involved are: 𝑈 + 2𝑉 → 3𝑉 and 𝑉 → 𝑃, with 𝑃 being an inert product that does not participate in any further chemical reactions. For the sake of simplicity it is assumed that there is an abundant supply of reagents, so that the reactions only occur in this one direction and not in the opposite one. Since 𝑉 acts as a reactive chemical as well as a reaction product, it can be seen as a catalyst for its own reaction. Hence, 𝑉 takes part in an autocatalytic cycle. The simulation of the reaction and diffusion processes is based on the partial differential equations

$$\frac{\partial u}{\partial t} = D_u \nabla^2 u - uv^2 + F(1 - u) \quad \text{and} \quad \frac{\partial v}{\partial t} = D_v \nabla^2 v + uv^2 - (F + k)v,$$

with 𝑢 and 𝑣 as the location- and time-dependent concentrations. The first part of the first formula, 𝐷𝑢∇²𝑢, is the diffusion term with parameter 𝐷𝑢 = 2.00 ∙ 10⁻⁵; the second part, −𝑢𝑣², is the reaction rate; and the third part, 𝐹(1 − 𝑢), is the replenishment term (with feed rate 𝐹 = 0.0600), which is needed to replenish the chemical species 𝑈 because it gets used up in the reaction. For the second partial differential equation, the parameters are: 𝐷𝑣 = 1.00 ∙ 10⁻⁵, feed rate 𝐹 = 0.0600, and diminishment term 𝑘 = 0.0620. The simulation was originally performed with the XMorphia simulation software (authored by Roy Williams at Caltech), with partial differential equations as discussed in (Pearson 1993). However, samples a) to f) are taken from a renewed simulation run by Robert Munafo (see http://mrob.com/pub/comp/xmorphia/index.html for more details).
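For readers who want to reproduce this kind of cellular pattern formation, the following minimal sketch integrates the caption’s equations with a simple forward-Euler scheme. The grid size, grid spacing, time step and seeding are illustrative assumptions (chosen to keep the explicit scheme stable), not the settings of the original XMorphia or Munafo runs:

```python
import numpy as np

rng = np.random.default_rng(1)

n = 256
Du, Dv, F, k = 2.00e-5, 1.00e-5, 0.0600, 0.0620   # parameters from the caption
h, dt = 0.01, 1.0                                 # keeps Du*dt/h**2 < 0.25

u = np.ones((n, n)) + 0.02 * rng.random((n, n))   # random low-level noise...
v = np.zeros((n, n)) + 0.02 * rng.random((n, n))
v[n//2-10:n//2+10, n//2-10:n//2+10] += 0.25       # ...plus a reactive patch

def lap(a):
    """Five-point Laplacian on a periodic grid."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / h**2

for _ in range(20_000):                           # spots emerge gradually
    uvv = u * v * v
    u += dt * (Du * lap(u) - uvv + F * (1.0 - u))
    v += dt * (Dv * lap(v) + uvv - (F + k) * v)
```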


As the network continues to go through its update iterations, even higher-order structures can arise from this initially low-level process. The model’s network of higher-order process-structures will thus start to exhibit all kinds of characteristics that can also be found to occur in nature. Among the signature features of the Process Physics model we find, for instance, nonlocality, emergent quantum behavior, a quasi-classical world, gravitational and relativistic effects, inertia, universal expansion, black holes and event horizons, and also a present moment effect inherent to the system itself (cf. Cahill 2003, 11-12).

5.3.5 Process Physics, intrinsic subjectivity and an inherent present moment effect

Whereas mainstream physics, with its dependence on the geometrical timeline, does not allow for a unique and exclusive Now (cf. Section 2.1.3), the Process Physics model has an inherent present moment effect:

“The introduction of process and the stochasticity of self-referential noise not only provides the spontaneous and creative generation of spatial structure, it also captures what may be termed the ‘present moment effect’, and thus the essence of empirical or experiential time. Successive iterations … generate a history … which might be recorded and replayed precisely, and in this there is a clear ‘arrow of time’ because, unlike a recording (which can be played forward or backward arbitrarily to find and examine specific instances), one cannot simply run the system in reverse to recover an earlier state since the mapping is unidirectional – the presence of the noise term precludes an inverse mapping. However, while the history may be broadly inferred from the presence of persistent relational forms so that there is a sense of a natural partial memory, the ‘present moment’ is entirely contingent both on the specific detail of that history and on the SRN [i.e. Self-Referential Noise] so that the future awaits creation.” (Klinger 2016, 170)

Although the iterations of the update routine are not literally synonymous with the phenomenon of time, they facilitate the ongoing renewal of the system’s connectivity patterns and are thus constitutive of what Whitehead (1929/1978, 128 and 222) calls ‘the creative advance into novelty.’ Like this, each iteration can be thought of as bringing on a new present moment. Moreover, being engaged in those cyclic update iterations makes a meaningful


difference to the network’s islands of elevated connectivity. That is, by slightly modulating the connectivity landscape with each turn, the update iterations affect the connection strength, spread, durability, and reactivity of the network’s ongoing patterns of relationship:

“Numerical studies show that the outcome from the iterations is that the gebits [i.e. the 3D-embeddable branching structures with elevated connectivity strength] are seen to interconnect by forming new links between reactive monads [i.e. reactive start-up nodes] and to do so much more often than they self-link as a consequence of links between reactive monads in the same gebit. We also see monads not currently belonging to gebits being linked to reactive monads in existing gebits. Furthermore the new links, in the main, join monads located at the periphery of the gebits, i.e. these are the most reactive monads of the gebits. … [T]he new links preserve the 3-dimensional environment of the inner gebits, with the outer reactive monads participating in new links. Clearly once gebits are sufficiently linked by 𝐵⁻¹ they cease to be reactive and slowly die via the iterative map. Hence there is an on-going changing population of reactive gebits that arise from the noise, cross-link, and finally decay. Previous generations of active but now decaying cross-linked gebits are thus embedded in the structure formed by the newly emerging reactive gebits.” (Cahill and Klinger 2000a)

In fact, the relation between strength, node distances, and reactivity is such that it gives rise to an internal, dispositional preference for how to connect. That is, these locally evolving, emergent characteristics affect the islands of connectivity in a way that makes them hook up with ‘kindred’ ones (cf. Fig. 5-8b). Analogous to what happens in reentrant neural networks (see Sections 4.3 and 4.3.1), the iterative noise in the Process Physics model gives rise to a kind of plasticity in which simultaneously active structures link up with each other. Accordingly, it can be derived from the numerical analyses that ‘connectivity structures that are reactive together, hook up together,’ in surprising agreement with the well-known motto from neurodevelopment: ‘[neural] cells that fire together, wire together’ (Lowel and Singer 1992; Edelman and Tononi 2000, 83).

In many complex adaptive systems, such activity-driven mutualism is known to give rise to self-similar fractal network structures in which the same patterns of relationship occur at all levels of organization. In a few words: fractal self-similarity means that the whole and its parts are similarly shaped. As already hinted at in the last part of Section 4.3.3, a fractal network structure is one that achieves the maximum correlation among the constituent network elements. In fact, in a fractal network system, successful branching structures persist


as they keep participating in structure-enriching network cycles, while poorly interassociated and less interactive subnetworks will sooner or later fade away as connectivity with the rest of the system drops below a sustainable level:

“If the inputs to a system cause the same pattern of activity to occur repeatedly, the set of active elements constituting that pattern will become increasingly strongly interassociated. That is, each element will tend to turn on every other element and (with negative weights) to turn off the elements that do not form part of the pattern. To put it another way, the pattern as a whole will become ‘auto-associated’.” (Allport 1985, 44)

And as fractal structure formation facilitates a high level of auto-association, the network’s constituent elements become so intimately connected with the network as a whole that we may even be so bold as to state that they can ‘sense’ the global state of the system through their access to deeply correlated local information (Hesse and Gross 2014, 10). Moreover, in the long run these strongly interassociating systems turn out to develop an optimal ratio between 1) efficiency (i.e. the capability to follow ‘the path of least resistance’) and 2) diversity (i.e. the capability to adapt, or take another, alternative pathway in the face of changing local-global conditions). The near-critical balance between these two features then leads to structure formation as found, for instance, in the optimally complex neural network of Fig. 4-4A-b. Indeed, similar fractal pattern formation can be found in systems as diverse as neural networks (Fig. 5-8a), tree root networks (Fig. 5-8b), river deltas (Fig. 5-8c), blood vascular networks (Fig. 5-8d), ant foraging trails, and many more natural systems, even at the level of galactic superclusters (cf. Fig. 5-10 further below).

In summary, all the pattern formation in these systems thrives on dispositional activity. Whenever branching structures, under the influence of internal, external, local and global contingencies and constraints, get to become each other’s ‘adjacent possible’ (Kauffman 2009, xiii; 2013, 15), they will likely hook up and, depending on the level of mutual sustainment, get involved in a more durable relationship or not. Simply because the probability of hooking up increases when branching structures are 1) simultaneously active, 2) equally strong, 3) equally durable, and 4) equally reactive, it would certainly not be too far off-target to say that these branching structures develop in an anticipatory, or at least a proto-anticipatory, way. As all branching structures are ‘biased’ in the sense that they tend to connect with resembling parts, this can also be thought of as a primitive form of subjectivity.


The network as a whole, then, will exhibit what Robert Ulanowicz has called ‘ascendency’ – the tendency to develop towards ever higher, increasingly intense complexity.

Fig. 5-8: Fractal pattern formation leading to branching networks. a) Four fluorescently stained neurons from a bird’s brain (finch). One small interneuron and three projection neurons in RA, the robust nucleus of the arcopallium; a brain area involved in the control of fine muscle movements required for the production of learned song (photo authored by Mark Miller (2011), postdoctoral fellow at UCSF School of Medicine); b) Excavated root network of Balsam Poplar with the arrow indicating a ‘root graft’ (a shared connection) between two individual trees. Root grafts are exquisite examples of ‘kindred’ dispositional branching structures that hook up with each other, thus contributing to optimal mutualistic connectivity within the network. Location: Quebec, Canada (Adonsou et al. 2016); c) Satellite picture of the fractal-shaped branching structures of the Selenga River delta on the southeast shore of Lake Baikal in Russia (source: U.S. Geological Survey); d) Computer model of fractal blood vessel network in human lungs (edited from Haber et al. 2013)


In fact, in the Process Physics model, the occurrence of dispositionality and the emergence of primitive anticipatoriness, proto-subjectivity and ascendency (cf. Ulanowicz 1997) are achieved by the system-renewing effect of the noise-driven iterative update routine. Accordingly, each round of iterations ‘in-forms’ the entire connectivity network about its own newly acquired internal connectivity. Through the complex interplay of 1) the memory-like precedence term, 2) the Whiteheadian prehension of local-global data by the cross-linking binder term, and 3) the creativity-infusing noise term, the initially undifferentiated connectivity network will give rise to a present moment effect unparalleled by any timeline-based model. Aside from their geometrical timeline, those models have to rely on an external time pointer to get from one moment to the next. The present moment effect in the Process Physics model, however, is inherent to the connectivity network in which it arises. So much so, even, that the network and the present moment cannot be told apart in any satisfactory way. Accordingly, the present moment effect should be understood as forming one inseparable whole with the connectivity network. Like so, network and present moment are best thought of as one integrated process – a dispositional, habit-driven present, or an Anticipatory Remembered Present (cf. Edelman 1989; see also Fig. 5-9).

The most striking thing here is that we’ve already used the same term when we were discussing the coming-into-actuality of an organism’s conscious Now (cf. Section 4.3.1, note 93; see Sections 4.2.3-4.3.2 for more specific details). When taking a step back to contemplate this peculiarity, however, we may notice that the same basic repertoire – namely 1) memory, 2) linkage-establishing reentrant signaling, and 3) neural noise – is at work in the thalamocortical region of the mind-brain. This repertoire, in turn, is kept on the go as the organism’s perception-action loop continues to go through its cycles. So, as long as this repertoire remains intact, it can give rise to the conscious organism’s emergent sense of self and world, through which not only the experience of an immediately apparent reality becomes possible, but also the thinking up of all kinds of scenarios of events that might or might not happen in the future. Moreover, an organism with higher-order consciousness – or, in other words, a well-developed Anticipatory Remembered Present – would also be able to imagine events that could perhaps have happened in the past if circumstances had been different. Like this, the organism’s thoughts and actions would not necessarily be targeted solely on the ‘conscious Now,’ but could also be aimed at imagined possible past or future realities.


Fig. 5-9: Seamlessly integrated observer-world system with multiple levels of self-similar, neuromorphic organization. Simplified illustration depicting a) a within-nature observer who is b) seamlessly embedded within the same natural world he, from early life into adulthood, gets to make sense of through c) the workings of his mind-brain and the perception-action cycles, and other non-equilibrium cycles, that enable him to go through life (these cycles are not depicted here; see Fig. 4-1 as a replacement). And d) this brain-equipped observer, by going through his perception-action cycles, gets to sculpt a conscious view onto the greater embedding world, which, at the supragalactic level, is also organized in a ‘brain-like’ way. Finally, then, e) Process Physics shows that, at the ‘deepest’ level of organization, the process of nature branches out into an all-encompassing, optimally interconnected complex network of ‘neuromorphic’ activity patterns. This is characteristic of a self-organizing, criticality-seeking, complex fractal network process. Next to that, all this suggests that this self-organizing network process gives rise to habit formation, internal meaningfulness through universal mutual informativeness, and all experiential aspects that mainstream physics systematically overlooks.

Although it would of course go too far to state that nature at its deepest level already possesses such a highly evolved memory-based anticipatoriness, all the above forces us to admit that a primordial form of it could well be present from early beginnings onward. After all, analogous to the dispositional behavior of branching structures in the Process Physics model, a conscious organism’s Anticipatory Remembered Present arises through dispositional


Fig. 5-10: Structuration of the universe at the level of supragalactic clusters. These zoom images are generated by the Millennium Simulation. Each individual window shows the simulation-generated structure in a slice of thickness 15 ℎ⁻¹ Mpc (the order of magnitude for each window can be derived from the distance indication in the lower right and left hand corners). The sequence of windows gives consecutive enlargements, with a factor four magnification for each step. The simulation aims to model structure formation in the universe in a way that agrees with the results obtained by the Sloan Digital Sky Survey and the 2-degree Field Galaxy Redshift Survey (2dFGRS). To achieve this, it is assumed that the early universe exhibited only weak density fluctuations and was otherwise homogeneous. Starting from such initial conditions, these fluctuations are then thought to be amplified by gravity. Dark matter and dark energy are invoked to enable the applicable gravitational equations to achieve neuromorphic structuration compliant with the above-mentioned reference surveys. In Process Physics, the fractal, neuromorphic structure formation results directly from the iterative update routine, and no dark matter or dark energy hypothesis needs to be invoked (Source of image: Springel et al. 2005).


memory repertoires within perception-action cycles that thus enable the repetition of psycho-physical acts such as thought, imagination and musculoskeletal control (Edelman and Tononi 2000, 57-61). Given all this, we are now hopefully ready to conclude that proto-subjectivity, time (i.e. nature’s ‘becomingness’ as ‘the going through its iterative cycles’), universal interconnectedness, something akin to Whitehead’s ‘subjective aim’ (cf. ‘dispositional preference’; teleology) and mutual informativeness are intimately related aspects of the in itself indivisible process of nature.


6. Overview and Conclusions

Throughout this paper, we’ve seen that our conventional way of doing physics in a box got us into trouble over and over again. The common source of all these troubles seems to be the general methodology behind doing physics in a box, or, in other words, the Newtonian paradigm. For the sake of clarity, let’s retrace the steps through which the very method that was invented specifically to improve our physical understanding of nature (which, arguably, it did) could ever so paradoxically end up being such an inhibitor of any deeper understanding as well.

To begin with, the Newtonian paradigm holds that the natural world consists of nothing more than entirely physical contents. These contents are thought to behave, to a greater or lesser extent, in a regular manner that can be expressed in the form of lawful physical equations. Moreover, it is one of the core beliefs of the Newtonian paradigm that nature as a whole can eventually be captured by just a handful of these lawful physical equations, so that, eventually, the entire universe can be said to be ‘governed’ by only a small set of physical laws. In a nutshell, this is roughly what the Newtonian paradigm amounts to.

However, the Newtonian paradigm comes with quite a number of tacit assumptions that we typically like to forget about once we’re in the midst of putting it into practice. A prime example among these tacit assumptions is the ‘Galilean cut’, which is the idea that quantifiable aspects of nature (such as location, size, shape and weight) belong to the ‘objective real world out there’, whereas qualitative aspects (such as color, touch and smell) belong to the subjective inner life of the observer. A related, but not entirely synonymous, idea is that a simple dividing line can be drawn between the system-to-be-observed and its observational system, including, in particular, the conscious observer behind the switches and knobs of the measurement equipment. Another major point on the list of tacit assumptions is the idea that the environmental influence on a well-isolated system can be safely neglected. Although all these ideas are crucial elements of the Newtonian paradigm – elements without which our present way of doing physics in a box would not even be possible at all – they cannot be upheld whenever our locally successful ‘laws of nature’ are extrapolated to nature as a whole. Such an extrapolation would lead to the following patchwork of arguments:


1. By trying to apply our local physical equations to the universe at large, we are in fact committing the cosmological fallacy (Smolin 2013, 97; also cf. J. Rosen 2009, 72).

2. Once having done so, we basically act as if we can position ourselves outside of nature, along with our measuring rods, calibrated clocks, and other scientific instruments, just to take on an exophysical ‘view from nowhere’ (cf. Nagel 1986; van Dijk 2016). But it is of course impossible to observe the universe from the outside, and any attempt to stubbornly stick with this exophysical methodology will lead to conclusions that are impossible to check, such as that the universe should appear static and frozen solid when looked at from the outside (cf. Smolin 2013, 80).127

3. Then, the application of the Galilean cut – which is 1) intimately related to the above-mentioned exophysical view and 2) an absolute necessity to make the formulation of physical equations possible at all – automatically leads to the undesirable ‘bifurcation of nature’. That is, it splits up our natural world into lifeless nature and nature alive (Whitehead 1938, 173-232; Desmet 2013, 87), or, in other words, into an inanimate part that is describable in terms of physics and mathematics, and another, animate part that is not. Unfortunately, however, this leaves unexplained all kinds of things that we typically like to associate with life, such as meaning, subjectivity, value, creativity, novelty, and so on. As a result, we are now left with an exophysical-decompositional physics that, in the words of Terrence Deacon (2012, cover text), leaves it absurd that we exist.

4. This same exophysical-decompositional physics, by making it so natural and intuitive to think of nature in terms of empirical data and their data-reproducing algorithms, all too easily persuades us to confuse those empirical data and their physical equations128 with the natural world to which they are referring. Given the utter staticness of those data records, one may be tempted to conclude that the referents to which these records

127

“The requirement that the clock that measures time in quantum mechanics must be outside the system has stark consequences when we attempt to apply quantum theory to the universe as a whole. By definition, nothing can be outside the universe, not even a clock. So how does the quantum state of the universe change with respect to a clock outside the universe? Since there is no such clock, the only answer can be that it doesn’t change with respect to an outside clock. As a result, the quantum state of the universe, when viewed from a mythical standpoint outside the universe, appears frozen in time.” (Smolin 2013, 80)

128

Next to the possible confusion of empirical data and data-reproducing algorithms with their referents, all their further math- and geometry-based abstractions (such as point-observers) can be similarly mixed up with what they are supposed to refer to. In this case, for instance, abstract point-observers are easily confused with their intended referents – live conscious observers who are seamlessly embedded within the greater embedding process of nature as a whole.


are held to pertain129 are themselves equally static, and thus ‘frozen in time’ (Smolin 2013, 33). However, this would amount to mistaking empirical data for what are held to be their referents. Like so, we would actually be committing the fallacy of misplaced concreteness (Whitehead 1929/1978, 7 and 18), which is undesirable since it ultimately leads to all kinds of confusing results, such as the denial of time in Minkowski’s block universe interpretation of Einstein’s Special Theory of Relativity.

For the purpose of arriving at a proper, to-the-point conclusion, we do not need to go too deep into all the technical details of relativistic physics and its timeless block universe interpretation. Instead, it suffices to zoom in on where the main assumptions of the block universe interpretation go wrong. To recap, these main assumptions are:

1) nature is an objectively existing, mind-independent real world out there;
2) natural events reside in a geometrical continuum;
3) relativity of simultaneity means that there is no passage of time and that any experience of time passing by is thus illusory.

Remarkably, though, all three assumptions are in fact symptoms of what may be called the physicist’s fallacy. This fallacy, which may count Galileo, Newton, Einstein and many others among its victims, leads one to suppose that what is being identified as an object in one’s experience must naturally have its origin in an external, mind-independent world of entirely physical objects. This basically amounts to the idea that our experience occurs somewhere in an exophysical center of subjectivity and that it only has to import and interpret the information gathered from a pre-coded ‘real world out there’. But nature is by itself unlabeled and unframed by our filter of observation. As such, it does not contain any categories, concepts or pre-coded information from which our mind-brains can construct nature-representing mental content.

Instead, as is shown in Sections 4.2.3-4.3.1, organisms get to sculpt their ‘conscious sense of self and world’ through perceptual categorization – the ability of conscious organisms to partition nature into value-laden categories, although nature by itself does not contain any such categories at all (Edelman and Tononi 2000, 104). That is, sense of self and

129

These referents could be anything that, according to the physicalist paradigm of mainstream physics, can be thought to exist in the real world out there, for instance, ‘states’, ‘events’, ‘objects’, the ‘snapshot takes’ of what is thought to be an object in motion, and so on.


world emerge as two aspects of one and the same stream of experience as conscious organisms live through their multimodal perception-action cycles (also nameable as ‘sensation-valuation-motor activation-world manipulation’ cycles, or, for short, ‘Gestalt cycles’) as well as the therewith associated nutrient-waste cycles, O2-CO2 cycles, and the like. Like so, an organism’s experiential world is carved out as a somatically meaningful ‘self-centered world of significance’ through a process of sense-making that takes place within the integrated whole of the seamlessly interconnected organism-world system – not within some exophysical center of subjectivity.

This not only debunks the first main assumption of the block universe interpretation – namely, that nature is an objectively existing, mind-independent ‘real world out there’ – but also the second and the third. To drive home the point that the block universe interpretation is mistaken, though, it will for now be enough to focus exclusively on the flaw in the first main assumption and keep the flaws in the other assumptions on standby (see Sections 2.5.2 and 2.5.3 for these back-up details). As for the first assumption, it should be quite obvious that the long-cherished ideal of a mind-independent ‘real world out there’ is flatly contradicted by the above-mentioned finding that observing organism and observed world are ultimately one. In fact, this finding is utterly incompatible with our entire current enterprise of doing ‘physics in a box’ (or ‘exophysical-decompositional physics’, as it is indeed sometimes referred to as well).130 Due to this incompatibility, and some other reasons as well, it seems we need to 1) temporarily put aside our exophysical-decompositional way of doing physics in a box and save it exclusively for practical purposes,131 and 2) look out for a nonexophysical-nondecompositional way of doing physics without a box, to thus be able to get a modelling method in which mutual informativeness is an integral part of the system, so that we no longer need to bump into the problem of pre-coded information being imported from the outside132

130

Our current way of doing physics in a box can be characterized as exophysical-decompositional, or, to be more elaborate, as taking an external perspective onto a world that is held to be decomposable into entirely physical constituents (van Dijk, forthcoming). A core characteristic of exophysical-decompositional physics is that it implicitly suggests that its mathematical labels are synonymous with nature itself. This, then, typically leads to the fallacy of misplaced concreteness (Whitehead 1929/1978, 7 and 18) and therewith associated unrealistic conclusions such as nature being geometrical and timeless. 131 I.e., practical purposes like the design and manufacture of computer chips; the sending out of space-craft on missions into space; the deployment of a properly working GPS-system, and so on. 132 Please note that the intake of pre-coded information by an exophysical observer implies the presupposition of what one is trying to explain. That is, it means that the alphabet of expression with the help of which this observer is trying to describe nature, is already given beforehand (cf. Kauffman 2013, 11). This is like trying to describe the spectrum of sunlight in terms of primary colors only. Obviously, one will then be left incapable to include ultraviolet, infrared, etc., into the picture. In other words, pre-coded or pre-stated information necessarily

179

(cf. Kauffman 2013, 9-22) as would be required for an observer whose exophysical center of subjectivity is processing the sense data originating from the allegedly mind-independent ‘real world out there’. This problem of pre-coded information can be avoided when information is in fact an initially unlabeled process of mutual informativeness through which the model system is dynamically being given shape from within, i.e., a process through which all activity patterns can make a difference to all other activity patterns within the system, and vice versa. Without such mutual informativeness, the above-mentioned process of perceptual categorization would not even be possible at all. It is through the mutual informativeness among and within neuronal groups in the thalamocortical region of the mind-brain that conscious organisms get to carve out their ‘Umwelt’ (i.e. their self-centered world of significance) from a less salient background of noisy, lower-order activity patterns. In a remarkably similar way, the mutually informative process of ‘autocatalysis’ is thought to have facilitated the advent of life by enabling the emergence of initially primitive biotic networks from a nondescript background of low-grade, slow-going chemical reaction cycles (cf. Kauffman 1995, 47-69). In both cases, a higher-order world of habit-establishing foreground patterns is ‘bootstrapped into actuality’133 through the mutually informative cyclic activity within the system itself. According to Reg Cahill’s Process Physics, this mutual informativeness is not only an essential characteristic of biological systems, but also of nature as a whole. In the Process Physics model, it is the mutual informativeness among inner-system activity patterns that gives rise to a complex world of criticality-seeking, habit-establishing foreground patterns. Similarly to what happens in the emergence of life and consciousness through autocatalysis and perceptual categorization, these foreground patterns get to ‘bootstrap’ themselves into actuality from an initially undifferentiated background process of noise-driven, mutually informative activity patterns. Because of this mutual informativeness, which enables it to avoid all the problems associated with pre-coded information (see above), Process Physics should be considered a prime candidate for a nonexophysical-nondecompositional way of doing physics.

leads to incomplete representations since our tools of observation and alphabets of expression can only denote so much – they always have upper and lower limits beyond which they cannot go. 133 This ‘bootstrapping’ refers to the Baron von Münchhausen who allegedly used his own bootstraps to pull himself out of the deadly swamp. A bootstrap, then, is the handgrip at the backside of a boot that can be used to pull it on.


Process Physics, by virtue of its ‘co-informativeness-based’134 way of doing physics without a box, introduces a non-mechanistic, non-deterministic modeling of nature based on a self-organizing stochastic iteration routine. As such, Process Physics can be said to work according to a Peircean principle of precedence (Peirce 1992, 277): it has no need for lawful physical equations and can thus avoid the many problems and fallacies associated with our conventional way of doing physics in a box. By means of its habit-establishing, stochastic recursiveness, the Process Physics model gives rise to constantly renewing activity patterns. In contrast to mainstream physics, in which any sense of processuality has been so worryingly absent, the thus achieved ‘becomingness’ can be associated with what we, in everyday life, experience as time. Instead of ending up with an utterly timeless and non-processual world such as the block universe that mainstream physics claims we live in, the Process Physics model, by going through its habit-establishing iterations, gradually gives rise to an entirely processual network of self-organizing activity patterns. In so doing, the model slowly but surely starts to exhibit more and more features that are also so characteristic of our own natural universe: non-locality; emergent three-dimensionality; inertia; emergent relativistic and gravitational effects; emergent semi-deterministic classical behavior; creative novelty; inherent time-like processuality with open-ended evolution; and more. Finally, the perhaps most directly appealing aspect of Process Physics may well be its full compliance with our best theories on life and consciousness.

134 The term ‘co-informativeness’ is here used as a synonym for mutual informativeness.


Appendix A: Addendum to §2.5.2 ‘Events in nature can be pinpointed geometrically (or not?)’

Mathematically formulated physical equations do not represent nature-in-itself. In physics, the actual target systems are samples of raw empirical data that acquire their eventual processed form only through the intimate interplay between what we in earlier times liked to label as the subjective and objective aspects of nature.135 Accordingly, physical equations are to be thought of as intersubjective phenomenologies pertaining to how the results of measurement interaction are presented in terms of theory-laden data; as such, they do not pertain to nature itself. In other words, physical equations do not directly represent nature, and there is no objective, one-to-one representational relation between any within-nature events and physical equations. At the end of the day, physical equations are instrument- and sensation-based phenomenologies of nature, rather than fully corresponding representations; they are approximations of regularities found in observational data whose coarse-grainedness depends on which measuring instruments, which measuring methods, and which background theories are being employed (see Section 3.2.2 for more details). For this reason, we should definitely reexamine the presupposition of relativity theory that events and observers in nature can be pinpointed geometrically. Instead of treating mathematics as the language of nature – as Galileo did when he introduced the geometrical timeline, thereby basically giving rise to modern physics – it makes much more sense to consider mathematics (and thus geometry) a later-arriving human artifact. Indeed, as Lee Smolin suggested in Time Reborn (2013, 33 and 245), we should think of mathematics as a tool by means of which we can analyze, predict and postdict the data extracted from observationally-intellectually singled-out natural systems.

To understand how this could be, we should again focus on how we sculpt our conscious view of the natural world we live in (cf. Sections 4.2.3 to 4.3.1). For this, we should realize that we, as seamlessly embedded conscious organisms, learn to make sense of nature by associating ‘world states’ with value-laden ‘body states’. In other words, by living through their own body states, conscious organisms gradually learn to value nature in terms of how it dynamically affects their internal milieu.136 It is along these lines that the organism develops value-laden sensorimotor and somatosensory action repertoires through which both musculoskeletal and cognitive acts can be repeated while matching, repertoire-specific body states are called to the fore. The thus evolved psychophysical action repertoires are ‘dispositional’ in the sense that they constantly reroute their firing patterns under the influence of novel stimuli.137 Accordingly, the organism can develop adaptive behavior even within rapidly changing living environments (all this has been explained in greater detail in Sections 4.2.2 to 4.2.4). What in early life is experienced as a blooming, buzzing confusion (cf. James 1911, 50) is thus given bodily meaning and gradually becomes categorized into an inner- and outer-organism world.138 Like this, the thus developed experiential world does not represent the so-called ‘real world out there’, but arises within a joint effort of world and organism139 as ongoing perception-action loops bring somatically meaningful, non-representational percepts into actuality. It is only in this non-representational way that the richness of our percepts, Gestalts, conscious categorizations, and higher-order concepts has been able to emerge. What’s more, a case can be made that all mathematical concepts have actually originated like this (reference needed). This may indeed be quite hard to swallow for some – especially for those who, in the spirit of Galileo, like to think of mathematics as the pre-given language of nature. But despite the perhaps sobering effect of this non-representational approach, it also has a lot going for it. First of all, it offers an evolutionary account of mathematical thinking. Secondly, it opens up avenues for philosophers and scientists alike to think of nature as being routine-based (i.e., habit-forming) instead of law-governed (i.e., obeying math-based physical equations). In this way, thirdly, the question “Why these laws?” can be dropped and replaced by the question of how habit-forming activity patterns can arise, persist, and evolve in nature.

135 Because there is always an element of subjectivity when it comes to scientific observation, it seems more appropriate to use the concept of intersubjectivity, instead of the absolute notions of objectivity and subjectivity. In physical measurement, the format of the experimentally acquired empirical data will always be affected by the subjective choice of a) which aspect of nature should be put under scrutiny, and b) which data-refining encoding to apply. At most we can attain some high degree of intersubjective agreement – i.e., getting the same results when probing nature in a certain way – but purely objective outcomes are out of the question (J. Rosen 2010, 420).

136 The organism’s internal milieu involves, among other things, the state of its sensory apparatus and life organs, its homeostasis (as well as the feelings and emotions derived therefrom), the kinesthetics and position of limbs and joints, muscle tension, and so forth.

137 During use, neural connections and muscle tissue are constantly engaged in a process of strengthening and weakening through brain plasticity, neuromuscular memory path formation, etc. (cf. Edelman and Tononi 2000, 46 and 79-95).

138 Cf. von Uexküll’s ‘Umwelt’ or ‘self-centered world of significance’ (cf. von Uexküll and Uexküll 1940/2010; Koutroufinis 2016).

139 Please note that world and organism should not be seen as truly separate. The concept of “world” includes all within-nature organisms, while, in turn, each organism is fully embedded within the natural world in which it lives.


Analogous to non-representational conscious experience, then, mathematics should ultimately be seen not as representing nature, but as a tool that works with great precision within certain well-defined contexts of use. On this account, geometry, too, is finally no more than an idealizing tool of great pragmatic use. Its numerical specifications of lengths, surfaces and volumes should be taken as figures of speech rather than as realistic representations of concrete reality – let alone as concrete realities in themselves.


References:

Adonsou, Kokouvi Emmanuel, Igor Drobyshev, Annie DesRochers, and Francine Tremblay. “Root connections affect radial growth of balsam poplar trees.” Trees: Structure and Function. Berlin - Heidelberg: Springer, pp. 1-9, May 2016. doi: 10.1007/s00468-016-1409-2

Allport, D.A. “Distributed memory, modular systems and dysphasia.” In: S.K. Newman and R. Epstein (Eds.). Current Perspectives in Dysphasia. Edinburgh: Churchill Livingstone, 1985.

Anderson, Michael L. After Phrenology: Neural Reuse and the Interactive Brain. Cambridge (MA): The MIT Press, 2014.

Aristotle. Physics (translated by P. H. Wicksteed and F. M. Cornford). Cambridge (MA): Harvard University Press, 1957.

Arnett, D. Supernovae and Nucleosynthesis. Princeton (NJ): Princeton University Press, 1997.

Bak, Per, Chao Tang, and Kurt Wiesenfeld. “Self-organized criticality: An explanation of the 1/f noise.” Phys. Rev. Lett. 59, 381, 27 July 1987.

––––––––––. How Nature Works: The Science of Self-Organized Criticality. New York: Copernicus Press, 1996.

Barandiaran, Xabier, and Kepa Ruiz-Mirazo. “Modelling autonomy: Simulating the essence of life and cognition.” (Introduction to Special Issue) BioSystems 91 (2008): 295-304.

Barbour, Julian. The End of Time: The Next Revolution in Our Understanding of the Universe. London: Weidenfeld & Nicolson, 1999.

Bateson, G. Steps to an Ecology of Mind. Chicago: University of Chicago Press, 2000 (first published in 1972).

Bell, John Stewart. “Against ‘measurement.’” Physics World, August 1990.

––––––––––. Speakable and Unspeakable in Quantum Mechanics. Cambridge (UK): Cambridge University Press, 1988.

Beller, Mara. “Inevitability, Inseparability and Gedanken Measurement.” In: Abhay Ashtekar, Robert S. Cohen, Don Howard, Jürgen Renn, Sahotra Sarkar and Abner Shimony (Eds.). Revisiting the Foundations of Relativistic Physics: Festschrift in Honor of John Stachel. Dordrecht: Springer, 2003 (438-450).

Berry, Michael V. “Regular and Irregular Motion.” In: Topics in Nonlinear Dynamics: A Tribute to Sir Edward Bullard. Proceedings of the workshop conference at La Jolla (CA), December 27-29, 1977. AIP Conference Proceedings, Vol. 46. New York: American Institute of Physics, 1978, pp. 16-120.

Białobrzeski, C., et al. New Theories in Physics. International Institute of Intellectual Co-Operation, 1939. As quoted from a secondary source: Stacey, Blake C. “Von Neumann Was Not a Quantum Bayesian.” Philos Trans A: Math Phys Eng Sci, Vol. 374, Issue 2068, April 18, 2016. doi: 10.1098/rsta.2015.0235

Block, Ned. “The mind as the software of the brain.” In: Daniel N. Osherson, Lila Gleitman, Stephen M. Kosslyn, S. Smith and Saadya Sternberg (Eds.). An Invitation to Cognitive Science. MIT Press, 1995 (170-185).

Bohm, David. Wholeness and the Implicate Order. London - New York: Routledge, 2002 (first published in 1980).

––––––––––, and B. J. Hiley. The Undivided Universe: An Ontological Interpretation of Quantum Theory. London - New York: Routledge, 1993.

Bohr, Niels. “Wirkungsquantum und Naturbeschreibung.” Die Naturwissenschaften 17 (1929): 483-486.

––––––––––. “Quantum physics and philosophy: Causality and complementarity.” In: Essays 1958/1962 on Atomic Physics and Human Knowledge. New York: Interscience Publishers, 1963. First published in: R. Klibansky (Ed.). Philosophy at Mid-Century: A Survey. Florence: La Nuova Italia Editrice, 1958.

Bourke, Paul. Online gallery of noise frequency spectra. March 1997. http://paulbourke.net/fractals/noise/ (retrieved and edited on July 10, 2016).

Bros, Jacques. “The Geometry of Relativistic Spacetime: from Euclid’s Geometry to Minkowski’s Spacetime.” Séminaire Poincaré, 2005.

Bruce, Vicki, Patrick R. Green, and Mark A. Georgeson. Visual Perception: Physiology, Psychology, & Ecology. Hove - New York: Psychology Press, 2003.

Byrne, Oliver. The First Six Books of The Elements of Euclid. London: William Pickering, 1847.

Cahill, Reginald T., and Christopher M. Klinger. “Pregeometric modelling of the spacetime phenomenology.” Phys. Lett. A 223(5) (1996): 313-319.

––––––––––, and Susan M. Gunner. “The global colour model of QCD for hadronic processes: A review.” Fizika B 7 (1998): 171-202.

––––––––––, Christopher M. Klinger, and Kirsty Kitto. “Process Physics: Modelling Reality as Self-Organising Information.” The Physicist 37(6) (2000): 191-195. arXiv:gr-qc/0009023

––––––––––, and Christopher M. Klinger. “Self-Referential Noise and the Synthesis of Three-Dimensional Space.” Gen. Rel. and Grav. 32(3) (2000a): 529-540. arXiv:gr-qc/9812083v2

––––––––––, and Christopher M. Klinger. “Self-Referential Noise as a Fundamental Aspect of Reality.” In: D. Abbott and L. Kish (Eds.). Proc. 2nd Int. Conf. on Unsolved Problems of Noise and Fluctuations (UPoN’99), Adelaide, Australia, 11-15 July 1999, Vol. 511, p. 43. New York: American Institute of Physics, 2000b. arXiv:gr-qc/9905082

––––––––––. “Process Physics: From Information Theory to Quantum Space and Matter.” Process Studies Supplement, Issue 5, 2003.

––––––––––. Process Physics: From Information Theory to Quantum Space and Matter. New York: Nova Science Publishers, 2005a.

––––––––––. “Process Physics: Self-Referential Information and Experiential Reality.” Conference on “Quantum Physics, Process Philosophy, and Matters of Religious Concern,” Sept. 28 - Oct. 2, 2005. Claremont (CA): Center for Process Studies, 2005b. (www.ctr4process.org/publications/Articles/LSI05/Cahill-FinalPaper.pdf, pages 1-24).

––––––––––, and Christopher M. Klinger. “Bootstrap Universe from Self-Referential Noise.” Progress in Physics 2 (2005): 108-112. arXiv:gr-qc/9708013v1

––––––––––. “Black Holes and Quantum Theory: The Fine Structure Constant Connection.” Progress in Physics 4 (2006): 44-50. arXiv.org/abs/physics/0608206

––––––––––. “Resolving Spacecraft Earth-Flyby Anomalies with Measured Light Speed Anisotropy.” Progress in Physics 3 (2008): 9-15. arXiv.org/abs/0804.0039

Carnap, Rudolf. “Intellectual Autobiography.” In: P.A. Schilpp (Ed.). The Philosophy of Rudolf Carnap. LaSalle (IL): Open Court, 1963 (1-84).

Cartwright, Nancy. How the Laws of Physics Lie. Oxford - New York: Clarendon Press - Oxford University Press, 1983.

Chaisson, Eric. Cosmic Evolution: The Rise of Complexity in Nature. Cambridge (MA): Harvard University Press, 2001.

Chaitin, Gregory J. Algorithmic Information Theory. Cambridge: Cambridge University Press, 1987.

––––––––––. Thinking about Gödel and Turing: Essays on Complexity, 1970-2007. Singapore: World Scientific, 2007.

Chemero, Anthony. Radical Embodied Cognitive Science. Cambridge (MA): MIT Press, 2009.

Chew, Geoffrey. “‘Bootstrap’: A Scientific Idea?” Science, Vol. 161, Issue 3843 (23 Aug 1968): 762-765. doi: 10.1126/science.161.3843.762

Chown, Marcus. “Random Reality.” New Scientist, February 26, 2000.

Clarkson, Petrūska, and Jennifer Mackewn. Fritz Perls. London: Sage Publications, 1993.

Cobb, John B., Jr. “Bohm and Time.” In: David R. Griffin (Ed.). Physics and the Ultimate Significance of Time. Albany: SUNY Press, 1986.

Corbeil, Marc J.V. “Process Relational Metaphysics as a Necessary Foundation for Environmental Philosophy.” 6th International Whitehead Conference, Salzburg, Austria, 3-6 July 2006.

Cushing, J.T. Theory Construction and Selection in Modern Physics: The S-Matrix. Cambridge: Cambridge University Press, 1990.

Damasio, Antonio. The Feeling of What Happens: Body, Emotion and the Making of Consciousness. London: William Heinemann, 1999.

Davies, Paul C.W. About Time: Einstein’s Unfinished Symphony. London: Penguin Books, 1995.

––––––––––. “Whitrow Lecture 2004: The Arrow of Time - Why does time apparently fly one way, when the laws of physics are actually time-symmetrical?” Astronomy & Geophysics, Vol. 46, Issue 1 (2005): 1.26-1.29.

––––––––––. “That Mysterious Flow.” Scientific American, theme issue “A Matter of Time,” Volume 16, Number 1 (2006): 6-11.

Dennett, Daniel C. Consciousness Explained. New York: Little, Brown & Co., 1991.

Desmet, Ronny. “On the difference between physics and philosophical cosmology.” In: Michel Weber and Ronny Desmet (Eds.). Chromatikon: Yearbook of Philosophy in Process, Volume 9, pp. 87-92. Louvain-la-Neuve: Les éditions Chromatika, 2013.

d’Espagnat, Bernard. In Search of Reality: The Outlook of a Physicist. New York: Springer Verlag, 1983.

Dewitt, Bryce. “Quantum field theory and space-time - formalism and reality.” In: Cao, Tian Y. (Ed.). Conceptual Foundations of Quantum Field Theory. New York: Cambridge University Press, 1999 (176-186).

Dijksterhuis, Eduard J. The Mechanization of the World Picture. Oxford: Clarendon Press, 1961.

DiPaolo, Ezequiel. “Extended Life.” Topoi 28 (2009): 9-21.

Drake, Stillman. “The Role of Music in Galileo’s Experiments.” Scientific American, June 1975.

Eastman, Timothy E., and Hank Keeton (Eds.). “Resource Guide for Physics and Whitehead.” Process Studies Supplements, 2003. http://www.ctr4process.org/publications/pss/

Edelman, Gerald M. “Building a Picture of the Brain.” In: Gerald M. Edelman and Jean-Pierre Changeux (Eds.). The Brain (augmented version of an issue of Daedalus, spring 1998). New Brunswick (US) - London (UK): Transaction Books, 2001 (pp. 37-69).

––––––––––, and Giulio Tononi. Consciousness: How Matter Becomes Imagination. London: Allen Lane - The Penguin Press, 2000.

Einstein, Albert. Relativity: The Special and the General Theory. London: Methuen & Co. Ltd., 1920/1954.

Elsasser, Walter M. “Acausal phenomena in physics and biology: A case for reconstruction.” American Scientist 57 (1969): 502-516.

––––––––––. “A form of logic suited for biology?” In: Robert Rosen (Ed.). Progress in Theoretical Biology 6: 23-62. New York: Academic Press, 1981.

Epperson, Michael. Quantum Mechanics and the Philosophy of Alfred North Whitehead. New York: Fordham University Press, 2004.

Fechner, Gustav T. Elemente der Psychophysik. Leipzig: Breitkopf & Härtel, 1860.

Feynman, R. The Character of Physical Law. Cambridge (MA): MIT Press, 1967.

Fodor, Jerry. The Modularity of Mind: An Essay on Faculty Psychology. Cambridge (MA): MIT Press, 1983.

Forsee, A. Albert Einstein: Theoretical Physicist. New York: Macmillan, 1963.

Frank, Adam. About Time: Cosmology and Culture at the Twilight of the Big Bang. New York: Free Press, 2011.

Galilei, Galileo. The Assayer (in Italian: Il Saggiatore). Fiorentina: Accademia dei Lincei, 1623. Translation by Stillman Drake in: Discoveries and Opinions of Galileo. New York: Doubleday and Co., 1957.

––––––––––. Dialogues Concerning Two New Sciences (originally published in 1638; translated by Henry Crew and Alfonso de Salvio). New York: Cosimo Classics, 2010.

Gibson, James J. The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin Company, 1966.

––––––––––. The Ecological Approach to Visual Perception. Boston: Houghton Mifflin, 1979.

Giere, Ronald N. Science Without Laws. Chicago: University of Chicago Press, 1999.

Goff, Philip. “Why science can’t explain consciousness.” 2013. (Preview paper of the forthcoming book Consciousness and Fundamental Reality. Oxford (UK): Oxford University Press.)

Gould, Stephen J. The Richness of Life: The Essential Stephen Jay Gould. New York: Norton & Co., 2007.

Greene, Brian. The Fabric of the Cosmos: Space, Time, and the Texture of Reality. New York: Alfred A. Knopf, 2004.

Gribbin, John. Schrödinger’s Kittens and the Search for Reality: Solving the Quantum Mysteries. New York: Little, Brown & Co., 1995.

Griffin, David R. “Introduction: Time and the Fallacy of Misplaced Concreteness.” In: David R. Griffin (Ed.). Physics and the Ultimate Significance of Time. Albany: SUNY Press, 1986.

––––––––––. “The Whiteheadian Century!” (Banquet address at the 10th International Whitehead Conference, Claremont (CA), Sunday, June 7, 2015). http://www.pandopopulus.com/griffin-whitehead-century-revisited/

Haack, Susan. Philosophy of Logics. Cambridge: Cambridge University Press, 1978.

Haber, Shimon, Alys Clark, and Merryn Tawhai. “Blood Flow in Capillaries of the Human Lung.” J Biomech Eng 135(10), 101006 (Sep 20, 2013). Paper No: BIO-12-1427. doi: 10.1115/1.4025092

Harris, M.G. “Optic and retinal flow.” In: A.T. Smith and R.J. Snowden (Eds.). Visual Detection of Motion. London: Academic Press, 1994 (307-332).

Herbert, Nick. Quantum Reality: Beyond the New Physics. New York: Anchor Books, 1985.

Hesse, Janina, and Thilo Gross. “Self-organized criticality as a fundamental property of neural systems.” Frontiers in Systems Neuroscience 8:166, 23 September 2014. doi: 10.3389/fnsys.2014.00166

Heisenberg, Werner. Physics and Philosophy: The Revolution in Modern Science. London: George Allen and Unwin, 1958.

Hunt, Tam. Eco, Ego, Eros: Essays in Philosophy, Spirituality and Science. Santa Barbara: Aramis Press, 2014.

James, William. The Principles of Psychology, Vol. 1. New York: Cosimo Classics, 2007 (originally published in 1890).

––––––––––. “Percept and Concept: The Import of Concepts.” In: Some Problems of Philosophy: A Beginning of an Introduction to Philosophy. New York: Longmans Green, 1911.

Jan, James E., Russel J. Reiter, Michael B. Wasdell, and Martin Bax. “The role of the thalamus in sleep, pineal melatonin production, and circadian rhythm sleep disorders.” J. Pineal Res. 46 (2009): 1-7. doi: 10.1111/j.1600-079X.2008.00628.x

Jantsch, Erich. The Self-Organizing Universe: Scientific and Human Implications of the Emerging Paradigm of Evolution. Frankfurt: Pergamon Press, 1980.

Jensen, Henrik J. Self-Organized Criticality: Emergent Complex Behavior in Physical and Biological Systems. Cambridge Lecture Notes in Physics 10, P. Goddard and J. Yeomans (general Eds.). Cambridge (UK): Cambridge University Press, 1998.

Jones, Roger. “Realism about what?” Philosophy of Science 58 (1991): 185-202.

Kauffman, Stuart A. At Home in the Universe: The Search for the Laws of Self-Organization and Complexity. Oxford: Oxford University Press, 1995.

––––––––––. “Foreword: The Open Universe.” In: Ulanowicz, Robert E. The Third Window: Natural Life beyond Newton and Darwin. West Conshohocken (PA): Templeton Foundation Press, 2009.

––––––––––. “Foreword: Evolution beyond Newton, Darwin, and Entailing Law.” In: Brian G. Henning and Adam C. Scarfe (Eds.). Beyond Mechanism: Putting Life Back into Biology. Lanham (MD): Lexington Books, 2013 (pp. 1-24).

Klinger, Christopher M. “On the foundations of Process Physics.” In: Timothy E. Eastman, Michael Epperson, and David Ray Griffin (Eds.). Physics and Speculative Philosophy: Potentiality in Modern Physics. Berlin - Boston: Walter de Gruyter GmbH - Ontos Verlag, 2016 (143-176).

Koestler, Arthur. The Ghost in the Machine. London: Hutchinson, 1967.

Kolmogorov, Andrey N. Selected Works of A.N. Kolmogorov, Volume III: Information Theory and the Theory of Algorithms. Dordrecht: Kluwer, 1993. (Translation by A. B. Sossinsky of: Kolmogorov, Andrey N. Избранные труды: Теория информации и теория алгоритмов. Moscow: Nauka, 1987.)

Koutroufinis, Spyridon. “Uexküll, Whitehead, Peirce: Rethinking the Concept of ‘Umwelt/environment’ from a Process Philosophical Perspective.” In: Maria Pąchalska and Michel Weber (Eds.). Festschrift for Neurophysiologist Jason Brown. Berlin - Boston: De Gruyter, 2016. (forthcoming)

Kuhn, Thomas S. “Objectivity, Value Judgment, and Theory Choice.” In: The Essential Tension: Selected Studies in Scientific Tradition and Change, 320-339. Chicago: University of Chicago Press, 1977.

––––––––––. The Structure of Scientific Revolutions (fourth edition). Chicago: University of Chicago Press, 2012 (originally published in 1962).

Laplace, Pierre Simon. A Philosophical Essay on Probabilities. Translated by F.W. Truscott and F.L. Emory from: Essai philosophique sur les probabilités (1814). New York: Dover Publications, 1951.

Löwel, S., and W. Singer. “Selection of Intrinsic Horizontal Connections in the Visual Cortex by Correlated Neuronal Activity.” Science 255 (January 10, 1992): 209-212.

Lloyd, Seth. “The computational universe.” In: Paul Davies and N. Gregersen (Eds.). Information and the Nature of Reality: From Physics to Metaphysics. Cambridge: Cambridge University Press, 2010 (pp. 92-103).

Massimini, Marcello, Fabio Ferrarelli, Steve K. Esser, Brady A. Riedner, Reto Huber, Michael Murphy, Michael J. Peterson, and Giulio Tononi. “Triggering sleep slow waves by transcranial magnetic stimulation.” Proceedings of the National Academy of Sciences of the United States of America, Vol. 104, no. 20 (May 15, 2007): 8496-8501.

McDougal, D.W. Newton’s Gravity: An Introductory Guide to the Mechanics of the Universe. New York: Springer, 2012.

McTaggart, J.E. Studies in the Hegelian Dialectic (2nd edition). New York: Russell and Russell, 1964.

Maturana, Humberto R., and Francisco J. Varela. “Autopoiesis and cognition: The realization of the living” (1973). In: Robert S. Cohen and Marx W. Wartofsky (Eds.). Boston Studies in the Philosophy of Science 42. Dordrecht: D. Reidel Publishing Co., 1980.

Mayr, Ernst. “Some Thoughts on the History of the Evolutionary Synthesis.” In: Ernst Mayr and William B. Provine (Eds.). The Evolutionary Synthesis: Perspectives on the Unification of Biology. Cambridge (MA) - London (UK): Harvard University Press, 1980 (1-48).

Minkowski, Hermann. “Space and Time.” In: H. A. Lorentz, A. Einstein, H. Minkowski, and H. Weyl. The Principle of Relativity: A Collection of Original Memoirs on the Special and General Theory of Relativity. New York: Dover Publications, 1952 (pp. 75-91). Talk at the 80th Assembly of German Natural Scientists and Physicians, September 21, 1908.

Nagel, Thomas. The View from Nowhere. New York: Oxford University Press, 1986.

Nauenberg, Michael. “Does Quantum Mechanics Require A Conscious Observer?” Journal of Cosmology, Vol. 14, 2011.

Nicolis, G., and I. Prigogine. Self-Organization in Non-Equilibrium Systems: From Dissipative Structures to Order through Fluctuations. New York: J. Wiley and Sons, 1977.

Pąchalska, Maria, Małgorzata Lipowska, and Beata Łukaszewska. “Towards a process neuropsychology: Microgenetic theory and brain science.” Acta Neuropsychologica, Vol. 5, No. 4 (2007): 228-245.

Papatheodorou, C., and B. J. Hiley. “Process, Temporality and Space-time.” Process Studies, Vol. 26 (1997): 247-278. doi: 10.5840/process1997263/420

Pattee, Howard H. “The Physics of Symbols: Bridging the Epistemic Cut.” BioSystems, Vol. 60 (2001): 5-21.

Parisi, G., and Y. Wu. “Perturbation theory without gauge fixing.” Scientia Sinica 24 (1981): 483-496.

Pearson, John E. “Complex Patterns in a Simple System.” Science 261, 189, 9 July 1993.

Peirce, Charles Sanders. “A Guess at the Riddle.” In: Nathan Houser and Christian Kloesel (Eds.). The Essential Peirce: Selected Philosophical Writings. Bloomington (IN): Indiana University Press, 1992.

Perls, Fritz S., and Laura Perls. Ego, Hunger and Aggression. New York: Random House, 1969 (first published in 1947).

Pessoa, Luiz. The Cognitive-Emotional Brain: From Interactions to Integration. Cambridge (MA): The MIT Press, 2013.

Phillips, Theresa. “The role of methylation in gene expression.” Nature Education 1(1):116, 2008.

Planck, Max. Where Science is Going. New York: W.W. Norton and Company, 1932.

Popkin, Richard H. Philosophy of the Sixteenth and Seventeenth Centuries. New York: Free Press, 1966.

Prigogine, Ilya. The End of Certainty: Time, Chaos, and the New Laws of Nature. New York: The Free Press, 1996.

Quine, Willard V.O. “Speaking of Objects.” (Reprinted in: Ontological Relativity and Other Essays.) New York: Columbia University Press, 1969.

Robb, Alfred A. Geometry of Time and Space. Cambridge: Cambridge University Press, 2014. (Second, renamed edition of the original 1914 publication Theory of Time and Space.)

Rothstein, Jerome. “Information, Measurement, and Quantum Mechanics.” Science 114 (1951): 171-175. (Republished in: Harvey S. Leff and Andrew F. Rex (Eds.). Maxwell’s Demon: Entropy, Information, Computing. Princeton (NJ): Princeton University Press, 1990, pp. 104-108.)

––––––––––. “Thermodynamics and Some Undecidable Physical Questions.” Philosophy of Science, Vol. 31, No. 1 (Jan. 1964): 40-48.

Russell, Bertrand. An Inquiry into Meaning and Truth. London: George Allen & Unwin, 1950.

Schrödinger, Erwin. What is Life? The Physical Aspect of the Living Cell (Canto edition, with Mind and Matter and Autobiographical Sketches). Cambridge (UK): Cambridge University Press, 1992 (first published in 1944).

Shannon, Claude E., and Warren Weaver. The Mathematical Theory of Communication. Urbana (IL): University of Illinois Press, 1949.

Smolin, Lee. The Life of the Cosmos. New York: Oxford University Press, 1997.

––––––––––. The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. Boston: Houghton Mifflin, 2006.

––––––––––. Time Reborn: From the Crisis of Physics to the Future of the Universe. London: Allen Lane, 2013.

Solomonoff, Ray. “A Preliminary Report on a General Theory of Inductive Inference.” Report V-131. Cambridge: Zator Co., Feb 4, 1960 (revision, Nov. 1960).

Springel, Volker, Simon D. M. White, Adrian Jenkins, Carlos S. Frenk, Naoki Yoshida, Liang Gao, Julio Navarro, Robert Thacker, Darren Croton, John Helly, John A. Peacock, Shaun Cole, Peter Thomas, Hugh Couchman, August Evrard, Jörg Colberg, and Frazer Pearce. “Simulations of the formation, evolution and clustering of galaxies and quasars.” Nature, Vol. 435 (2 June 2005): 629-636. doi: 10.1038/nature03597

Stapp, Henry P. Mindful Universe: Quantum Mechanics and the Participating Observer. Berlin: Springer, 2007.

Thompson, Evan. Waking, Dreaming, Being: Self and Consciousness in Neuroscience, Meditation, and Philosophy. New York: Columbia University Press, 2015.

Tononi, Giulio. “Consciousness as Integrated Information: A Provisional Manifesto.” Biol. Bull. 215 (December 2008): 216-242.

Turing, Alan M., and Darrel Ince. Mechanical Intelligence: Collected Works of A.M. Turing. Amsterdam - New York: North-Holland - Elsevier Science, 1992.

Ulanowicz, Robert E. The Third Window: Natural Life beyond Newton and Darwin. West Conshohocken (PA): Templeton Foundation Press, 2009.

van Dijk, Jeroen B.J. “An introduction to process-information.” In: Michel Weber and Ronny Desmet (Eds.). Chromatikon: Yearbook of Philosophy in Process, Vol. 7, 75-84. Louvain-la-Neuve: Les Éditions Chromatika, 2011.

––––––––––. “The process-informativeness of nature.” In: Lukasz Lamza and Jakub Dziakowiecz (Eds.). Core Issues in Contemporary Process Thought. 2016. (forthcoming)

van Fraassen, Bas. Scientific Representation: Paradoxes of Perspective. Oxford: Clarendon Press, 2008.

Velmans, Max. Understanding Consciousness (2nd edition). London - New York: Routledge, 2009.

Veneziano, G. “Construction of a crossing-symmetric, Regge-behaved amplitude for linearly rising Regge trajectories.” Nuovo Cimento A 57 (1968): 190-197.

von Kitzinger, Eberhard. “Origin of Complex Order in Biology: Abdu’l-Bahá’s concept of the originality of species compared to concepts in modern biology.” In: Keven Brown (Ed.). Evolution and Bahá’í Belief: ʻAbduʼl-Bahá’s Response to Nineteenth-century Darwinism. Los Angeles: Kalimat Press, 2001.

von Neumann, John. Mathematical Foundations of Quantum Mechanics. Princeton: Princeton University Press, 1955. (First published in German as: Mathematische Grundlagen der Quantenmechanik. Berlin: J. Springer, 1932.)

von Uexküll, Jakob, and Marina von Uexküll. A Foray Into the Worlds of Animals and Humans: With a Theory of Meaning (Joseph D. O’Neil translation of the 1940 edition). Minneapolis: University of Minnesota Press, 2010.

Weekes, Anderson. “The Many Streams in Ralph Pred’s Onflow.” In: Michel Weber and Pierfrancesco Basile (Eds.). Chromatikon: Yearbook of Philosophy in Process, Volume 2, pp. 227-244. Louvain-la-Neuve: Presses universitaires de Louvain, 2006.

Wheeler, John Archibald. “Law without law.” In: P. Medawar and J. Shelley (Eds.). Structure in Science and Art. Amsterdam: Elsevier, 1980 (pp. 132-154).

––––––––––. “Information, Physics, Quantum: The Search for Links.” In: Hey, Anthony J., and Richard P. Feynman (Eds.). Feynman and Computation: Exploring the Limits of Computers. Cambridge (MA): Perseus Books, 1999 (pp. 309-336). First published in: S. Kobayashi, H. Ezawa, Y. Murayama and S. Nomura (Eds.). Proceedings of the 3rd International Symposium on Quantum Mechanics in the Light of New Technology, Tokyo, August 28-31, 1989. Tokyo: Physical Society of Japan, 1989 (pp. 354-368).

Whitehead, Alfred North. An Enquiry Concerning the Principles of Natural Knowledge. Cambridge: Cambridge University Press, 1919.

––––––––––. The Concept of Nature. Cambridge: Cambridge University Press, 1920.

––––––––––. Modes of Thought. New York: MacMillan, 1938.

––––––––––. Process and Reality: An Essay in Cosmology. New York: Free Press, 1978. (Revised edition by David R. Griffin and Donald W. Sherburne; originally published in 1929.)

Woit, Peter. Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. New York: Basic Books, 2006.

Wolfram, Stephen. A New Kind of Science. Champaign (IL): Wolfram Media, 2002.

Wood, D.C. “Action spectrum and physiological responses correlated with the photophobic response of Stentor coeruleus.” Photochemistry and Photobiology 24 (1976): 261-266.

Zupko, Jack. “Jean Buridan.” In: T. F. Glick, S. J. Livesay, F. Wallis (Eds.). Medieval Science, Technology and Medicine: An Encyclopedia. New York - London: Routledge, 2005 (pp. 105-108).