Breaking the Phoenix Cycle: an integrative approach to Innovation and Cultural Ecodynamics

Essays based on briefings prepared for the Aquadapt and TiGrESS projects

by Nick Winder

School of Historical Studies, University of Newcastle upon Tyne, NE1 7RU, United Kingdom. [email protected]

Foreword

Breaking the Phoenix Cycle is about cultural ecodynamics - the dynamic coupling between the biosphere and the world of beliefs - and about the way our attempts to understand and manage these processes have changed scientific enquiry. The crib-sheet (Thematic Glossary) at the end summarises the principal arguments. If you get bogged down or need a reminder, use it freely. Notes and pointers to further reading are provided in passages labelled 'orientation' at the beginning of each section. Use or skip over these as the spirit moves you.

I write as a participant observer rather than as a scholar, so the book is more or less self-contained. It is organised into nine thematic sections, each containing a few short essays. Some deal with theoretical matters, some with method, others with the management of research projects or policy in respect of knowledge and innovation.

Anyone capable of reading this page knows a great deal about cultural ecodynamics. We are humans, programmed by our genes and experience to engage in those processes. However, for many that knowledge is tacit and unexamined. This is a formidable obstacle to public and, indeed, professional engagement with recent policy initiatives. Drives to promote innovation, the knowledge-based society and competitive sustainable growth, for example, presume a consensus that does not yet exist. This book is intended to stimulate discussion and so facilitate that consensus.

My central thesis is that many communities get locked into a pernicious cycle of synthesis, paralysis, conflagration and renaissance that retards our ability to innovate, especially at times when the need for new knowledge is most urgent. Our well-being and possibly even our survival as a species may be determined by our ability to break the Phoenix Cycle.

Copyright Nick Winder © (excepting quoted materials) You may read, copy, quote and distribute this text freely (on- or off-line) subject to the understanding that the text has been provided without warranty of any sort.

Newcastle upon Tyne, February 2005


Contents

Acknowledgments

Section 1: Rigour and Imagination
1. Integrative Problems
2. The Phoenix Cycle
3. Disciplines and Communities
4. Knowledge Creation
5. The Natural History of Knowledge
6. The Modern Way of Reasoning
7. Realism and Policy-Relevant Research

Section 2: Ontological Problems
1. Mode 1, Mode 2 and the Universities
2. Generalising System Theory
3. Systems Ontology – Old and New
4. Valorisation
5. Systems and Ontological Problems
6. What's What and Who's Who?

Section 3: Space-Time and Understanding
1. Pseudo-Systems
2. Time Geography
3. Space, Time and Memory
4. Picturing the Cognitive Process
5. Appreciation
6. Value Judgments
7. Empathy and Explanation
8. Innovation Jolts

Section 4: Dynamic and Historical Problems
1. The Uncertainty on a Teapot
2. Mysticism and Dynamic Systems
3. Dimensional Coherence
4. The Science of the Unreal
5. Hard Science, Soft Science
6. Reality Judgments in Hard and Soft Science
7. Dimensional Coherence and Boundaries
8. Systems Modelling

Section 5: Reasoning a fortiori
1. Strong and Weak Beliefs
2. Probability Methods
3. Bayesian Inference
4. Many Worlds
5. Ockham's Razor

Section 6: Integrative Research
1. Professional Knowledge Creation
2. Why Study Integrative Research?
3. Knowledge and Information
4. Values and Empirical Testability
5. Probabilities and Interpretation
6. Science as the Rational Pursuit of Knowledge

Section 7: Managing and Regulating Innovation
1. Regulation and Management
2. Knowledge System Theory
3. Regulatory Noise Abatement
4. The Funding Agency's Appreciative Setting

Section 8: Deducing the Limits of Logic
1. Are Humans Machines?
2. Loose Ends
3. Marx and Spencer
4. Realism and Russell's Paradox
5. Philosophical Subtleties?
6. Integration and Advocacy

Section 9: Appreciating Cultural Ecodynamics
1. Defining Evolution
2. The Challenge of Mendelian Genetics
3. Early Subsistence Strategies
4. Ratchets and Co-Evolution
5. Domesticating Humanity
6. Socio-Natural Policy and Artificial Selection
7. Education or Acculturation
8. Cranks and Boffins

Thematic Glossary

Acknowledgments

This book draws on research funded by the British Academy, English Heritage, the Science and Engineering Research Council, the Leverhulme Trust, the Commission of the European Union under three consecutive Framework Programs and the Structural Fund of the European Union (among others). I am grateful for permission to re-use material originally published in: 'Towards a theory of knowledge systems for integrative socio-natural science.' HER Volume 11, pages 118-132, 2004.

My interest in knowledge dynamics was quickened by colleagues on EPPM [EV5V-CT94-0486] and developed as part of my contribution to TiGrESS and Aquadapt. Paul Jeffrey, Brian McIntosh and Roger Seaton worked closely with me on these projects; my intellectual debt to them is enormous. David Hooper taught me about clouds and, coincidentally, about social systems, while Claudia Dürrwächter's decision to study innovation gave me an incentive to put these ideas in order. Heather Winder helped me nurse this manuscript (and many others) through several drafts in short order. I am sincerely grateful to them all.

I am indebted to Professor Torsten Hägerstrand who drew my attention to his paper Survival and Arena and suggested we needed an effort of re-conceptualisation more than another computer model. Without that green light I would have been reluctant to tinker. Finally I thank Professor Geoff Bailey who suggested I focus on one issue, leave out the equations and hint readers towards a course of action. Good advice (as always) operationalised imperfectly (as usual).

The opinions expressed here are my own and have not been endorsed by any of the funding agencies, individuals or consortia who facilitated the work.


Section 1: Rigour and Imagination

…we shall know a little more by dint of rigor and imagination, the two great contraries of mental process, either of which by itself is lethal. Rigor alone is paralytic death, but imagination alone is insanity.
Gregory Bateson

You are thinking historically...when you say about anything, 'I see what the person who made this (wrote this, used this, designed this, etc.) was thinking.' Until you can say that, you may be trying to think historically but you are not succeeding.
RG Collingwood

[Knowledge is] … a shared mental approximation to the real world that permits human beings to co-operate in acting upon it
Vere Gordon Childe

What one fool can do, another can. (Ancient Simian Proverb)

Orientation

Here I set the scene for a re-assessment of secular modernism that will explore early modern science (ontology), later modern science (dynamics) and cultural ecodynamics (the new science of socio-natural complexity). I will discuss probabilistic reasoning, alternative theories of evolution, the rediscovery of uncertainty, the systems revolution, time geography, the mode 2 revolution and the post-modern counter-revolution.

Modernism affirms the distinction of reality (that which is independent of human belief) from existence (that which can be observed). A credit rating exists and yet is not real. Truth is real but does not exist. Science, therefore, cannot investigate truth directly; it can only be the rational search for useful beliefs - an endless process of reconceptualisation, similar, in many respects, to history or theology but with a hard edge of formal method. To seek the metaphysical essence of science, as philosophers often do, is like trying to catch moonshine in a bag.


Over the last half-century scientific contractors have been recruited to work in integrative problem-domains where issues of equity are intimately bound to belief. They must create new scientific knowledge that helps resolve conflicts of interest by facilitating yet more new knowledge among stakeholders. The methods of science are changing to accommodate this self-referential dynamic and tenuous bridges are being built between the natural sciences and the humanities. So the relationship between science and society is on the move again, and the first focus of that re-structuring must be the process of integrative research itself.

Epistemic diversity is a resource as valuable in cultural ecodynamics as biodiversity is in ecology. However, a research team that shatters into factions can de-stabilise communities whose cultural and natural life-support is weak. Yet it seems to be in the nature of research teams to shatter. Every community depends on tacit, culturally embedded knowledge that forms part of a mature individual's sense of identity. This is why elders become defensive when the young ask difficult questions. It explains how sadistic interrogators can erode self-respect by violating taboos and denigrating tacit beliefs. It is also why, despite the explosion of journals and publishers welcoming inter-disciplinary contributions, it remains difficult to publish work that integrates the hard and soft sciences.

In academia some tacit knowledge is transmitted through the pedagogic literature, but most is passed on orally from teacher to student. Every academic community has exacting standards of professional accreditation, and no one who writes about novice material will be taken seriously. So when you write about tacit knowledge, reviewers will dismiss your work as trite, philosophical or irritatingly tutorial. If you can negotiate the labyrinth of 'peer review', you must then deal with your colleagues - squirming with embarrassment at your naivety. Finally you must face down the academic silverbacks who just want to punish you for violating their territory. Welcome to the joys of integrative research!


1. Integrative Problems

The quotations at the beginning are from:

Bateson, G (1979) Mind and Nature: A Necessary Unity. London: Wildwood House.
Collingwood, RG (1939) An Autobiography. Oxford: Clarendon Press.
Childe, VG (1956) Society and Knowledge: The Growth of Human Traditions. New York: Harper. p. 54.
Thompson, SP and M Gardner (1998) Calculus Made Easy. London: Macmillan.

For further reading about the history of philosophy see any good encyclopaedia. Encyclopaedia Britannica is available on CD. There are others. In addition I have made use of the following:

Brundtland, G (ed) (1987) Our Common Future: The World Commission on Environment and Development. Oxford: Oxford University Press.
Feyerabend, P (1978) Science in a Free Society. London: New Left Books.
Haskins, CH (1927) The Renaissance of the Twelfth Century. Cambridge, Mass.: Harvard University Press.
Kuhn, TS (1962) The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Marenbon, J (1987) Later Medieval Philosophy (1150-1350). London: Routledge.
Moody, EA (1935) The Logic of William of Ockham. London: Sheed and Ward.
Popper, K (1959) The Logic of Scientific Discovery. London: Hutchinson.
Russell, B (1961) History of Western Philosophy and its Connection with Political and Social Circumstances from the Earliest Times to the Present Day (second edition). London: Routledge.
Vickers, G (1972) Freedom in a Rocking Boat. London: Pelican.


The Brundtland conception of sustainability (meeting the needs of today without jeopardising those of tomorrow) forces us to frame questions of inter-generational equity in terms of things (needs). Discussions about policy in respect of sustainability are thereby reduced to an ontological power struggle (what do people really need?), to questions of mitigation (can technologists help us meet current needs sustainably?) and of adaptation (must we re-adjust our needs to accommodate changing circumstances?). Mitigation and adaptation, though ferociously complex, are problems that can be resolved. For example, it is now widely accepted that the process of climate change is too far advanced for mitigation to be a complete solution - we will have to adapt to a changing climate. Ontological problems, however, are harder to solve because there is no consensus definition of human ‘needs’. Sustainability can be re-defined by a tiny effort of reconceptualisation as meeting the aspirations of today without jeopardising those of tomorrow. Aspirations refer to valued processes rather than merely to things. We aspire to live in a prosperous, peaceful environment, want our children to grow healthy and strong and find fulfilling lifeways. The ontological problem almost disappears when we do this. If the effect of our innocent aspirations on Global Commons (the seas, atmosphere and biosphere) is to create a world in which the same aspirations cannot be sustained in the future or, for that matter, in less fortunate regions today, our current socio-natural configuration is unsustainable. Shifting attention from things to processes banishes three thorny problems. We need not commit to providing uniform distribution of goods and services through space-time. Neither need we haggle about equity - trading our goods off against those of an unborn generation. We do not even have to trade our aspirations off against those of future generations. Rather, we must find ways of living (in a prosperous, peaceful environment where children can grow healthy, strong and be fulfilled) that leave the world in a

condition where others can do the same. The need to make difficult decisions - to adapt and mitigate - is invariant under this transformation, but there is less of the hair shirt in the new model. Consensus, though difficult, is at least possible because we are no longer playing a zero-sum game in which every player’s gain is another’s loss. The older consensus, which assigns values to goods rather than to processes, is deeply embedded in the culture of an influential group. We cannot sweep it away. How, then, are we to advocate the new model without provoking a destructive clash of cultures? The answer, in a nutshell, is that we must uncouple belief from reality - shifting attention from things to processes and negotiating shared values that serve as a basis for discussion. Sustainability is representative of a significant type of problem in which tacit values lead to logically irreconcilable beliefs about reality. These integrative problems can create conflicts on spatial and temporal scales from the village square on Market Day to a World War lasting years. We should study them as a class. Strong consensus gives us a sense of purpose, underpins co-operation and creates a strong system. However, even the strongest and most vigorous systems falter as consensus ceases to be part of the solution and becomes part of the problem. The custodians of consensus often confound custom with universal truth and mount a ferocious campaign to defend it. The cost-benefit ratio of reifying knowledge is dynamic and irreducibly inequitable. The benefits accrue in the present, but the costs are met in the future - usually by someone else. The reification of knowledge is a monstrous trap that can destroy nations and poison seas but Geoffrey Vickers has told us that, by solving the trap we can learn something about ourselves. That insight is the lens that brings this book into focus. By studying knowledge integration we may learn things about the human condition that generalise to the whole class of integrative problems.


2. The Phoenix Cycle

There is a deep connection between common language and logic that makes language an unhandy instrument for describing logical unconnectedness and human agency. Language is fine for describing possible cause and deterministic (necessary) cause, because these can be expressed in a logical way - we can 'do' probability and necessity - but the study of human beliefs as an environmental agency requires us to describe elective change, i.e. choice. People make choices that can change the relationship between cause and effect in socio-natural systems.

Cultural ecodynamics, the scientific approach to choice, possibility and necessity, is self-referential. Scientists in this field must understand that what we (scientists and non-scientists alike) believe determines what we do and can conceive. We cannot avoid incorporating value judgments into the scientific process. This ethical dimension is well understood in applied research, but must also be considered by pure scientists. An academic economist, ecologist or meteorologist, for example, can change the course of history simply by changing our collective understanding of social and natural life-support systems.

Many scientists are committed to the positivist ideal that science deals with objective facts and is (or should be) value free. However, scientists who study cultural ecodynamics must handle systems of which they, the scientists, are an integral part. The science of human agency must consider the effects of scientists not as impassive observers, but as agents whose beliefs and values help determine human behaviour, which increasingly shapes the biosphere. If we really want a science-based response to global warming, tropical deforestation, or third world poverty, the sharp line between science and metaphysics must go and we must return to a model that treats science as the rational pursuit of useful beliefs. This is modernism - an approach to knowledge creation that can easily be traced back to the fourteenth century.


Modernism is in decline. Indeed, some years ago I came to feel that we stood on the threshold of a Dark Age of warring cultures. In despair I wrote:

It has become a source of personal sadness for me that modernism, an important part of my own cultural heritage, seems now to be dying. However, I had always assumed that its death was by natural causes; that the rise of secular democracies made people feel so secure in their freedom that they no longer needed or wanted the protection modernism had to offer… (However, I have been)… forced to think about the relationship between political bullies and the hungry, cynical, amoral, frightened mass of humanity that does their dirty work. Is the post-modern world really so safe?

The events of the last few years confirmed those fears and led me to focus on one facet of the predicament we face - the temporal autocorrelation between chaos and innovation. Bertrand Russell described the problem that interests me thus:

A stable social system is necessary, but every stable social system hitherto devised has hampered the development of exceptional artistic or intellectual merit. How much murder and anarchy are we prepared to endure for the sake of great achievements such as those of the Renaissance? In the past, a great deal; in our own time, much less. No solution of this problem has yet been found, although increase of social organisation is making it continually more important.

This regularity often manifests as a Phoenix Cycle of revolution, renaissance, stability and revolution. Stable societies discourage intellectual diversity and abstract reasoning but, as circumstances change, are ill-equipped to respond by creating new knowledge. Eventually, a demographic catastrophe (war, famine, plague or mass-migration) weakens institutions and makes room for new ideas. As the crisis passes, administrative and educational structures are rebuilt and the new ideas are consolidated, leading to stability and eventual paralysis.

The problem of facilitating change without plunging the world into chaos is the great challenge for science in the twenty-first century and my principal reason for writing. We have to understand the way knowledge is created and find ways of managing that process in a stable, peaceful society.

The history of ideas is a scholarly minefield because the same word is sometimes used to describe different concepts while the same conceptual structure is often expressed in different terms. However, it is rather easy to explore a single philosophical problem and explain how and why the polarity of scholarly language shifts as often as it does. This is what I will do in the context of modernism - the intellectual platform on which I wish to set the new science of cultural ecodynamics.

We will not find all the linguistic and conceptual structures needed to build a new science by raking over the ashes of old ones. We have to innovate. Some of the terms I need already exist and others can be re-defined to serve my needs. So I review the ideas that have influenced me before focussing on the recent history of system method and suggesting adjustments. The result is a revised systems ontology that can be used to manage knowledge integration or to describe recent developments in natural history with equal facility. I apologise to system theorists inconvenienced by this revision but feel the case for doing this is now urgent. Conventional systems ontology cannot express the range of ideas needed to explore cultural ecodynamics. Either we revise our conceptual structures to facilitate a peaceful renaissance or we wait for the Phoenix Cycle of chaos and re-birth to run its course. I am not prepared to do that.

3. Disciplines and Communities

Over the last few centuries scientists and humanists have formed factions that argue about differences of emphasis, method, philosophy and belief. Even such words as 'science', 'academic' or 'intellectual' have different meanings for different people and in different countries. The phrase "human and social science", increasingly heard in Europe, sounds perfectly natural in France, where a post-modern anthropologist can also be a scientist, but jars on the ear in Germanic countries and the U.S., where the boundaries between the humanities and the sciences are sharp wedges riving the fabric of university life. There are academic factions wherever there are universities but disciplinary boundaries are in different places in different countries. It is difficult, therefore, to resist the conclusion that some factions are functionally irrelevant and the boundaries between them only exist so academics have something to hide behind when they snipe at each other.

To many researchers, particularly those at the start of their careers, the boundaries between epistemic communities appear as obstacles to empowerment or even as part of a conspiracy to sustain the status quo. However, the situation is more complex than this. Disciplines are administrative inventions. The boundaries between history and classics, chemistry and biology, sociology and anthropology, for example, are permeable and people cross them freely. The boundaries between communities, however, are cultural, not administrative artefacts and much harder to traverse.

The difference between a discipline and a community corresponds, in broad terms, to that between etic and emic as drawn in linguistics or hermeneutics (the interpretation of text). It is exemplified in the distinction of 'phonetic' from 'phonemic'. An etic description is pragmatic and imposed on a social group from the outside. When post-modernists speak about 'modernism', for example, they characterise a bundle of attitudes and predispositions and give it a name. This is an etic description, focussed on external observables at the expense of personal motivation and beliefs.


An emic description, however, may be very difficult to make on external criteria alone. It calls for a teleological analysis that takes account of an actor’s social environment, explicit and tacit beliefs. It must also interpret those beliefs in the light of behaviours that give them significance or value to the actor. When I speak of modernism, for example, I speak of the tacit beliefs and values of a community, not a convenient bundle of externally observable attributes. If I become a little prickly about post-modernism it is because the caricature that post-modernists present devalues the community of which I am a member. Research that crosses disciplinary boundaries is often straightforward, but integrative research that links knowledge communities is very demanding. Knowledge communities are often logically unconnected and sometimes incoherent in relation to each other. Even reconcilable knowledge communities may address the same issues on different spatial and temporal scales. National or supra-national policy in respect of water management, for example, cannot easily take account of the circumstances experienced by the village mayor or the farmer who must spray crops in a frost-pocket over on the west field. Each responds to little accidents of history and geography and develops a policy that is, to some extent, logically unconnected to those above and below. The logical unconnectedness of these knowledge domains does not prevent us making use of them. If we need to explain patterning at the meso-level, we call in a regional geographer. If we need to know about very local circumstances, we call in an anthropologist or sociologist. But the knowledge we produce does not form a neat axiom system from which we can generalize to the whole world. We just set these insights alongside each other, check them for logical coherence and take such account of them as seems appropriate in the circumstances - tailoring policy instruments to the spatio-temporal scales on which we work. Logical unconnectedness imposes a significant burden on a research team because it must monitor processes at a wide range of spatial and temporal scales. Each of these scales is

the specialism of a different knowledge community. Decisions about what spatial and temporal levels will be investigated are pragmatic. Moreover, accidents of history do not stop happening simply because there is a group of scientists in town. Even the best policies will have unforeseen and possibly undesirable consequences and must be monitored and adjusted continually. A policy that works well on a macro-scale may be a disaster on the meso- or micro-level.

This being so, we must be a little careful about breaking intellectual boundaries down. There is a danger that the over-zealous destruction of boundaries will produce a reactionary omni-science in which intellectual diversity is discouraged and the undoubted advantages of working simultaneously on many scales are lost. We need to be bold, but not too bold: intolerant of rhetoric, but respectful towards genuine intellectual diversity. This is not merely an ideological statement; it is a pragmatic judgment consistent both with practical experience and rational analysis. We live in a logically unconnected world and should not dismiss another person's beliefs because they do not make sense in the light of our own experience.

4. Knowledge Creation

Knowledge creation is a social activity. We may be astonished that Mendel discovered the principles of particulate inheritance and Leonardo da Vinci understood the cause of arterio-sclerosis, but these insights were never communicated to a wider community and were re-discovered in the twentieth century. Clever recluses intrigue historians, but history itself ignores them. This is why, in the jargon of contemporary politics, the word 'innovation' is not applied to the moment Archimedes got into the bath and had a bright idea about specific gravity. Innovation refers to the moment he shouted "eureka!" and ran naked to launch it into the public domain. There is no innovation without communication.

Modern knowledge communities use a mixture of written and verbal communication to maintain and develop knowledge. However, as academic communities became global, the output of literature exploded and the impact of any given document became ephemeral. Like specks of plankton sinking into the abyss, books and articles that lie unread in the library stacks are the smudgy artefacts of old knowledge systems. Scholarship is the process of bringing those traces back into the light and forcing new life into them. Knowledge is not data, scholarly literature or technical know-how, it is a living tradition: the shared beliefs that enable people to communicate clearly and co-operate effectively. There is no knowledge in a library, only texts. Knowledge is carried in and out by those who interpret these artefacts.

Some philosophical arguments are of such antiquity they can be used as a backdrop against which changing patterns of belief can be interpreted. One has to understand the substantive issues, of course, and humanists provide a simple introduction in first-year lectures and tutorials before moving on to more interesting material, which deals with the way words are re-defined and questions fade and re-emerge through time.


Natural scientists, on the other hand, are less interested in communities of scholars than humanists because their primary focus is the problem itself. The community working on the problem is important, but secondary. In unpublished tutorials and lectures, students often get a whistle-stop tour of the history of a scientific field but this is background information. Between leaving school and starting their professional careers, natural scientists immerse themselves in a problem-domain and develop a real working familiarity with problem-solving methods. Consequently, a natural scientist and a humanist can be talking about the same issues, using the same words and yet fail to communicate because habit leads them astray. The scientist wants to specify the problem and then solve it. The humanist wants to know about the social and cultural factors that caused the problem to be re-stated and reinterpreted through space and time. Little flips and shifts in usage add interest to the narrative and are part of the fun. Novices often get lost in this thicket of crossed purposes. Perhaps a natural scientist reading introductory textbooks finds some of the problems interesting. Surely, the only reason for specifying a problem is that you propose to solve it. From the scientist’s perspective, then, the humanist has made a poor fist of it. Some scientists actually solve the problems, publish the results and are hurt when humanists rubbish their work. Alternatively, a humanist gets hold of popular quantum or chaos theory and realises that, as a piece of philosophy or critical scholarship, what has been written is crass. S/he writes earnest papers explaining that all scientific knowledge is socially constructed. The only people impressed are other humanists. People find it easier to communicate if they have common interests. This is why teleology is such a powerful tool for those trying to interpret human action. The trick is to look for coherence between words and interests. Named groups like ‘historians’ or ‘Christians’, for example, resist teleological analysis because their stated beliefs and interests vary from place to place and generation to generation. The sense of continuity we get from the persistent name is illusory.


However, when people communicate effectively or use language in the same way, they are probably acting harmoniously too. One can empathize with the intelligence behind those actions and understand why people did what they did. Empathy is one of the most useful tools of the branch of critical scholarship that deals with hermeneutics. The work is harder if we know what people did but not what they believed (as in prehistory) or if we know what people believed but not what they did (as in much of ancient philosophy). For this reason an archaeology department often houses prehistorians taking a natural science approach and proto-historians who favour a humanistic approach. If you imagine archaeology to be a coherent intellectual discipline you will be disappointed. Sometimes it feels more like a scholarly battleground. All becomes clear if you apply the teleological principle because you see that archaeology is an uneasy alliance of at least two knowledge communities, each with different beliefs and purposes.

This raises questions about where new beliefs come from, and the natural answer is that beliefs change when people's interests change. Under certain circumstances, beliefs can be modified and the new beliefs may change the course of history but only if they 'strike a chord' with an influential sector of society. When this happens, a new congruence of interest and belief will be manifest. This is innovation.

Communicating new beliefs requires that one party be trying to transmit information and that others be willing to receive it. Often this is not the case and an influential group will find its interests threatened by the new perspective. Rhetorical appeals to common sense, common knowledge and conventional moral standards are the commonest defence mechanisms of an unreceptive audience.


5. The Natural History of Knowledge

The use of teleology in hermeneutics raises an interesting question: if, as I assert, people believe what it suits them to believe, why do so many hold passionate opinions about universal truth? Surely, they should temper their pragmatism with a little honesty? A trite answer would be that it is in their interest to believe in universal truth and in themselves as the custodians of that truth. This is a tenet of 'post-modernism', the idea that belief in universal truth and progress promoted by political and intellectual elites has locked human ecosystems into a self-destructive path. This critique has much to recommend it, but has fuelled a rather silly, tub-thumping debate about intellectual hegemony and analytical rigour that only demonstrates how completely communications have failed.

There is more to consider than the calculated self-interest of priests and politicians. Young people with dependent children have accepted a martyr's death rather than renounce deeply held beliefs, and that obliges us to contemplate some unfamiliar ideas about what a person's interests may be. From an evolutionary perspective, one might imagine interests that kill people would be eliminated by natural selection, but this is not so. All over the world people lead impoverished, truncated lives and suffer social exclusion or imprisonment for refusing to renounce what they believe are universal truths.

To understand the natural history of knowledge is to add an additional layer of detail to cultural ecodynamics. Just as the evolutionary constraints imposed by the vertebrate body plan caused ichthyosaurs and dolphins to develop similar morphologies; so the constraints imposed by our cognitive equipment, in conjunction with socio-natural challenges, produced epistemological resonances in the history of western thought. These resonances are at once transient and persistent, like choppy water over a submerged reef.

Early western philosophy provides a good basis for exploring this. The picture becomes more complicated as the volume of literature increases.

Plato and Aristotle were grappling with deep philosophical issues about ontology (the study of existence), metaphysics (the study of ultimate reality) and epistemology (the study of how humans know). Plato thought some concepts static and dependable, while the world of sensory experience was dynamic and unreal. He believed the static categories had an ideal, immutable template or form. His philosophy, therefore, was inferential - philosophers must expect earthly experience to be misleading and try to apprehend the forms by a mixture of observation and disciplined contemplation. For Plato, forms were more real than things. Mathematical structures like numbers, for example, were not human artefacts; they were real, i.e. independent of human experience. If all the humans in the world were destroyed by some cosmic disaster, two twos would still equal four.

Plato's ideas were very attractive to later Jewish, Christian and Moslem thinkers. The Neo-Platonic conception of God was as unlike those dissolute hooligans on Mount Olympus as one can imagine. Under Plato's influence, God became the author of a set of ideas or forms imperfectly manifest on earth. Humans could apprehend those forms through prayer and philosophy. By the late eleventh century in western Europe students of biblical hermeneutics had adopted a realist stance, arguing that human knowledge could be real - that it was humanly possible to make statements that were true always and everywhere and to know these statements were true. When the evidence of our senses and our biblical knowledge were inconsistent, it was knowledge and not evidence that had priority. The realist idea was always a conservative force in religious politics and seems to have been justified with reference to Platonic formalism.

In western Europe at this time, much of Aristotle's work had been lost. The rest was seen as an interesting footnote to Plato. Nonetheless, a small but vocal opposition to realism was influenced by Aristotle's work on syllogistic logic and particularly his theory of universals - named classes of thing whose existence was implied by the use of nouns like 'man' or adjectives like 'red'. The latter, for example, implies the existence of a set of all the red things that ever were, are or

will be. Can we, who are trapped in space and time, be sure that this universal is real? Those who denied the reality of universals were mediaeval sceptics. Probably the most famous of these was Peter Abelard who irritated the dogmatic hierarchy and was driven into obscurity. After his death a more complete corpus of Aristotle’s work found its way back to the West from the Crusades; when political tensions eased it became clear that Abelard had been on to something. Sceptics found Aristotle’s philosophy exciting because he distanced himself from Plato’s belief that ideas were independent of the substance of individual things (existence). It was not disciplined contemplation, but science and reason that led to a more complete understanding of the species and genera of things. Aristotle provided a philosophical justification for reform. Aristotle believed that, when statements or their corollaries were mutually contradictory, one or more of the statements must be false. Most of his logical research focussed on the syllogism, which he privileged as the key to linguistic reasoning. Thomas Aquinas undertook the task of reconciling Plato and Aristotle and his synthesis became the mainstay of scholastic theology. Aquinas was probably more interested in closing the rift between old-guard Platonists and Aristotelians than in historical veracity. However, his influence on western scholarship was enormous. His synthesis held until the fourteenth century when resistance to Roman authority in England and the emerging Germanic states could not be contained. As in the early twelfth century, the debate between sceptics and realists reflected political tensions between conservatives and reformers in the church. As the Roman Church lost political control of parts of northern Europe, William of Ockham and others went back to Aristotle to recover syllogistic method. Ockham drew a sharp line between reality and belief, arguing that universals were named classes - intentions of the mind that existed by negotiation. Universals were socially constructed.


Ockham's pure form of Aristotelian reasoning gave logic absolute priority in empirical science but denied it any bearing on key matters of belief. He was excommunicated from the Roman Church, but early Reformation scholars in Northern Europe had the choice between 'ancients' (rational realists) and 'moderns' (rational sceptics). However, as the Reformation began to encounter serious resistance, the Protestant hierarchy lurched into biblical realism. Europe entered a post-modern age of intolerance and oppression. Modernism survived as a diverse sceptical tendency protected by the humanist Popes and wealthy patrons of the renaissance. It became a secular movement.

Up to the seventeenth century, science dealt with ontology (questions of existence) and humanistic research tackled metaphysical questions about reality. Scientists were not necessarily atheistic. Indeed, many were devout sceptics. Ockham, for example, taught physics, was deeply interested in metaphysics and yet denied the objective reality of human knowledge. Humanistic research and science were different types of activity, not different types of person. Early modern science never progressed beyond ontological problems. Aristotle's complicated ideas about causality (reviewed in every good philosophy of science textbook) are irrelevant to later modern science. His only lasting contribution was to the life sciences, particularly systematic biology (taxonomy), which is explicitly focussed on ontological questions. Taxonomists still use a neo-Aristotelian terminology of species and genera in their work.

The development of algebraic methods in the seventeenth century was such a significant step that many scientists and historians maintain a sharp distinction between the early modern (static, descriptive) and later modern (dynamic, predictive) approaches, with the cusp somewhere between Galileo and Newton. The new methods cleared a log-jam of unsolved problems relating to dynamics and change. The later modern period followed the collapse of an old order and a host of intellectual niches came vacant.

As the sectarian power struggle for Europe slowed to an uneasy stand-off, scientists were no longer obliged to deal with accusations of heresy or argue about universal forms that might exist even in an unpopulated world. Most felt they had better things to do than argue with mediaeval crackpots. By the late seventeenth century philosophers and secular revolutionaries were celebrating a great scientific renaissance, the Enlightenment - a return to sanity and common sense that would liberate humanity from sectarian violence and poverty.

However, the new methods revived arguments about the essence of science. Scholars like David Hume noted the early modern distinction of ontological science (dealing with observable existence) from metaphysical non-science (dealing with reality) but used it as a lever to prize the sciences from biblical hermeneutics. To stray into metaphysics was somehow unethical - a violation of scientific values. Others, like Immanuel Kant, found nothing objectionable in trying to probe the reality beyond sense experience. There was no conflict provided one accepted that all human knowledge is imperfect and such insights cannot be imposed on others.

In the nineteenth century the social and life sciences were struggling to assert their independence from dogmatic theology. Auguste Comte, a French Republican who coined the word 'sociology', developed a strict ontological approach that focussed exclusively on the positive (empirical) evidence. The positivists contended with the neo-Kantians up to the early twentieth century when another demographic catastrophe - the First World War - cleared the niches again. By the 1930s there had been revolutions in sub-atomic physics and mathematics. Ecology had come into existence and an extreme anti-metaphysical polemic, logical positivism, was being propagated from Vienna. For the logical positivists, non-science (metaphysics) was non-sense (mere biblical hermeneutics) and empirical existence was reality. Although many humanists and some scientists disagreed, the tide of political patronage, particularly in Germanic countries and the U.S., favoured the logical positivists who annexed dogmatic metaphysics by stealth - declaring themselves the custodians of objectivity.

Although the extreme position was modified in the run-up to WWII, the parallel roles of scientists in the Nazi period and dogmatists in the days of witch-hunts and inquisitions have persuaded many that this bickering about definitions was pernicious. If you must define science as an ideology rather than as an activity, then any criterion that excludes de facto scientists (Einstein, for example, who was deeply interested in metaphysics) is falsified by positive evidence.

Although rooted in the French Revolution, Comte's positivist agenda is even more influential in the U.S. and the Germanic countries. It is particularly active on the borders of applied and pure research - frustrating attempts to integrate and inflaming relations between the sciences and humanities, but extreme positions have been discredited almost everywhere.

General readers are often defeated by the scandalous collapse of semantic discipline over the last two centuries. The word ‘realist’ has re-surfaced as: one who believes the world of which our senses speak is real. This naïve realism consigned the giants of mediaeval scholarship to the scrapheap. It turned the sadistic inquisitor Torquemada into an anti-realist, while Ockham was re-invented as a realist. Naïve realism is a silly rhetorical flourish, but it persists. The ‘post-modern’ counter-revolution muddied the water further by denying the possibility of generalisation and allowing entities to multiply unchecked. Among postmodernists are some who deny the legitimacy of all scientific endeavour (relativists or irrational sceptics) while others oppose naïve realism on philosophical and methodological grounds - the latter are neo-modernists. Today the word ‘historicism’ often signifies the belief that history is predictable, but may mean the exact opposite. To be a ‘humanist’ is to be an atheist. To be a ‘sceptic’ is to be an anti-metaphysicist. There are at least two ways of being a ‘reductionist’. Ontology, metaphysics and epistemology have been so badly mauled in these semantic shenanigans one can no longer distinguish existence from reality from belief. Small wonder so many practitioners feel that studying the philosophy of science softens the brain.


6. The Modern Way of Reasoning

Greek philosophers and mediaeval schoolmen tried to develop knowledge communities in which sceptics and realists could co-exist. They favoured a dialectic approach to knowledge creation that was intended to give reason priority over rhetoric. In practice, of course, there was always an audience inclined to keep score and the process often degenerated into bullfighting. Nonetheless, at its best, the approach undoubtedly made room for diverse opinions, provided all those involved accepted the guiding principles of rationalism.

Ironically, rationalism's strength as a guardian of disciplined diversity stems from its limitations. Logic cannot prove truth or falsity but can test beliefs for logical coherence. Any belief system that satisfies rational criteria merits consideration whether we find the ideas congenial or not. Furthermore, any logically incoherent belief system is flawed, common sense notwithstanding. To be a rationalist is to believe that every logically incoherent set of statements must contain at least one false statement. This statement, frequently misunderstood, is not equivalent to saying that every coherent set is true. Consider the statements: All birds can swim; Harry the haddock is a bird; Harry the haddock can swim. The set is internally coherent but hardly plausible.

Rational method can be used to test the consistency of any set of propositions. For example, if I believe that all swans are white, that the object at my feet is a swan and that it is also black, I know that one (or more) of my beliefs is false. Perhaps some swans are not white, the object at my feet is not a true swan or it is not truly black. Rational method does not say which statement is false but does alert me to a flaw.

Testing theories is a two-stage process. One must first convert beliefs into formal statements and then check for coherence. People often take the first step unconsciously. The phrase 'this is a black swan' is as much a statement of belief as 'all swans are white' - it is predicated on the existence of sets of black objects and swans.
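The two-stage test just described can be made concrete in a few lines of code. The sketch below is mine, not the author's: it uses a deliberately toy representation in which particular beliefs are (thing, property) pairs and universal statements are (premise, conclusion) rules, and it shows that rational method can flag an incoherent set of beliefs without saying which belief is false.

# A minimal sketch (not from the book) of the two-stage coherence test:
# beliefs are first converted into formal statements, then checked for
# contradiction. The representation is a toy one chosen for illustration.

def coherent(facts, rules):
    """Forward-chain the universal rules over the particular facts and
    report False if any statement and its negation are both derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for thing, prop in list(derived):
                if prop == premise and (thing, conclusion) not in derived:
                    derived.add((thing, conclusion))
                    changed = True
    return not any((thing, "not " + prop) in derived for thing, prop in derived)

# 'All swans are white', 'the object at my feet is a swan', 'it is not white'
beliefs = {("this object", "swan"), ("this object", "not white")}
rules = [("swan", "white")]
print(coherent(beliefs, rules))  # False: at least one of the three beliefs is false,
                                 # but the test does not say which one.

The point of the toy example is exactly the one made above: the check exposes a flaw in the set as a whole, but deciding which belief to abandon remains a judgment that lies outside the logic.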


Sir Karl Popper made this mistake - taking the first step unconsciously - when he suggested that the difference between a science and a non-science was that science used the particular observation to refute the universal generalisation. That this was a mistake can be demonstrated using Popper's own method. Consider the universal statement:

• science is the investigation of falsifiable propositions

and the particular:

• comparative anatomy is a science

Popper had to resort to special pleading when confronted with mainstream sciences that violated his general rule. Rationalism is weaker than many imagine because it is usually impossible to express a non-trivial belief unconditionally. Illustrative examples can be found in any textbook of artificial intelligence. Consider the statement: ‘all mice have tails’. How many tails? If every mouse has one tail, what can we infer from this about the statement: ‘all swans have feathers’? How about: ‘every fish has a tail’, does this mean that there exists a tail which every fish has? Even simple statements contain implicit references to common knowledge and their truth is contingent on that knowledge. As Popper explained, exhaustive checks for logical consistency lead to infinite regress. If every mouse has a tail, what is a tail and what is a mouse? If a mouse is a furry rodent, what is a rodent and what does it mean to be furry? Every definition contains adjectives and speciesnames and these have to be defined. Soon the process of checking explodes beyond reasonable limits and we have either to accept that non-trivial beliefs have hidden contingencies or get lost in endless discussion. However, rationalism is also much stronger than many of the relativist critics of scientific method believe. Relativists are those who believe that there are no universal truths, only opinions constructed within some cultural context. A rationalist, however, cannot possibly agree.


The proposition:

• no statement is true

is a statement about statements and rationally incoherent. If it is true, it is necessarily false; therefore it is false. No competent rationalist is a relativist. This simple reasoning trick, based on Epimenides' paradox, sets a natural limit on policy-relevant science. Relativism lies outside that boundary because it is rationally incoherent.

This book is written from a modernist perspective so my personal opinion is clear. However, my purpose here is simply to present the post-modern debate to you in the form of a dilemma:

• The modern way of reasoning conserves disciplined diversity by giving reason priority over ideology. To modernists (physicists and metaphysicists alike) relativism seems like a return to the Dark Ages - a world where the triumphs of science are denigrated and there is nothing between dissidents and the rack but rhetoric and good intentions.

• Rationalism is the preoccupation of an intellectual elite that has grown powerful by ignoring underdogs whose interests are better served by giving ideology priority over reason. The triumphs of science have been overvalued and rationalism has sold out to rhetoric so many times, it has shown itself an unworthy champion of dissident rights.

This ideological dilemma has no obvious synthesis. To go one way is to decide to be a modernist; to go the other way is to decide to be a relativist. If you go neither way (accepting both statements as of equal merit) you may be ridiculed for your willingness to tolerate a non sequitur, but that indecision, too, has much to recommend it (as we will see in Section 5 below). It is possible that, when we get down to the level of the individual human being, the distinction of modernism from relativism simply cannot be sustained unless a conscious decision has been made. People who are not consciously thinking about this dilemma may shift their position imperceptibly as circumstances dictate, without any inconsistency of belief and behaviour. At the level of unplanned action and praxis, therefore, the categories we name ‘modernism’ and ‘relativism’ are ideological tendencies, not objects whose existence can be determined etically by making an empirical observation.

Thesis and antithesis are both consistent with the evidence; modernism has failed and succeeded, science has had its triumphs and its disasters. It is not merely that different people with different experiences draw different conclusions (though that is undoubtedly so). There is no reason in principle why different people with identical experiences should not also draw different conclusions or, for that matter, simply refuse to draw any conclusion at all.


7. Realism and Policy-Relevant Research I wish to propose as an axiom of cultural ecodynamics that: •

Humans can sometimes change the course of history.

This proposition is not self-evidently true (I will try to prove it for you in Section 8). It is certainly conceivable that we live in a clockwork world where the course of history is predetermined and nothing we do will make a difference, but if that were so you would have no strong reason to read my book and I would have no strong reason to write it. When we apply rational principles to this proposition we can prove that some statements, though true in practice, cannot be proven to be true: they may be logically unconnected to any set of true axioms at our disposal. This seems a little abstract, but is quite easy to illustrate. Suppose two people, Peter and Paul are playing a game of ‘scissors, paper, stone’ and have a shared knowledge base that allows them to deduce what one of them (Peter, say) will do. Paul can deduce when Peter will choose ‘scissors’ and choose the ‘stone’ that blunts them. But Peter could use the same knowledge to deduce his own bid, infer Paul’s counterbid of ‘stone’ and choose the ‘paper’ that wraps it. However, if he does this, Paul’s deduction would be refuted by subsequent events and the axiomatic basis of that deduction must be flawed. This argument, which is loosely based on a passage in Aristotle’s Posterior Analytics, leads naturally to Jonah’s law of policy-relevant science, which states that humans can only predict the course of history in respect of processes no human action can change and can only change the course of history if our predictions are irreducibly uncertain. This is so because any prediction we make has a hidden contingency: •

• Peter will choose paper (provided no-one uses this prediction to change the course of history)
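
The self-refuting character of such predictions is easy to sketch in a few lines of code. The sketch is mine, not part of the original argument: the prediction rule and the ‘beats’ table are invented assumptions, but the sequence of moves follows Peter and Paul’s game exactly.

    # A minimal sketch of a self-refuting prediction in 'scissors, paper, stone'.
    # BEATS[x] is the bid that defeats x: stone blunts scissors, scissors cut paper,
    # paper wraps stone.
    BEATS = {"scissors": "stone", "paper": "scissors", "stone": "paper"}

    def shared_prediction(history):
        # An illustrative deterministic rule that both players know:
        # Peter is predicted to repeat his last bid (or to open with scissors).
        return history[-1] if history else "scissors"

    history = ["scissors"]
    predicted = shared_prediction(history)   # the knowledge base says Peter will bid scissors
    pauls_bid = BEATS[predicted]             # Paul plays the counter: stone
    peters_bid = BEATS[pauls_bid]            # Peter, using the same knowledge, plays paper

    print(predicted, pauls_bid, peters_bid)  # scissors stone paper
    assert peters_bid != predicted           # the prediction has refuted itself

The shared knowledge base licenses Paul’s deduction and, in the same breath, licenses the move that falsifies it.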


For a rationalist engaged in policy-relevant research, Jonah’s law is very significant. Every prediction we make about the likely effect of policy instruments has a logically irreducible uncertainty. This applies, for example, to predictions about the likely impact of carbon taxes, treaties to control emissions, the relative importance of adaptation and mitigation strategies, and so on. When our predictions go wrong, therefore, we have to ask ourselves whether this is because the theory is refuted, or whether it was simply the effect of Jonah’s law. Indeed, a theory that simulates the recent past too well may be a less useful guide to future behaviour than one which performs poorly, because the former gives undue weight to pivotal accidents of history that might not have occurred, but did in this particular case. In the socio-natural sciences, goodness of fit to a unique time series is, at best, weakly correlated with plausibility.

It is possible to make the predictions of policy-relevant science empirically testable, but there are no objective rules. Each knowledge community must negotiate the standards by which a theory is judged. Often, when scientific theories collapse, they do so not because new evidence comes to light, but because a community has re-set the standard. This is significant. Many of the challenges we face demand a proactive response. If we wait till the evidence is incontrovertible, it may be too late. However, sometimes scientists move the goalposts as their values change. Shifting consensus can lead to scientific revolutions and dramatic ‘paradigm shifts’ - the scientific Phoenix Cycle so ably described by Thomas Kuhn. In these contexts, issues of self-interest, personal values and history are often more important than scientific evidence.

We have to decide whether the discrepancy between observed and predicted behaviour is significant and that decision is never objective, even when quantitative methods are used. The standards set for deciding whether a new drug is safe to release are different from those that determine whether tobacco sales should be discouraged, or elderly women are at risk from violence. Conflicts of interest and values exist on many socio-political scales. To ignore their impact on integrative research would be perverse.

Section 2: Ontological Problems

Systēma is a combination of syn, together, and a form of the Greek verb histēmi, to set up. To systematise is to make something fit together as a whole.
Arne Naess

I can understand but never sympathise with those biologists who still believe that our culture is only a flimsy varnish masking the naked apes we really are. Maybe that I live in an artificial landscape makes me particularly aware of the idiosyncrasy of the cultural world. I see culture everywhere and discern little, if any, untouched nature around me. Culture is our true habitat. We are cultural, not natural beings.
Jan Westbroek

I saw. It passed. I may forget that surge of sense. But it has made the heart’s rich humus deeper yet, merging with all it overlaid to leave more eager and more wise the impassioned wonder of the eyes.
Geoffrey Vickers, In the Train

Comme quelqu’un pourroit dire de moy que j’ay seulement faict icy un amas de fleurs estrangeres, que je n’y ay fourny du mien que le filet à les joindres.

Possibly someone could say of me that all I have made here is a heap of outlandish flowers, that I have furnished nothing of my own but the net that draws them together.
Michel de Montaigne, Essais


Orientation

Once upon a time, a theory was a testable proposition and a method was a procedure practitioners found useful. Humanists started their careers as practitioners (research assistants or demonstrators, say) and were not encouraged to take tenured academic posts until they had something really interesting to say. The doctorate was a jewel in the crown of a distinguished academic. The post-war systems revolution changed all that.

Today theory is a polemic about method. Humanists start their careers as tenure-track lecturers and the person who empties their office wastebaskets may have a doctoral dissertation in prep. Universities house distinguished ‘theoreticians’ (who write about method, but seldom use it) and oafish ‘practitioners’ (who use method, but seldom achieve distinction).

The most extravagant and ill-considered claims that system theory was generally applicable were made by Ludwig von Bertalanffy and the founders of IIASA, the International Institute for Applied Systems Analysis. When management scientists committed to the systems approach tested these claims, they found them baseless. There are not many social systems out there. However, I would emphasise that neither Bertalanffy nor IIASA can be dismissed. I am a particular admirer of Bertalanffy’s vision and case study work, but his manifest ignorance of systematic method beggars belief. Biologists are hardly taught systematic method these days, but when Bertalanffy was writing, they were expected to understand these things.

For an excellent introduction to the mathematical and historical basis of hard system theory see:

Acheson, D (1997) From Calculus to Chaos: an introduction to dynamics. Oxford: Oxford University Press.

How are the Mighty fallen! The quotations come from:

Naess, A (1989) Ecology, Community and Lifestyle: outline of an Ecosophy (trans. Rothenberg, D). Cambridge: Cambridge University Press.

Westbroek, J (1992) Life as a Geological Force. New York: W W Norton.

Vickers, G (1983) Moods and Tenses: occasional poems of an old man. Kendal, Cumbria: Titus Wilson.

The Montaigne quotation is from his Essais. It is famous and usually mistranslated as “I have gathered a posy of other men’s flowers …”. This is what he actually said, taken from the Bordeaux manuscript (1580).

The distinction of Mode 1 from Mode 2 research was made by:

Gibbons, M, C Limoges, H Nowotny, S Schwartzman, P Scott, and M Trow (1994) The New Production of Knowledge: The Dynamics of Science & Research in Contemporary Societies. London: Sage.

Early sociological theories about evolution are found in the work of Karl Marx and Herbert Spencer, all of which are freely available. Biological theories of evolution were explored by:

Lamarck, JB. de (1809) Philosophie Zoologique

and opposed by his younger contemporary Georges Cuvier, a brilliant systematic biologist of the old-fashioned, static type, but a vindictive realist who hounded Lamarck into obscurity. The most important publication on biological evolution, of course, came fifty years later:

Darwin, C (1859) On the Origin of Species by Natural Selection.

Thomas Henry Huxley became Darwin’s spokesman and his work has strongly influenced me. I particularly like:

Huxley, TH (1870) Palaeontology and the Doctrine of Evolution. Presidential Address to the Geological Society.


1. Mode 1, Mode 2 and the Universities

The late forties and fifties brought tremendous growth in the number of unorthodox mature students entering university. The universities expanded to accommodate them and this capacity was maintained by removing barriers to entry in the 60s. School leavers filled places created for war veterans and standards undoubtedly fell. Students were able to matriculate who, before the war, would not have had a chance. This mini-renaissance was sustained into the early 70s by a Cold-War spending boom that created more university places and career opportunities for the new graduates through increased investment in research. The balance between teaching and research shifted dramatically during this period and the trend towards research-led tertiary education was sustained up to the end of the twentieth century. Academics publish more papers per caput, per annum now than at any time in the history of the university system.

By the early 70s it was becoming clear that many of the more successful polemicists, particularly those who promised to extend hard science methods to the humanities, had not delivered on their promises. This was unfortunate because enthusiastic humanists had established successful careers by reproducing these polemics. Although the funding stream was sustained into the later 70s, the promise of novelty eventually became more important than the quality of the work. As the archaeologist Eric Higgs joked, science was like selling soap. A form of semantic inflation kicked in. Methodology became redundant because every methodologist claimed to be a theoretician. System theory, for example, is not a theory at all but the comparative study of system methods. Few people talk about method any more; if we must describe a new method we call it a ‘new methodology’ - as though a new species of fish is an ichthyology.

Much of the research was funded through fixed-term contracts, leading to what became known as the Mode 2 Revolution. In Mode 2, research is carried out in the context of its potential application, and is necessarily interdisciplinary. Many Mode 2 researchers are contract-funded problem solvers with no job security and a career-expectancy of less than ten years. Ambitious academics avoided these contracts by competing for a few figurehead roles. The effect of this on our universities was to drive a wedge between tenured and contract-funded staff. Humanities and social science departments now house a small, but influential cabal of senior staff and a mass of junior academics under relentless pressure to teach and publish. Student recruitment is excellent, so their energies are consumed by teaching and administrative duties. An easy way to win distinction is by publishing relatively modest research exercises supplemented with scholarly analyses of the claims made by fellow theoreticians. These ‘critiques’ usually focus on claim and counter-claim, particularly in respect of ontology and methodology. The process of research (particularly case-study work) is of secondary interest - the polemic literature is the primary focus. By the time they win seniority, academics are simply too busy (and often too inexperienced) to undertake major case studies and either become ‘lone scholars’ (doing excellent, but relatively obscure research) or delegate most of the practical effort to others. The most experienced practitioners, therefore, are those transient contractors who have almost no formal career structure or long-term prospects. At its best the relationship between contractors and tenured staff is a partnership, but often it is parasitic or even downright hostile.

The story in the hard sciences is different, but no less depressing. Universities in northern Europe now find it difficult to recruit natural science students and those they can recruit take longer to train. We no longer allow really incompetent students into university (as we once did), but the median standard (expressed in terms of skills and knowledge, not formal qualifications) is lower. It takes a year longer to prepare a student for research than it did fifteen years ago. The level of research income in the hard sciences is high, but much of the work is undertaken by contractors and the prospects for integrative research across the boundaries of the humanities and sciences are poor because of professional rivalry and mutual incomprehension among tenured academics.

Legislation to give contract-funded researchers the same rights as tenured staff is coming into force that will make universities more circumspect about taking contractors on. Auditing procedures have been developed to stop universities over-trading in research markets by committing staff to more research hours than their timetables can sustain (a regrettably common practice). The most probable impact of these reforms is that universities will shed contractors. There is no conspiracy here. To create a stable career structure for researchers there must be a trade-off between the level of commitment to the individual contractor and the number of jobs.

Admitting unorthodox mature students is a great way of creating a renaissance, but admitting unorthodox school leavers is not. For every brilliant anarchist who rises to prominence, a hundred drifters wander in and out wasting everyone’s time. As the proportion of school leavers entering university approaches 50%, the median level of innate ability must fall and universities rightly accommodate this by providing vocational training geared to their students’ ability and relative immaturity. As research activity increases, the impact of any single project on society as a whole is naturally lessened. As the number of papers hitting the library shelves explodes, the likelihood you will find the jewel you seek must diminish. Academics publish more papers sooner, so quality has fallen too - there are fewer jewels in proportion to total output than there once were.

So the post-war renaissance is over. We tried and failed to sustain it with extra funding and incentives but somehow it all slipped away. The effects of that renaissance will be felt long after the baby-boomers have died. My children, for example, have a stakeholding in a tertiary education system to which their parents had access by accident and from which their grandparents were excluded. However, as the costs of tertiary education rise and its benefits are eroded by democratisation, that educational system has changed.

Forty years ago universities were institutions that provided academic services (education to degree standard), trained professional academics and awarded higher degrees. Training colleges specialised in vocational training for managers and technologists but did not usually award academic degrees. Artisans generally learned their trades through craft apprenticeships. Today students who would once have taken an apprenticeship or attended a college now go to ‘universities’ that specialise in vocational training and professional accreditation. Most of these also award higher degrees, so the distinction of academic from vocational training has been eroded. Traditional apprenticeships are rare.

Courses in the new universities are often reduced to short modules. Regulators demand that desired ‘learning outcomes’ be formally specified and audited. The ability to integrate what has been learned in one module with that learned in another is difficult to measure, so strategic thinking skills are often neglected. The students are modularised too - designed to be interchangeable, like the components on a production line. This produces uniformly well-trained personnel for positions in junior management but is a poor way of training heterodox thinkers or, for that matter, of teaching the embodied knowledge or praxis that is the hallmark of a skilled artisan. Most universities are forcing-houses of conventional knowledge.

We still need specialist academic institutions, of course, and an informal ‘Ivy League’ of elite universities now meets that demand. Some of these are twentieth century ‘red brick’ foundations; others are older. Despite their best efforts at democratisation, Ivy League institutions are not as accessible as mainstream universities. Selection procedures designed to ensure open access usually reward the conventionally brilliant (and very young) at the expense of the unorthodox outsider. In this way universities have settled into a more conservative mode - as institutions often do in quiescent phases of the Phoenix Cycle.

2. Generalising System Theory

The concept of a system as a ‘drawing together’ or synthesis is not a twentieth century idea. It is found in the mediaeval theory of universals - a Latin word that means much the same thing. It is also found in the biological science of taxonomy (systematics). Early modernists like William of Ockham explored the social construction of these systems and their neo-Aristotelian approach informed Linnaean taxonomy. However, both types of system were, in their original conception, static and time-invariant.

The first explorations of system dynamics were made by early researchers on celestial mechanics, notably Galileo, whose Dialogo sopra i due massimi sistemi del mondo (dialogue concerning the two great world systems) was published in 1632. Newton and Leibnitz perfected dynamic methods around 1670. The application of probability theory to thermodynamics extended dynamic methods in the nineteenth century. Evolutionary theories in sociology and biology belong to the same period. Topological approaches to dynamical systems in mathematics originated in the late nineteenth century.

The Second World War brought investment in the study of managerial systems using mathematical and hard science methods. The development of the atomic bomb, the space race, computers, cybernetics and operational research, an approach to managing industrial and commercial systems, all date from this time. Computational methods are especially significant because they enabled scientists to tackle problems that would otherwise be intractable. The emergence of ‘system theory’ as a recognisable movement owed much to the influence of these researchers and gratitude for their contributions to the war effort. However, in peace-time they learned that, without a rigidly hierarchical structure, the integrity of social systems could not be assumed. They published their results and the systems movement began to split into hard and soft subsets about a decade before the universities began to experience their anti-science backlash.


The complex systems movement was in the vanguard of the Mode 2 revolution. It was well placed to exploit changes in science funding policy after WWII because system methods were already being used in engineering, physics, chemistry, applied biology and economics. Consequently system theory was a banner under which many could unite without making concessions to intellectual unity. System theory’s success in attracting funding gave the movement momentum.

The argument that system methods could be generalised to environmental science and the humanities was certainly plausible and putative revolutions were initiated in geography, sociology, anthropology, archaeology, systematic biology and environmental science. Throughout the 60s and 70s any natural scientist could lecture social scientists about the obvious advantages of the scientific approach. Funding agencies cheered them to the echo and ambitious young graduates established successful careers as the interpreters of scientific (usually positivist) method. This was an absurd situation - natural scientists who knew next to nothing about society, training humanists who knew next to nothing about the sciences.

Much of the early investment was in data-rich case studies and many of these databases are still unpublished forty years later. Part of the problem was inept database design, but that was not the whole story. Humanists and social scientists also discovered that system method did not generalise as well as its polemicists had predicted. Many were unnerved by this and, instead of announcing it as an empirical result (as some management scientists had), took the money and kept quiet. This error of judgment left them vulnerable to rhetorical denunciation. Although many classically trained humanists had already expressed reservations about systems polemic, funding agencies were pathetically ill-equipped to sort the wheat from the chaff. It was fifteen years before they realised that the number of useful case studies was too small to justify the level of investment. No sooner had the plug been pulled on the first wave of projects than chaos theory came along.


Chaos theory (now called ‘organisation theory’) was another damp squib. After twenty years of well-funded research and hundreds of popular books about fractals and strange attractors, everyone now knows that there never was a coherent theory of chaotic systems. The principal impact of these revolutions on mainstream humanistic research was jargon. Where once we were merely confused, now we speak of the ‘non-linearity of socio-natural systems’ to show we are confused on a higher level.

By the late 1970s most of the classically trained humanists of the 50s had retired. Many (but not all) of their places had been filled by soap-salesmen deeply committed to hard systems approaches and by deconstructionists equally convinced they had nothing to offer. A decade or so later these two communities (and their students) were still locked in pseudo-gladiatorial combat while the rest tried to clean up the mess and complete a few case studies.

Gradually the balance of funding for science-based humanistic research shifted from data-intensive work and methodology to small, more or less conventional exercises focussed on a forensic model of science. As computers became more sophisticated we stopped teaching analytical and systems method altogether. Scientists were white-coated boffins who listed the contents of ancient rubbish heaps, prepared graphics and surveys, processed data (preferably using a fully-automated database management system) and compiled marginalia. The interpretation of this technical stuff was left to tenured academics, most of whom weren’t even interested.

It is not clear whether communities that encourage integrative approaches to cultural ecodynamics are blazing a new trail or re-opening an old one. What is clear is that there is no going back. We have little to learn from fashionable commentaries except that it is easier to make mistakes than rectify them. The need for a truly general theory of complex systems is greater now than ever, but it is unlikely that practitioners will ever overcome the entrenched positions of those whom the status quo has served well.


3. Systems Ontology – Old and New

Mediaeval philosophers knew that there were things and attributes of things. This ontology poses no insuperable difficulties in practice, but is a philosophical minefield. The attribute of redness, for example, implies the existence of a class of red things. That class is a thing. If we assume such classes are real (i.e. are independent of human understanding) every object in the universe must be a member of it or not. If two people disagree about the redness of some object, at least one of them must be wrong (or lying). This nonsense licenses endless arguments about whether a fox really is red. Ontological power struggles paralysed the universities in the thirteenth and fourteenth centuries and are the reason scholasticism, once a vibrant, creative method, became synonymous with scholarly obfuscation.

Debates about the reality of classes are pernicious. This is so whether the class under scrutiny is the set of all virtuous acts or of small businesses. For this reason modernists accept that these classes are simplified approximations to reality - they provide a useful basis for language but only exist outside the mind of an individual by negotiation and common consent.

Algebra heralded a scientific revolution. It accommodated the ancient distinction of things from attributes but added a new category, the process, that could change attributes. Classical and mediaeval scholars found it very difficult to handle processes as objects on a par with attributes and things. Ancient scholars could argue about the shape of a planetary orbit, but made little progress in their understanding of the process that caused position to change.

Newton and Leibnitz invented calculus, the theory of rates of change, independently in about 1670. Probability theory, which dealt with expectation and belief, had emerged in the 1660s as a mathematical method for reasoning a fortiori (deciding the stronger or more persuasive argument among competing propositions). These were used by mathematicians to predict the outcome of dynamic processes like games of chance from the outset and were later adopted by physicists to study thermodynamic systems. These algebraic tools were developed in a single, astonishing decade and scientists began to develop a theory of dynamic systems.

The ontological problems that troubled mediaeval thinkers were not solved this way. Scientists just lost interest in those problems - dismissing them as the preoccupation of asinine theologians. By the eighteenth century some philosophers had drawn a sharp line between science and metaphysics. Many claimed that science dealt with objective facts about things while philosophers dealt with metaphysical nonsense about hermeneutics and the reality of classes. This vanity led, in the nineteenth century, to the positivist thesis that science was (or should be) value free.

By the end of the nineteenth century the belief that the natural sciences dealt with facts was deeply embedded and many scientists became realists (they believed scientific knowledge was objectively real and scientific facts independent of human beliefs). Many physicists actually claimed you could reduce biology and sociology to extensions of physical mechanics, at least in theory. Newton’s laws were deterministic, so if you knew the positions and momenta of every atom in the universe you could, in principle, deduce the evolution of ants or the sinking of the Titanic. This was the reductionist thesis - the belief that all sciences could be reduced to Newtonian physics. Unsurprisingly, social and biological scientists were reluctant to accept this. There is more to social dynamics than a time-invariant process mechanistically churning out history. Humans have free will and, in exercising that freedom, sometimes change the course of history. We need to be able to discuss historical processes that are themselves constrained and modified by pivotal epiphanies that change the way people perceive reality and behave.

That battle was won in the first three decades of the twentieth century. Physicists and mathematicians are now satisfied that we cannot reduce all human knowledge to a single axiom set. Yet the practice of science has not changed to accommodate this insight. A simple approach would be to accept that named classes are epistemological constructs - categories imposed on reality that approximate past experience and allow us to co-operate in acting upon it.

Some of these categories are processes which, over certain intervals of space and time, behave in an understandable and predictable way. However, from time to time, events occur that change our understanding. In natural science contexts these epiphanies may be rather rare. They are much commoner in the life sciences and almost ubiquitous in the human and social sciences.

The events that change our understanding are registered in the social domain. I am not saying that the atom is socially constructed; it obviously corresponds to some external reality. I am saying that our understanding of what it is to be an atom (and the impact of that understanding on the process of natural science) was contingent on events that modified the scientific knowledge-base. An epiphany occurs when reality judgments change. The perfection of the light microscope, for example, was an event that alerted biologists to attributes of epidemics that had hitherto been hidden. Similarly, a chance migration event between a remote village and a distant city may open lines of communication between them that modify subsequent migration patterns. So our new science of cultural ecodynamics builds conceptual models from four categories of object. They are:



• Systems - recognisable, more or less persistent collections of objects
• Attributes - objects that describe other objects
• Processes - objects that transform the values of attributes
• Epiphanies - objects that create, destroy or transform whole categories of system by changing our understanding of their essential nature

The introduction of epiphanies into system ontology allows us to bring scientific method to bear on policy and choice. We can even try to engineer events designed to alter the course of history by changing perceptions. This work is never value-neutral - there are conflicts of interest and ethical dilemmas to resolve.

In practice the distinction of an epiphany from a process is socially constructed. A process describes normative dynamics, where conceptual models are more or less fixed and time-invariant. In contrast an epiphany refers to adaptive dynamics - a change of conceptual models in response to a perceived threat or opportunity.

If my portable computer breaks on an important trip, its essential characteristics as an individual element of a set - the set of portable computers - are changed. The resulting block of plastic and metal is no longer a portable computer, but occupies a null state. However, that does not necessarily imply that the set of portable computers itself has been destroyed or changed. Clearly, then, a process can, in principle, create or annihilate an individual system without changing conceptual models. However, it is equally possible that a broken computer challenges my beliefs about the class, for example convincing me that portables are not as dependable as I had thought. That is an epiphany.

A poor family that invests its savings in a portable computer may perceive this as a momentous decision - a response to opportunities or threats posed by new educational trends and new technology. Viewed on the scale of a large population of families, however, this seems more like a normative process. From time to time families buy computers. The distinction of an epiphany from a process is determined in part by prior beliefs and in part by the characteristic spatio-temporal scales over which observers operate.

Sometimes an epiphany occurs de novo as a creative flash of insight, but more often people encounter the new ideas through communication and discourse. Often the recipient is aware of the alternative model, but chooses to ignore it until circumstances force a re-appraisal. This suggests epiphanies can sometimes be facilitated.
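
These distinctions can be caricatured in a few lines of code. The sketch below is mine and purely illustrative (the attribute names and the ‘portable computer’ model follow the example above but are toy assumptions): a process changes the values of an instance’s attributes while the conceptual model stands still; an epiphany rewrites the model itself.

    # A toy rendering of the categories: a 'system' is an instance with attribute
    # values; a 'model' is the set of attributes a community expects members of
    # the class to carry.
    laptop_model = {"class": "portable computer",
                    "expected_attributes": {"portable", "dependable"}}

    laptop = {"class": "portable computer",
              "attributes": {"portable": True, "dependable": True, "working": True}}

    def breakdown(system):
        """A process: it transforms attribute values; the conceptual model is untouched."""
        system["attributes"]["working"] = False   # the instance enters a 'null state'
        return system

    def epiphany(model, discredited_belief):
        """An epiphany: the event changes the model itself, not just one instance."""
        model["expected_attributes"].discard(discredited_belief)
        return model

    breakdown(laptop)                        # one portable computer stops being one
    epiphany(laptop_model, "dependable")     # ...and convinces me portables are not dependable

    print(laptop["attributes"]["working"])        # False
    print(laptop_model["expected_attributes"])    # {'portable'}

Whether the breakdown is registered as a process or as an epiphany depends, as the text argues, on prior beliefs and on the scale at which the observer is working.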

4. Valorisation

All sensory experience comes to us from an arena that represents our immediate neighbourhood. Even during a transatlantic telephone conversation the information we gather comes from a physical object (a telephone) close to us. We must convert those sense data into information. I borrow a French word for this process; it is ‘valorisation’. To valorise is to enhance or give value to (some latent property) - to make manifest.

We can think of valorisation as the operation of mental functions that reduce a set of sensory inputs to a set of recognisable objects. The domain of that function is perceived reality and its range is the set of possible cognitive structures. The image of reality in the cognitive domain is a formal ontology. Some of these functions re-present sense data as categories - recognisable systems. Others convert sense data into attributes, processes or epiphanies. We each have a set of valorisation functions that enables us to make sense of the world.

A small mental effort is required to apprehend a bundle of sensory data and translate it into meaningful information - to recognise a coffee pot, for example. There are many valorisation functions and the ones we actually use are selected almost unconsciously to suit immediate purposes. If I walk through the kitchen with coffee in mind, I valorise coffee pots. On other occasions I may focus on the pile of dirty dishes. By an effort of remembering I can sometimes recall information I had not consciously gathered (where did I last see my keys?) but can seldom recall the totality of sensory experience. There is too much data to handle, so I reject or baulk most of it. The best I can do is to valorise experiences and store the reduced summary as remembered information. The process of remembering requires us to recall those observations and select according to current need. This takes time and effort.

It is easy to valorise the path I am running along and the road beside it. If the path is smooth enough there may be spare processing power to valorise the view, to note the state of the moon or the colour of the sky. Running fast over a rough, poorly lit path or crossing a busy road is more demanding. I may have to slow down and take extra care. It takes time to valorise information, but arenas have no time depth. Consequently, I must draw information from short-term memory. In effect, valorisation occurs in a spatio-temporal envelope (the universe of discourse or, more shortly, the universe) covered by my current arena and a set of remembered arenas. The size of that universe is determined in part by the limitations of my sensory equipment (supplemented with instruments, where appropriate) and in part by the dynamism of the object itself. It is relatively easy to decide that there is a bee in the kitchen, for example, rather harder to pick it up because the creature is capable of rapid movement and unpredictable behaviour.

We valorise objects in the present and the immediate past, but generalise them to the future by means of normative expectations - adjusting behaviour in the present, anticipating future experiences by extrapolation. The impression we have of instantaneous action and reaction and a smooth transition from the past into the future is a cognitive skill learnt in infancy. One can entertain a baby by hiding behind a book and popping out to shout, “Boo!” but a four-year old is less impressed. As adults we only become aware of the cognitive subterfuge when we are jolted off automatic pilot and must slow down and think. These jolts are epiphanies - objects that change our valorisation functions, causing us to see the world differently.
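
For readers who like things spelt out operationally, a valorisation function can be caricatured as a filter from a rich arena of sense data to the handful of objects the current purpose makes salient. Everything in the sketch below - the arena, the purposes and the relevance table - is an invented toy, not something the essay specifies.

    # A toy valorisation function: reduce a rich arena of sense data to the few
    # recognisable objects that the current purpose makes salient.
    arena = ["coffee pot", "dirty dishes", "window", "keys", "cat", "kettle"]

    relevance = {                         # which objects each purpose picks out (invented)
        "make coffee": {"coffee pot", "kettle"},
        "tidy up":     {"dirty dishes", "cup"},
    }

    def valorise(sense_data, purpose):
        """Select, and so give value to, only the objects relevant to the purpose at hand."""
        wanted = relevance.get(purpose, set())
        return [obj for obj in sense_data if obj in wanted]

    print(valorise(arena, "make coffee"))   # ['coffee pot', 'kettle']
    print(valorise(arena, "tidy up"))       # ['dirty dishes']

The point of the caricature is the information loss: most of the arena is baulked, and only the reduced summary is available for remembering later.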

5. Systems and Ontological Problems

Systematic method is neither a twentieth century invention nor a product of the ancient world, but a persistent tradition of knowledge creation that has been explored and augmented over millennia. For a hard scientist the natural way to introduce systematic ideas would be to solve a problem - demonstrating the method’s power with a pedagogic exercise. I am going to do that here, but will ignore some of the newer, more revolutionary systems ideas and introduce a few concepts from mediaeval logic. My purpose is to help you understand the limitations systems practitioners worked under five or six centuries ago.

At the heart of every application of systematic method is a modelling exercise in which sensory experience is re-expressed in a formal language. Mediaeval and early modern systematists did not use processes or epiphanies - they had things (I call them systems) and attributes. The values assigned to attributes can be used to allocate objects to named classes. Thus, as I write, an object of the class human is sitting in the shade of an object of the class plum tree. Nick Winder (a system?) has the attribute attitude with the value:

• attitude(Nick Winder) = sitting

Similarly:

• activity(Nick Winder) = writing
• location(Nick Winder) = shade(plum tree)

So the value of one of my attributes is equal to that of one of the attributes of another object (the area shaded by the plum tree). There are many objects in the shade of the plum tree (the upturned water tank on which I write, the stool I sit on, some paper, pens, a cup, …). In some respects the shade of the plum tree is a system, a little arena. That patch of shade has attributes too, one of which might be:

• occupant(shade(plum tree)) = Nick Winder
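
This notation reads naturally as a tiny attribute–value store. A minimal sketch (the dictionary layout and helper function are mine, purely illustrative):

    # Objects as named things; attributes as mappings from object to value.
    attributes = {
        "attitude": {"Nick Winder": "sitting"},
        "activity": {"Nick Winder": "writing"},
        "location": {"Nick Winder": "shade(plum tree)"},
        "occupant": {"shade(plum tree)": "Nick Winder"},
    }

    def value(attribute, obj):
        """Return attribute(obj), e.g. value('attitude', 'Nick Winder') == 'sitting'."""
        return attributes[attribute][obj]

    # The same name can appear as a system in one entry and as the value of an
    # attribute in another.
    assert value("location", "Nick Winder") == "shade(plum tree)"
    assert value("occupant", "shade(plum tree)") == "Nick Winder"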


This is why, in earlier essays, I spoke of objects rather than maintaining a hard distinction of systems from attributes. Objects can shift from one category to another with reckless ease - attributes become systems, systems become values of attributes, and so on. Perhaps the shade of the plum tree is a system, or perhaps not. As the sun moves, so too does the patch of shade - it would be nice to invoke a process, but these haven’t been invented. As the shadows grow and the evening becomes chilly, this patch of shade will become part of the general dusk and the shade of the plum tree ceases to be recognisable. I will stop writing and prepare my dinner. So the only thing I can say about the shade of the plum tree is that it is transient - now you see it, now you don’t. The plum tree itself persists, but the shade of the plum tree does not. Perhaps the plum tree is a system; the area it shades is an attribute thereof.

The list of objects shaded by my plum tree is boundless. It shades woodlice, grass stems, leaf-litter and nettles as well as my summer office. The world of sensory experience is too rich to capture in a systems model, so I must use my judgment, restricting the list to those objects that seem germane and characterising them as seems appropriate. The shade of the plum tree is a mental construct that exists in your mind and mine by negotiation and common consent.

Systems modelling - viewed through contemporary eyes - is clearly an exercise in naïve set theory. First we designate some experiences as systems. Then we define sets (genera) of system called attributes and subdivide these genera into sub-sets (species) called values. The attribute ‘length’, for example, represents the genus of all systems that have a length, and the value ‘0.657’ denotes the species of system in which this particular length is manifest. Thus every named class of system can be represented without loss of generality as the intersection of a number of sets, each of which corresponds to the value of an attribute shared by all members of that class. These values are essential (i.e. necessary) for membership. Other attributes may have values that vary either through time (they are dynamic) or between members of the class. They are accidental - possible but not necessary. The essence of the class is the list of values that unites it. The accidents of the class individuate members. Every unique system can be represented by the intersection of essential and accidental sets.

The way we impose that structure on experience is expedient. We define and name the objects that best serve the purpose at hand. Indeed, the modelling process seems so vague and descriptive as to be no more than a technical formalisation of language development, and yet I assert that it is scientific and subject to direct empirical evaluation. Each model is shared with a community of modellers whose knowledge base it represents. That model is made empirically testable by shifting back and forth between the processes of classification (modelling) and identification.

A small subset of the essence is proposed as diagnostic or key values. These refer to attributes whose values are not only essential but also sufficient to identify a sensory experience as a member of the class. The logical relationship between key values and essential values can be expressed syllogistically:

• If key then class
• If class then essence

We test this theory by using the key values to identify class members. If these members also possess the superset of essential characters, the model (theory) is sustained. If not - if some essential attributes are manifest in the wrong values, or not valorisable at all - the model is refuted. Either its essence is wrongly defined, or inappropriate key characters have been selected. To recognise this is an epiphany that helps us realise that one or more named categories are logically incoherent. These heuristics enable people to search for a strong (or natural) knowledge scheme in which a few, well-chosen key values unlock a large store of essential knowledge. Once I am told that an object is a cow I know it has hooves, probably has horns, is a vegetarian, and a ruminant; a warm-blooded quadruped, … Thus the name ‘cow’ (or the handful of attributes that allow me to assign that name) is the key that unlocks an information-rich knowledge base. You do not need to see the cow over the garden wall to infer the values of many of its attributes; they are logically entailed by the name.

Thus systematic method requires us to invoke named classes of object and posit sufficient and necessary conditions of membership. These conditions must be such that a wider community of modellers can test them reliably, but need not be universally appreciated. Often a high degree of technical competence is required to make reliable diagnosis and the process cannot be automated.

Biologists who specialise in these methods understand that some groups are notoriously difficult. Nematode worms, orchids and mosses, for example, are tricky groups. Identifying these species calls for more than mere technical skill; one has to develop a form of cognitive intuition that some taxonomists call Jizz. Jizz is a gestalt ability that comes with practice and disciplined contemplation. It often manifests as a delightful epiphany - where once you needed a hand lens, suddenly the species leap to the eye. Jizz is very useful for the casual naturalist and indispensable for the specialist, but an incomplete basis for a scientific approach. The scientific community must also define essential and key characters and check the model’s systematic underpinning. If a technically competent, well-motivated specialist cannot distinguish the members of a class reliably using key attributes alone, the class itself cannot be valorised. To persist with such a model is to evade the discipline of empirical testing. Identification is the empirical earth-wire of systematic modelling. Whenever a community invokes a class that cannot be tested this way, it slips beyond the domain of science into mysticism. Indeed, one could argue that the original purpose of systematic method was to define and defend the boundary of empirical science from mysticism.

Systematic method is a way of solving ontological problems - of deciding what named categories of thing might reasonably be said to exist. It is not a way of deciding what things exist because the method refers to named categories. It is not even a way of deciding what categories really exist - merely a tool for determining what categories might reasonably be said to exist given our current state of knowledge. Three conditions must be satisfied for a category to be empirically defensible. The members of the class must be more or less persistent and recognisable. The key must be smaller than the essence of the class. Every candidate for membership must be decidably in or out. There is scope here for a little special pleading - fragmentary fossils may be equivocal, for example, but only because they are fragmentary. We should not cheat.
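
The test cycle just described - identify by key, then check the rest of the essence - can be made mechanical. The cow attributes and helper names below are my own illustrative assumptions; the point is only the shape of the test: the key must be a proper subset of the essence (otherwise the class is sterile), and every keyed candidate must display the full essence (otherwise the model is refuted).

    # A sketch of systematic testing: 'if key then class, if class then essence'.
    cow_class = {
        "key":     {"horned", "cloven-hoofed", "ruminant"},
        "essence": {"horned", "cloven-hoofed", "ruminant",
                    "vegetarian", "warm-blooded", "quadruped"},
    }

    def is_sterile(cls):
        """A class whose key is not strictly smaller than its essence entails nothing new."""
        return not (cls["key"] < cls["essence"])

    def identify(candidate, cls):
        """Diagnosis: the key values alone are sufficient to assign the name."""
        return cls["key"] <= candidate

    def model_refuted(observations, cls):
        """If any keyed candidate lacks part of the essence, the model is refuted."""
        return any(identify(obs, cls) and not (cls["essence"] <= obs)
                   for obs in observations)

    observations = [
        {"horned", "cloven-hoofed", "ruminant", "vegetarian", "warm-blooded", "quadruped"},
        {"horned", "cloven-hoofed", "ruminant", "carnivorous", "warm-blooded", "quadruped"},
    ]

    print(is_sterile(cow_class))                   # False: the key is smaller than the essence
    print(model_refuted(observations, cow_class))  # True: the second 'cow' lacks part of the essence

A class like the soul, discussed below, fails the first check: its key and essence coincide, so the definition entails nothing beyond itself.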

Systematic method, then, is a tool for making ontologies empirically testable by ensuring categories are unambiguously recognisable and syllogistically fecund. It provided ancient scientists with their first line of defence against mystical realists who tried to impose their beliefs on others. To exist in the world of sensory experience is not the same as to be real. Reality is that which is independent of human knowledge and belief. Existence refers to a class of objects that can be valorised in its instances. Attempts to probe the reality beyond sensory experience sometimes throw up categories whose key and essence are the same. These categories may or may not be real, but they do not exist.

To understand why this method was once used to mark the natural boundary of empirical science, think of a mystical object whose existence is not grounded in sensory experience (the soul, perhaps). We cannot observe a soul directly, but many people have come to the conclusion that every human has a mystical essence. Souls resist systematic method. You cannot observe them directly but must infer them from the existence of a human being. The key to identifying a soul in the mundane, physical world of human sense is that it resides in a human. The essence of a soul is that it underwrites that being’s humanity. The key and the essence of the class of souls, therefore, are identical. When we reduce this to a syllogism we get:

• If human then soul
• If soul then human

The term ‘circular argument’ is often used rather imprecisely. Here we see that it is not so much circular as sterile: its axioms entail no corollaries beyond themselves. Sterility is not an exclusively theological property. In ecology, for example, a disturbance is sometimes defined as that which creates spatial pattern in an ecosystem. The class of disturbances, thus defined, is trivial. Its essence is also its key.


6. What’s What and Who’s Who?

The form of science described in the previous essay - the science of ontology - had its origins in Aristotle’s work. It was used by late mediaeval and early modern philosophers and persists to the present day, but is emphatically not a complete description of contemporary science. I first explored ontological science as a child studying natural history and found it utterly absorbing. Since then I have encountered it again and again. I even have professional experience of this work. For a decade I worked as an archaeological scientist, creating databases, classifying, identifying and listing. A lot of geology, Quaternary research, humanistic research and forensic science functions the same way.

Standards of peer review within these, the observational sciences, are notoriously patchy. Reviewers sometimes fall into the scholastic trap - setting their own authority and the opinions of experts above the empirical evidence. Scholasticism is the approach to biblical hermeneutics favoured by the Roman Catholic church and the scholastic trap is the principal reason its scientific arguments often seem so quaint and anachronistic.

Perhaps I should illustrate that claim - if only to avoid the charge of Catholic-bashing. The church hierarchy has a deep-rooted culture of sexual repression and unshakeable faith in its own infallibility. This combination of realism and sexual guilt often produces bizarre theology. The Genesis story about the fall of Adam and Eve, an uncomplicated tale about the lust for God-like knowledge of good and evil, has become an allegory for sexual desire. The ovum, it seems, is infected with original sin and the mystical essence of its humanity by contact with sperm. Consequently, preventing a fertilised ovum from implanting is infanticide because the fertilised ovum is really a human being - essentially identical to you or me. This is obviously the product of an ontological power struggle between dogmatic factions in the church hierarchy - it assigns values to things rather than to processes.


For a modernist, there is a clear difference between preventing a fertilised ovum from implanting and dislodging a healthy, twelve-week foetus. That difference cannot be found by analysing ancient creation myths, but by studying the effects of contraception in the light of Christian values. The ethical corollaries of contraceptive technology relate to sentience and the capacity for emotional or physical suffering. The mother’s sentience and ability to suffer can be presumed. So too can that of doctors, nurses and unlicensed abortionists. However, the sentience of the foetus is equivocal because, in the very early stages of development, it lacks a functional nervous system. Modernism is focussed on corollaries - on effects rather than causes.

The effects of contraception are now well understood. If condoms and clean hypodermics were freely available in all Catholic countries, we could reasonably hope for a reduced incidence of HIV and unwanted pregnancy, but cannot expect them to be eliminated this way. Forget about microscopic pores in the latex - there’s a much bigger risk. Condoms aren’t very durable and sometimes they slip off. However, people do a lot of things priests disapprove of and sex is fun. Safe sex is not completely safe, but is safer than unsafe sex. Monogamy, of course, is safer still, but isn’t always fun. More effective and humane methods of contraception would reduce the demand for infanticide. They can also enhance the health and nutritional status of poorer families by allowing parents to regulate their fertility without denying themselves sexual fulfilment. Then monogamy can be fun too.

Ontological method - the science of what’s what? - has many contemporary applications, but often degenerates into power struggles focussed on personal authority, common decency and common sense. The rights of dissenting individuals are commonly eroded by the dogmatic pronouncements of experts. Middle-aged men have been publicly destroyed for sexual indiscretions. Prostitutes, adulterers and homosexuals have been imprisoned for acts of ‘depravity’ so tenuously linked to the suffering of third parties as to be victimless crimes. From time to time law-courts and journalists ‘discover’ - with poorly feigned outrage - that the link between a forensic procedure or delinquent act and the crimes with which it is associated exists only in the minds of a few professional ‘experts’.

Ontological method works reasonably well as long as the empirical evidence is used to test the classification, but very badly whenever the categories are imposed on the evidence negligently. Vulgar morality and tinpot experts can easily destroy whole families, so we cannot depend on Jizz or common sense. The classes need a valorisable essence and key. The key must be smaller than the essence and the set of misidentified cases must be acceptably small. By this I mean that the probability of misdiagnosis must be ethically acceptable, given the impact on those directly involved in the case at hand - not nameless, nebulous third parties or the ‘common good’. Discouraging widely felt desires by pillorying individuals turns citizens into dissidents and creates markets for violent criminals.

When they contend with scientists, humanists naturally appeal to their experience of science. Humanistic applications of science are based on an anachronistic, forensic model, typical of an age when you didn’t call a doctor for a cure, but to find out why you were dying. From the seventeenth century onwards many scientists became less interested in classification, description and diagnosis and more interested in processes. This was an epiphany that changed the essence of science, but that change was felt differentially in the observational and experimental sciences. Experimental scientists were not archivists cataloguing the species and genera of experience. They began to intervene - at first experimentally, but later as engineers trying to make the world better in some respect. Antiquists are less easy in these progressive environments, so the balance of power between conservative and progressive tendencies shifted as scientists moved from observational to experimental work. In engineering, however, axioms again become significant, this time as a basis for executive action. Engineers are often more conservative than pure scientists.


Many experimental scientists and engineers simply do not understand the role of ontological method in the observational sciences and so have lost the ability to distinguish antiquism from modernism, not just at the very fine level of unconscious human actions, but also as broad philosophical and cultural tendencies. They believe humanists have built a straw man - a caricature of science to sustain their polemic. Like the humanists, they too are working with an anachronistic model of those with whom they contend - a model that belongs to the days when scholars used axioms to test the orthodoxy of value judgments and experience rather than using value judgments and experience to test the wisdom of the axioms.

Modern scholars try to identify consistent associations of interest and belief that locate human actions in their wider historical contexts. Many theologians, for example, believe human language is too transient and intransigent to capture eternal verities. The task of the theologian is not to serve as guardian of ancient orthodoxy, but to look beyond the words to the revealed wisdom that inspired them. They must be prepared to re-express that wisdom in a form accessible to the current generation and, if necessary, reject ethically untenable beliefs - even those held by Nobel Prize winners, influential politicians or mediaeval saints.

There are antiquists imposing their values, ontologies and authority on others in every walk of life, but intellectual communities as a whole have changed dramatically in the last century. Those changes were not simply progressive - causing old approaches to give way to new ones. The pendulum-swings of intellectual fashions were attenuated in the aftermath of two World Wars, allowing conventional disciplines to diversify, at least in the capitalist democracies of the West. Consequently, one can see methodological and theoretical predispositions in contemporary disciplines that resonate across space and time to refute conventional taxonomies. This intellectual diversity - a by-product of two demographic catastrophes thirty years apart - is a precious resource that should be conserved and managed for the good of all.


Section 3: Space-Time and Understanding

Time is why everything doesn’t happen at once. Space is why everything doesn’t happen to you.
Anon.

Life is carried forward in a stream of constantly changing situations. Yet every new morning is not totally surprising. Enough remains from yesterday for us to feel some degree of security. It is worth-while to name people and things and even certain types of events and situations because they are with us over periods of time or repeat themselves with some frequency. Natural language has been born out of the simple and nearby regularities we experience. It is less powerful in its ability to summarise in a few words what is complex and remote. And it scarcely helps in predicting situations to come, be they simple or complex, close or remote.
Torsten Hägerstrand

Orientation

The geographer Torsten Hägerstrand developed a conceptual approach to space-time behaviour known as time geography. Hägerstrand described time geography as an ontology - a language and conceptual structure for describing experience. It is not a formal method, but a way of looking at the world. I will describe Hägerstrand’s ideas using a slightly more rigid terminology than his because I need to be precise enough for formal modelling work.

Hägerstrand himself is not a systems modeller. Like many of his generation he felt that the early polemics of system theory were suspect and kept a little distance. However, I need to extend the time geographical ontology to describe cognitive structures. Hägerstrand anticipated this extension in the paper he wrote in 1987, which I quote at the beginning of this section, but his primary interest was in the interpretation of space-time patterns by geographers, not anthropology or sociology. Keith Basso has written a beautiful book from an anthropological perspective that bears on similar issues.


Finally I introduce the theory of appreciation developed by Sir Geoffrey Vickers and make connections with Arne Naess’ ecosophy. Vickers was a system theorist with a particular interest in policy. Naess was a classically trained philosopher. The degree of convergence between these three thinkers is remarkable and largely explained in educational terms. Hägerstrand and Naess derive their sense of connectedness to the natural world from Scandinavian romanticism, a tradition I too have come to admire, albeit as an outsider. All three were simply too well-educated to be over-awed by spurious claims of novelty.

Among the references consulted when writing this are:

Basso, KH (1996) Wisdom Sits in Places: landscape and language among the western Apache. University of New Mexico Press.

Flood, RL and MC Jackson (eds) (1991) Critical Systems Thinking: directed readings. Chichester: Wiley.

Hägerstrand, T (1975) Survival and Arena: on the life-histories of individuals in relation to their geographical environment. The Monadnock, Clark University, Vol. 49, pp. 9-29.

Hägerstrand, T (1985) Time geography: focus on the corporeality of man, society and environment. In: The Science and Praxis of Complexity. The United Nations University. pp. 193-216.

Hägerstrand, T (1987) On levels of understanding the impact of innovations. In Fischer, MM and M Sauberer (eds), Gesellschaft, Wirtschaft, Raum: Beiträge zur modernen Wirtschafts- und Sozialgeographie: Festschrift für Karl Stigelbauer. Vienna: Arbeitskreis für Neue Methoden in der Regionalforschung. pp. 46-50.

Hägerstrand, T (1988) Some Unexplored Problems in the Modeling of Culture Transfer and Transformation. In Hugill, PJ and DB Dickson (eds), The Transfer and Transformation of Ideas and Material Culture. Texas A&M University Press.


Naess, A (1989) Ecology, Community and Lifestyle: outline of an Ecosophy (trans. Rothenberg, D). Cambridge: Cambridge University Press.

Vickers, G (1965) The Art of Judgment: a study of Policy Making. London: Chapman and Hall.

Vickers, G (1968) Value Systems and Social Processes. London: Tavistock Publications.

Vickers, G (1970) Making Institutions Work. London: Associated Business Programmes.

Vickers, G (1972) Freedom in a Rocking Boat. London: Pelican.

Pseudo-Systems

Arctic skies are wonderful. Everyone knows about Aurora Borealis, of course, and the midnight sun, but the oblique light on either side of the winter darkness produces razor-sharp shadows in a landscape sleeping under a blanket of snow that reflects orange and turquoise light from the sky. Sometimes sausage-shaped clouds, several kilometres long, hang there all day - so persistent and coherent that one comes to think of them as objects in their own right.

Sausage-shaped clouds remain fixed relative to neighbouring hills and are related to the phenomenon of ‘mountain lee waves’. The atmosphere behaves elastically - it can oscillate in the vertical direction like a plucked guitar string. In this case the air is ‘plucked’ when a horizontal, low-level wind-flow is forced upwards by the hills in its path. The oscillations are maintained by restoring forces in the air itself, which increase as the air mass is displaced from its resting position. On being released to the lee-ward of the hills, the air-mass returns towards its equilibrium position, overshoots, compresses and bounces back. The mathematics of this simple harmonic wobble are identical for many oscillating systems, irrespective of the physical forces involved. In this case the oscillation arises from the opposing forces of Archimedean upthrust and gravitation.

When lee waves form, the air is flowing horizontally and oscillating vertically, so the wave pattern is traced out to the lee-ward of the hills. As the oscillating air rises in the lee-wave, it depressurises and cools (like spray escaping from a pressurised can). This causes water vapour to condense or even freeze onto minute dust particles, where it becomes visible. As the air begins to move downward again, it repressurises and warms (like air in a bicycle pump), causing the same water droplets (or ice crystals) to evaporate and disappear. Although the air itself is flowing over the ground, the nodes of the lee-waves remain more or less stationary and this is why the clouds themselves are so persistent.
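As a minimal mathematical aside (my own illustration, not part of the original argument): whatever the restoring force, a parcel displaced vertically by an amount z obeys the same simple harmonic equation,

$$\frac{d^2 z}{dt^2} = -\omega^2 z, \qquad z(t) = A\cos(\omega t + \varphi),$$

where ω is the natural frequency fixed by the strength of the restoring force (here the balance of Archimedean upthrust against gravity) and A is the amplitude set by the initial ‘pluck’.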



Unlike other types of clouds, which are (simply speaking) collections of condensed water vapour drifting with the wind, lee-wave clouds remain fixed relative to the ground as the wind blows through them. Just as you never step into the same river twice, so you never look twice on the same sausage-cloud. On favourable days, lee-wave clouds form in ranks and files and the boundaries of each cloud delimit the intervals at which phase transitions occur. What leaps to the eye as an emergent, persistent species of thing turns out to be the localised manifestation of flows, opposing forces, wave-patterns, interactions and state changes among the host of molecules and dust particles distributed through the atmosphere.

There is, I believe, a genuine analogy here between the physical and human sciences. Often what leaps to the eye as a coherent, persistent object - a social system - turns out, on close inspection, to be a transient artefact of distributed state changes and fluxes. My task in the rest of this section is to translate this analogy into a strategy for modelling self-organisation, transmogrification and decay in human societies while making that model sufficiently generic to cover ecological and biological processes. The question we need to consider is: what sort of ontology should we impose on such processes? Remember, we have to decide what we will call a system, what constitutes a process, what attributes they transform and so on. Sausage-clouds, I submit, are not convincing systems in their own right but attributes of a larger atmospheric system. So is the human body a system or an attribute? What about an ecosystem?

Macroscopic biological organisms retain their spatial integrity and internal differentiation for a period of a few months or years. They can be treated as systems of a recognisable species or type without much loss of insight. A few gametes may migrate from one organism to another but livers and hearts tend to stay put. However, ecological and social ‘systems’ have leaky, indefinite boundaries like clouds. When a herd of deer moves from the forest ecosystem to the parkland ecosystem, an epistemological no-man’s land is created within which scientific notions of causality no longer hold at the ‘system’ level. Operationally speaking, it might be


wiser to treat biological organisms as systems but reserve judgment on eco-‘systems’. Now I am not suggesting that hard and soft systems are qualitatively different. The difference between them is one of degree and the distinction of system from attribute is only ever pragmatic. The newly fertilised ovum and the decomposed corpse have no organ systems. Human bodies have chemical messengers and gradients. Blood cells and so forth often migrate more or less freely between organs. Biological organisms are evidently ephemeral and imperfectly bounded over the long term but can be treated as complete, well-bounded systems in some universes of discourse. In cultural ecodynamics, however, we are obliged to contemplate the spontaneous emergence of things which are persistent and easily apprehended using sense data, possibly classifiable into named species and genera, and yet whose boundaries form, shift and dissolve through time. So the decision to treat them as a system or as a dynamic attribute of a system is pragmatic and must be made in respect of the characteristic spatio-temporal envelope in which we are working. This is why I insist that we cannot say of some thing that it really is a system; rather, we impose the systemic ontology on sense experience by claiming it acts as if it were a system.

Many objects seem to be aggregations of two or more types of component. Higher plants almost certainly represent a symbiotic relationship between a eukaryote (a cell with a definite nucleus) and a photosynthetic bacterium. It is likely this relationship emerged as part of a modified predator-prey relationship. One species was accustomed to use the other as a food source; the other resisted digestion. Soon the eukaryotes were consuming surplus foods excreted by the green bacteria and providing them with an environment in which they were able to flourish and reproduce. In other cases, the aggregation can be between an organism and a non-organism. (A virus is not a convincing organism, but a virus-infected cell becomes an organism that may have an extended, independent existence.)


Ultimately, aggregation is the only mechanism by which a category of system can emerge. The first life-forms, for example, must have arisen as aggregations of non-living components. However, speciation events (epiphanies) sometimes occur that produce two or more new species from an old one. When planktonic plant cells first settled onto the sea-bed, for example, a speciation event took place that distinguished benthic (settled) from planktonic forms. The benthos became, in effect, a new species of organism with an internal gradient. Part of its body was fixed to the substrate and the rest bathed in light and water. Benthic forms competed with each other for access to nutrients, light and gases. Under competition for resources, speciation occurred again and again and eventually produced a host of benthic algae both on marine coasts and in fresh-water.

Some of these algae (probably fresh water forms) in turn gave rise to the mosses, liverworts, ferns and higher plants we find on land today. Higher plants in turn speciated to form the whole range of vegetation types, from minute herbs hanging onto thin, mountainous soils to the vast forest trees that dominate and sustain forest ecosystems. All these forms seem ultimately to have derived from an aggregation event (the event that brought the plastid into the cell), a speciation event (differentiating benthos from plankton) and finally a global speciation cascade among benthic forms (that produced a rich hierarchy of plant genera, families and classes).

Lest we imagine that cultural ecodynamics deals with esoteric issues of interest only to biologists, I present a more humanistic example. A human being is always a mammal, but is not always a member of the audience at the Alhambra Theatre. Sometimes it is a member of this audience and sometimes it is not. Now the Alhambra Theatre is a thing with an observable, physical presence. It is in no way more imaginary or metaphysical than a donkey or a dung-heap. Indeed, we can, if we wish, describe a theatre as a system. Like all systems, a theatre has a state - the size and composition of its audience and the number of people sitting in its stalls or boxes vary from day to day, hour to hour.

Theatres form by aggregation and speciation too, just like plants. A group of people and material artefacts come together and interact with each other. In an undifferentiated space, freeloaders can watch the show without payment. A boundary is created that regulates the flows of humans (and other resources). In this way, the theatre develops an internal environment that differs from, but is responsive to, its external environment. The internal environment maintains itself by regulating the flows of humans and resources across its boundaries. A theatre has devices for capturing resources, but it is emphatically not a static group of people or a persistent, invariant set of some sort. A theatre is a localised manifestation of migration and interaction between people and things distributed across a large block of geographical space.

A theatre’s human components speciate very quickly into two groups. One of them, the staff, spends longer in the theatre than the other (the audience). Audience humans enter in one state (richer) and leave poorer. Staff humans enter poorer and, if the theatre is viable, leave richer. Neither group ever stops being mammals or components of the biosphere. Both groups spend a large proportion of their lives as members of a family and residents of a community. Each group can be thought of as a type of resource that feeds and sustains the theatre. Each cycles through the theatre in a different way and, by their synergetic interaction therein, sustains the whole.

Phrases like ‘self-organising system’ or ‘social system’ are, I suppose, too deeply embedded in contemporary systems jargon to abandon, but their uncritical use is unhelpful. To call an object a system is to assert that it presents (within some universe of discourse) as a persistent, spatially bounded object so constituted that flows across the boundary are regulated in some way. All systems can be assigned to named classes on the basis of valorisable attributes and many are persistent aggregations of subsystems. Many social ‘systems’ appear cloud-like and insubstantial in comparison.



1. Time Geography

The Swedish geographer Torsten Hägerstrand has developed a parallel ontology (time geography) that can be used to describe the formation, development and decay of what one might call ‘pseudo-systems’ in cultural ecodynamics. Consider a three-dimensional space. One dimension represents time and the other two are geographical dimensions. We can visualise this space as a series of superimposed maps, each drawn at a different moment. Humans living in this space are subject to a number of constraints that distinguish possible from probable, improbable and impossible configurations.

At close quarters proximity constraints are important. Humans cannot be precisely co-located in space-time (car crashes are often fatal). Indeed, we tend to avoid even near co-location, becoming troubled when neighbours fail to keep their distance. Weak proximity constraints are socially constructed. Intimate friends, people on a crowded bus and members of different ethnic groups react differently. Over larger distances proximity constraints become negligible and other behavioural constraints attract humans to each other and to pockets of resources. They form clusters or bundles in space-time.

Many composite activities demand special combinations of resources; Hägerstrand calls the places where these combinations are assembled “pockets of local order” (POLOs). The collection of bridges, trading posts and defensive structures created to manage a river crossing, shops containing a range of goods and personnel, and the office in which I keep my files, desk and computer are all POLOs. Producing POLOs often requires the co-operative efforts of many people. Some also involve structures that persist long


after their original function has been forgotten. Modern motorways follow the line of ancient roads; cities occupy ancient river-crossings and abandoned churches are recycled as shopping arcades. Even small POLOs can be recycled in this way. In my own archaeological research, for example, I once studied a hearth a few metres wide in a Palaeolithic rock-shelter; re-used over thousands of years, it imposed a consistent structure on the distribution of bones and artefacts as people prepared food and repaired hunting gear around it. That hearth was a POLO.

Each human, while it lives, traces a smooth, continuous curve through space-time that could be simulated by a mathematical function mapping time onto geo-spatial coordinates over the interval of time that is one life-span. It is natural, then, to invoke a field (like a magnetic field) to describe these flows. At every point in space-time we define a vector that represents the potential local rate of flow. This is the field of space-time constraints or, more briefly, the ‘constraint field’. A spontaneous fission process or, perhaps, a singularity (an abrupt discontinuity) in the constraint field would simulate a birth. Once called into existence, the infant would be expelled from the mother’s body by the strong proximity constraints that prevent people becoming co-located. Each death would correspond to a spontaneous decay process or, perhaps, another singularity.

The constraint field clearly does not represent a causal structure. Deaths and births are not caused by singularities in a vector field but by hunger, love, desire, habit and the natural cycle of growth and decay. The constraint field is an effect; a succinct way of summarising space-time behaviour that has the advantage of mathematical tractability. Nonetheless it is an appealing and interesting model. Hägerstrand developed a rich word picture of space-time behaviour in which he visualised the inter-weaving life-lines in space-time as the fabric of human interaction. Humans are engaged in “projects” (Hägerstrand’s term for what I call a process) that require them to migrate through space-time to achieve proximity between themselves and other objects.
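To make the ontology concrete, here is a minimal sketch of how it might be encoded. The names, coordinates and thresholds are mine, invented purely for illustration: a lifeline is coded as a function from time to map coordinates, and a strong proximity constraint as a minimum permissible separation between two lifelines sampled at the same instant.

```python
import math

# A lifeline maps time (hours) onto geo-spatial coordinates (x, y) in metres.
# Two illustrative, smooth lifelines: one person staying at home, one commuting.
def lifeline_home(t):
    return (100.0, 200.0)                                    # stays put all morning

def lifeline_commuter(t):
    return (100.0 + 50.0 * t, 200.0 + 30.0 * math.sin(t))    # drifts smoothly away

def separation(line_a, line_b, t):
    """Distance between two lifelines at the same instant (a synchronous landscape)."""
    (xa, ya), (xb, yb) = line_a(t), line_b(t)
    return math.hypot(xa - xb, ya - yb)

def proximity_violated(line_a, line_b, t, minimum=0.5):
    """Strong proximity constraint: two humans cannot be (almost) co-located."""
    return separation(line_a, line_b, t) < minimum

# Sampling the constraint through a morning: a bundle exists while separations are small.
for hour in range(0, 5):
    d = separation(lifeline_home, lifeline_commuter, hour)
    flag = proximity_violated(lifeline_home, lifeline_commuter, hour)
    print(f"t={hour}h  separation={d:7.1f} m  constraint violated: {flag}")
```

Nothing here predicts where anyone will go; like the constraint field itself, the sketch merely summarises which configurations are possible at a given instant.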


Imagine a home (my home, say) at four in the morning with only one person present. The constraint field determines a probability density that describes the expected location of that person - little humps over each of the sleeping places and another, much smaller, in the bathroom. Now add another person and those distributions shift because one of the places is taken. Adding, one by one, the four people who live there reduces our adaptive potential, demonstrating that our positions are constrained by our relationships with each other. If one of us is in the bathroom at four a.m., another will only be present in an emergency. If another gets up to use the bathroom and finds it occupied, an epiphany has occurred (a weak proximity constraint is violated) that modifies the way we were interacting with our environment. We shift from normative to adaptive behaviour.

The constraints upon us will be relaxed in the aftermath of a party or if we have guests. They will vary through the diurnal cycle as we go out into the world to interact with other objects and return again. Some of these objects may be other humans (colleagues, perhaps). Others might be institutions (maybe a theatre) or even inanimate resources (a posting-box). As humans and resources move through space, their respective time-lines interweave to form a locally dense, four-dimensional fabric with resource bundles forming and dissolving as time passes. Each of these bundles is perceived in the current landscape as a POLO. Hägerstrand argued that fundamental space-time constraints (crowding constraints and maximum migration rates) and the nature of space-time processes must determine the form of this fabric.

Hägerstrand’s understanding of processes allows us to consider composite processes - nested and intertwined lifelines - a spontaneous choreography of movement that weaves the fabric of human life-lines subject to space-time constraints. These constraints arise partly through the laws of physics and partly from basic biological drives, but many are socially constructed and manifest as cultural norms and the habits and routines which he calls ‘projects’.


2. Space, Time and Memory

A landscape is a synchronous set of objects - a cross-section of space-time at right angles to time - containing the geographical sub-space itself and all the objects, animate or inanimate, within it. You and I are landscape objects by this definition. Communication is an instantaneous data flow between landscape objects that changes the state of at least one of them. By analogy with cybernetic theory, a data stream that changes the state of the recipient is information. Data that do not induce a state change are just data. All communication takes place in a landscape. If you were in my office now, you could ask what I ate for dinner yesterday, see where I am sitting and arrange to meet me tomorrow, but you could not see me eat my dinner yesterday or cycle to work tomorrow.

Time and space present themselves to our senses in different ways. Sight, smell and hearing can all penetrate space over short distances; touch and taste work only on the surface of our bodies. None of our senses can penetrate time directly. Memory is an artefact of the past manifest in a contemporary landscape object. It can be localised in one person (my memory of yesterday’s dinner, for example) or distributed across several objects, some of them inanimate. Consequently, there are many ways we can elicit information about the past. We can make an effort to remember, ask neighbours, read a history book, organise an archaeological dig or win clues to the origin of the Universe by studying light arriving on Earth from deep space. However, we cannot communicate directly with the past or the future. Memory is the nearest humans have to a sixth sense that can penetrate time.

As time passes, an uncertain future becomes a certain past and we humans fabricate a narrative to accommodate it. This narrative, our memory, is a history. When nothing particularly surprising happens, constructing a history is a comfortable, effortless process, but sometimes unexpected events bring about a dramatic change in our circumstances.


Such events are often traumatic, even when they create valuable opportunities. We invoke the most implausible explanations to assemble an inexorable, deterministic chain linking the past to the present. Of all traits, the most difficult to influence are those of which we are, for the most part, unconscious. The construction of history is such a trait. It provides an impression of continuity that builds in us a tacit faith that the present is, in some sense, normal and inevitable. When our actions fail to produce the expected outcome we do not simply adapt to the new circumstances, but demand explanations for this deviation from ‘normality’. Indeed, the thrust of a great deal of policy-relevant research is to sustain normative patterns of behaviour despite a perceived challenge. The wish to avoid economic recession, for example, is often stronger than the wish to accommodate it because we have come to think of economic growth as not only desirable, but normal. The citizens of depressed industrial towns who adjust to long-term unemployment are more likely to be vilified for their laziness than praised for their adaptability. Not everyone thinks in this normative way, however. As a child I was given a handwriting exercise:

For the want of a nail, the shoe was lost;
for the want of a shoe, the horse was lost;
for the want of a horse, the rider was lost;
for the want of the rider, the battle was lost;
for the want of the battle, the Kingdom was lost,
and all for the want of a horse-shoe nail.

My teacher kindly explained that this was to show us the importance of attention to detail - check the horse’s shoes and the Kingdom is safe. Yet these words still make me a little edgy because they had the opposite effect on me. They helped me understand that the future had so many unresolved contingencies that no Kingdom was ever safe. As we look into the future, the impression we have of a deterministic trend evaporates and we see a host of diverging pathways, each of which is called into being by a set of unresolved contingencies: what if a nail falls out of a


horse’s shoe? Resolving those future contingencies demands a shift of emphasis from action a posteriori (assuming past behaviours will be sustained in the future) to action a priori (the ‘what if?’ approach). The terms a priori and a posteriori conventionally refer to the reasoning methods known as deduction and induction. Reasoning a posteriori requires us to work from observed effect to inferred cause or law. Charles Darwin claimed to have used this method in the Origin, inferring evolution on the basis of accumulated evidence. Reasoning a priori is the process of deducing effects by the logical manipulation of such laws or axioms, as Euclid did in his geometry textbook, the Elements. Clearly, in logic these reasoning methods are both indispensable and inter-connected. One creates axioms; the other converts them into conclusions for empirical evaluation.

Here I extend these ideas from the domain of abstract reasoning to that of concrete action. Action a priori and a posteriori, though clearly related to the reasoning methods of the same name, represent behavioural tendencies that may become divorced from each other, sometimes with disastrous consequences. Although we may frown on ancient monarchs who justified their tyranny on the grounds that they ruled by the unshakeable will of Almighty God, our own reluctance to experiment with new behavioural norms is rooted in the same a posteriori bias as theirs. When we think of a normal landscape we think of the present (or something a bit like it) and when we think of a natural landscape, we think about our own childhood and the reminiscences of parents and grandparents. However, when we adopt an a priori stance - setting out hypotheses and confronting all those unresolved contingencies - we develop an alarming sense of how unusual and improbable the status quo is. Deep-rooted beliefs suddenly become picturesque local customs and accidents of history. We are confronted with a range of possible worlds, many of them very unfamiliar and surprising. Action a priori (adaptive action) is much more


demanding and disturbing than action a posteriori (normative action). It undermines our faith in norms and trends, alerting us to possible threats and opportunities. It is very unlikely, for example, that we would ever have detected the effects of greenhouse gases a posteriori by monitoring climate trends had we not been alerted to the possibility of global warming by reasoning a priori from theoretical first principles. Action a priori is, at once, a gift and a curse. It keeps society as a whole open to a new range of possibilities but damages its practitioners by undermining their sense of normality, naturalness and identity. To dwell among future contingents is to live rootless and alone, painfully aware of the fragile conjunction of circumstance and consensus that underpins almost everything we depend on. It is much more comfortable to smooth out your memories, embed normative behaviours and expectations deep in your psyche and shun a priori thinkers, particularly those who advocate concrete action, as dangerous psychotics.

Receptiveness to a priori thinking varies between individuals in respect of temperament, education and circumstances. In general, those whom the status quo has made powerful and those enfeebled into dependence on it are especially reluctant to contemplate adaptive change. The most successful innovators tend to be interlopers, willing and able to adapt to a wide range of possible futures. Most innovators, however, are excluded from, or barely tolerated by, the communities whose adaptive potential they sustain. Reasoning a priori may destroy your credibility and social standing, making it hard for you to influence your neighbours, even if your concerns are well founded.

3. Picturing the Cognitive Process

I will now present the cognitive process as a dynamic, three-dimensional word-picture. We will construct it together in easy stages. A landscape is a section through space-time in which every person is represented by a point. Freeze time, so we can discuss the static case, and represent all the people by points in a single landscape which, itself, is a mappable plane. The only information available to any person is that valorised from an arena around his/her current location and, of course, from memory. The shape of the arena is determined by location (we cannot see through walls, for example) and process. A person reading a paper will not be monitoring the ground twenty metres ahead; the driver of a car should. So the shape of the arena is context-specific. It is affected by the actions in which a person engages and by the constraints imposed on the senses by that person’s landscape. Visualise the arena as a bright area of variable extent around each person. Communication between people is easiest when their arenas intersect.

All those people have a history which we can represent by tracing their lifelines, point by synchronous point, back through earlier landscapes. They cannot yet be traced into the future! Time is currently frozen, but our picture is not timeless; it has a history. Construct old arenas behind the current one to represent memory. They will extend like a little tail-flame behind each person to illuminate the region of space-time s/he is valorising. The smooth curve that is the human lifeline extends into the past; the tail-flame is the spatio-temporal universe of discourse within which the human is valorising objects. You can now unfreeze time. As time passes we see that the memory-flame is not fixed; it can be re-formed by the effort of valorisation.

The flames are coloured to represent the characteristic knowledge-filters each human is using to remember. Memory naturally tends



to adhere to the individual’s lifeline but can be re-formed and re-coloured by the effort of remembering. Perhaps someone is thinking about lunch and where to get it. She must valorise the eating-places she has found close to her office (the flame changes shape to exclude her gastronomic adventures in Paris or Barnsley and blushes a fine hungry purple). Another is trying to recall the name of a man met at a conference (memory contracted to a blur around a single weekend and glowing a bilious green). Everything humans know is valorised in the light of these flames. This is where we find childhood nursery rhymes, remember how to behave at school, where to catch the bus, how to travel to a neighbouring city. This is where we find friends and cement relationships. When we communicate with each other, all the beliefs, memories and expectations we draw on are found in this little flame. It is for this reason that I call it a universe of discourse. A universe of discourse consists of the arena a person is currently monitoring and a set of remembered arenas valorised in the light of that flame.

Not only does this memory change shape and colour as we travel through life; our ability to control and direct it varies too. In later years we may be unable to recall the names of school friends or the layout of the town before the new shopping centre was built and yet be cast back forty years by the smell of sun on cedar wood. In many tragic cases, dementia reduces memory to a grey stub a few minutes long, sometimes with a little glowing light around some relict of childhood or adolescence. Forgetting almost everything, the victim spends the last years of life trapped in a bewildering present - unable even to recognise close family and friends. The process of remembering can sometimes run off track completely. Drug abuse, ecstatic spiritual experiences and some types of physical illness may cause memory to spark away from the trajectory of one’s own life and create pseudo-memories with all the clarity of reality. Memory is not dependable, but it contains all the life-experience we can mobilise to make sense of new landscapes as we withdraw from old ones.


It is possible to valorise distributed memory, but that requires a special, conscious effort. Archaeologists and historians do this often, inferring the memories of dead people and artefacts from the traces preserved in the landscape or valorised from documents retrieved from library stacks. While we live, we create a lifeline that is a continuous, smooth curve exactly as I described above, but this line is surrounded, as it grows, by a tail-flame of memory anchored and abruptly truncated in the present and probing the past. As time passes these memory-flames creep forward, not destroying, but creating and extending lifelines into the future. The leading edge of each flame (an arena belonging to some time geographical actor) is perfectly synchronised with all the others. This is the rolling frontier of human knowledge where history and geography intersect. Here the past and the present are valorised and the future constructed in the minds of millions of people.

It is not possible to tell precisely where each lifeline will go, of course, but we can be sure that the line, when it is created, will be a smooth curve with no perfect, sharp corners over the open interval between birth and death. The constraint field will be shaped by our natural sociability as we seek out friends and family and by the daily routine that makes us abandon them to go about our business. These are the processes that weave a fabric of human life-lines in space and time. We also know that POLOs are often conserved and recycled in the landscape and it seems reasonable to assume this conservative feature will be manifest in the constraint field. Space-time constraints narrow the range of possible geographies down a little but are not enough to predict the future exactly. However, they do have some predictive power, a power that can be enhanced by considering appreciation - the process by which humans convert sensory experience into knowledge and knowledge into action.


4. Appreciation

Appreciation is a discursive process by which a set of explicit judgments (a conceptual model) is formalised into policy. Sir Geoffrey Vickers developed the method in the 1960s and early 70s. He, like Hägerstrand, was not very interested in computational approaches and mathematical procedures (as I am). I have made a few adjustments and simplifications to make it applicable to integrative work and link it to the new systems ontology described in Section 2, Essay 3, but the basic structure is his. The appreciative method is a way of making judgments explicit to facilitate the epiphanies that in turn determine policy. It is a complete approach to political reasoning - a paradigm, if you will - that allows practitioners to construct axioms, deduce corollaries and determine policy. Policy, as Vickers conceives it, is a set of explicit norms and tacit customs that regulates executive action. The receptivity of a person or group to revised norms and customs is its appreciative setting; it is constrained by intellectual competence and knowledge.

There are, broadly speaking, three types of judgment. Reality judgments refer to what objects actually exist, or might exist in the future. Reality judgments either relate to processes (transformations) or to products (systems or attributes that can be facilitated by human agency). Products are directly observable in some universe of discourse. Processes, however, have substantial time-depth and so are harder to valorise. They often involve establishing or sustaining relationships between people and resources. A letter on the doormat, for example, is a product; correspondence is a process. Anyone can valorise the letter, but valorising correspondence requires us to contemplate a spectrum between a weekly exchange of letters and an annual Christmas card and specify a more or less arbitrary cut-off point. Valorising processes is always difficult. As a class their key is also their essence - they are mystical objects whose existence we infer when attributes change and whose essence is that they change attributes.


It is relatively easy to valorise and even quantify products, but processes are ephemeral. Applied scientists and policy makers, especially those in whom the positivist distinction of science from metaphysics is culturally embedded, often compensate for this by neglecting processes. This, as we will see in the next essay, is a mistake.

Operational judgments link products to processes in an implied causal relationship. They refer to elective changes (choices) that constrain or shape processes. If the existence of a product makes a type of process more or less likely, it may be useful as a policy instrument. For example, a bridge makes it easy to cross and re-cross a river, so building a bridge could be a policy instrument.

Value judgments associate values with some product or process. Here are some specimen value judgments:

• The appreciative process is valuable insofar as the policies it produces provide direction, coherence and continuity to the executive processes they regulate.

• Value judgments that refer to processes are particularly useful. They defuse ontological conflict by directing attention toward the quality rather than the quantity of human experience.

Value judgments give direction or purpose to policy. They cannot always be quantified. Value judgments with strong moral or political overtones imply equivocal reality or operational judgments. As consensus becomes stronger, morality gives way to ethics, which, in turn, gives way to tacit common sense. The extent to which we contest reality judgments and invest them with moral significance is only partly determined by their impact on our physical well-being. An appreciative setting is more than just enlightened self-interest or cost-benefit optimisation. It has to do with a sense of identity and the desire to retain the trust and understanding of neighbours.
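For readers who find it easier to think in data structures, here is one purely illustrative way of encoding these ideas; none of the class or field names are Vickers’ own, and the toy policy() method is only a sketch of how valued processes and operational judgments might be combined:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RealityJudgment:
    statement: str        # what exists, or might exist ("there is a letter on the doormat")
    refers_to: str        # "product" (system or attribute) or "process" (transformation)

@dataclass
class OperationalJudgment:
    product: str          # e.g. "a bridge"
    process: str          # e.g. "crossing and re-crossing the river"
    effect: str           # how the product makes the process more or less likely

@dataclass
class ValueJudgment:
    subject: str          # the product or process being valued
    value: str            # qualitative direction or purpose; not always quantifiable

@dataclass
class AppreciativeSetting:
    """A person's or group's receptivity to revised norms and customs (illustrative only)."""
    reality: List[RealityJudgment] = field(default_factory=list)
    operational: List[OperationalJudgment] = field(default_factory=list)
    values: List[ValueJudgment] = field(default_factory=list)

    def policy(self) -> List[str]:
        # In this toy rendering, policy is the list of products worth facilitating
        # because someone has judged the processes they enable to be valuable.
        valued = {v.subject for v in self.values}
        return [o.product for o in self.operational if o.process in valued]
```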


5. Value Judgments

Vickers believed value judgments (either tacit or explicit) to be key determinants of human behaviour. No-one would study celestial mechanics who did not think this worthwhile. Even if the motivation were purely financial, a value judgment is implied. His second point, made with equal force by Arne Naess, the author of an approach to environmental ethics known as ‘deep ecology’ or ‘ecosophy’, is that, in integrative contexts, people should reflect on the ethical and evaluative basis of their actions. Many of the actions that give our lives value consist of establishing and maintaining relations with people and things. These processes, as Hägerstrand explains, create interpretable patterns in space-time and so form a natural link between appreciative theory, ecosophy and time geography.

The distinction of processes from products frees us from the idea that every human action is directed towards a well-defined goal or product. It is found in Hägerstrand’s conception of a time geographical project and very clearly enunciated by Vickers. The purpose of driving an automobile might be to go fishing, but what is the purpose of going fishing? We have to let go of the idea that the purpose of every action is a desirable state and accept that the purpose of going fishing may be that someone likes fishing. The purpose-ridden man, Vickers complained, was an odd abstraction:

“the purpose-ridden man’s only ‘rational’ activity is to seek goals; but since each goal is attained once for all, it disappears on attainment leaving him ‘purposeless’ and incapable of rational activity unless and until he finds another.”

To date we have had little success in our attempts to model purposeless behaviours because we are still using the seventeenth-century ontology in which time-invariant processes drive a system towards a definite end-state. In such a world simulated humans are like busy bees, running all day to gather the necessities of life, optimising utility functions and striving continually for progress.


The absence of explicit goals leaves human behaviour under-determined, so modellers are obliged to invoke a fixed set of purposes and shuffle actors stochastically from one normative action to another. Computer-based simulation models often contain a time-wasting ‘purpose’ - the modeller parks simulated actors in queues to wait out those intervals when the real humans they represent are doing nothing in particular. Products are important, but we cannot avoid assigning qualitative values to processes (like a disease-free infancy or relaxing with friends) or using those values to inform policy, because processes are the critical determinants of human well-being. For this reason, system practitioners often recommend a qualitative approach to soft systems. Empathy can provide insights into human behaviours that cannot be explained in terms of goals and optimised utility functions, but can be explained in terms of a wish to understand and interact with our environment in a comfortable, familiar way.

Once we accept that our ‘goal’ is to sustain a familiar process, human behaviour becomes fully determined without being inflexible. Everything we do can be interpreted in terms of valued processes rather than states or products. We learn through play, communicate with our neighbours, experience peer pressure and mooch aimlessly while retaining the ability to respond coherently to extreme stress. The challenge for researchers and policy makers is to develop models able to represent cultural ecodynamics as we actually experience them. The technical obstacles themselves can be overcome - it is certainly possible to construct a mathematical function that maps a process onto a process. However, to do this effectively, we need to extend the distinction of product from process in a way that allows us to talk meaningfully about changing values and beliefs. This is the reason I advocate the distinction of systems, attributes, processes and epiphanies.
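For readers who wonder what ‘a function that maps a process onto a process’ could look like in code, here is a minimal, invented sketch (the names, numbers and the blending rule are all mine, chosen only to make the point): a process is represented as a function of time, and a higher-order function returns another process rather than an end-state.

```python
from typing import Callable

Process = Callable[[float], float]     # a process: some quantity unfolding through time

def fishing_trip(t: float) -> float:
    """A valued process: 'enjoyment' of an afternoon's fishing, peaking mid-trip."""
    return max(0.0, t * (4.0 - t))     # zero outside a four-hour window

def habituate(process: Process, weight: float = 0.5) -> Process:
    """Map a process onto a process: blend today's activity with a remembered norm.

    The result is itself a function of time, not a goal or a desirable end-state.
    """
    def routine(t: float) -> float:
        remembered = 2.0               # a flat, remembered level of the same activity
        return weight * process(t) + (1.0 - weight) * remembered
    return routine

# The output of habituate() is another process we can evaluate at any moment.
routinised_trip = habituate(fishing_trip, weight=0.7)
for hour in (0.0, 1.0, 2.0, 3.0, 4.0):
    print(f"t={hour}: raw={fishing_trip(hour):.2f}  routinised={routinised_trip(hour):.2f}")
```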


6. Empathy and Explanation

The most significant landscape features we encounter are other people. We spend a lot of time communicating with each other and negotiate knowledge thereby. We seem predisposed to understand sensory experiences, especially those associated with other people. Where other great apes build trust by grooming, humans do so by talking and listening and so negotiate a congruence of beliefs with neighbours. Our advanced communication skills are underpinned by empathy, the innate drive to explain patterns among observations as the product of an intelligent, purposeful agency. Empathy is not only the basis of language acquisition; it is an indispensable skill within the humanities. When the historian observes consistent relationships between the stated beliefs and actions of an individual, s/he begins to empathise.

Our empathic abilities have been hard-wired into us by millennia of natural selection. Without them, we would not be predisposed to language. Without language there would be no communication, no need for shared beliefs and no mathematics or formal symbolic reasoning. Empathy is a prerequisite of science. When our understanding of the material world breaks down we may let anger spill over into our social relationships. Scientists thunder at each other about the correct interpretation of equations and families bicker about maps. They would hardly risk these important relationships if it were not for their emotional investment in the empathic link with the equations or the map.

However, scientists have long understood that teleology can lead us astray. Economists who assumed that humans are all rational actors stolidly maximising financial utility functions would be unable to take account of changes in mood quite evident to professional market analysts and political commentators, for example. Yet there is no escape from teleology. The pathetic fallacy (a tendency to ascribe personality and intention to inanimate objects, as when one talks of ‘cruel winds’) is actually indispensable. Every act of communication, even communication between a human and a book or traffic signal, works because the observer invests a bundle of sensory information with some significance. This is so even when the source of information is not a human artefact. Humans routinely interpret patterns of cloud and air pressure to predict the weather, for example. It is no coincidence that we may apply the same emotive words to neighbours and skyscapes. We use the same empathic competence to interpret both.



7. Innovation Jolts

Remembering our own experiences is much easier than valorising distributed memories. In distributed memory the experience is reduced at least twice. It is reduced when it is coded into another landscape object and reduced again when that object communicates with us. This manuscript, for example, does not contain my memory, but a reduction of my memory into the currency of words. If you want to know what I know, you must make a conscious effort of valorisation. It is easier for me to interpret the words on this paper than for you to do so. The words describe observables that are familiar to me because I have already developed the corresponding valorisation functions. For other readers, however, the words may not make sense at all because they have not had cause to develop those functions. They must either baulk at this essay as meaningless sense-data, or develop interpretive filters - valorisation functions of their own - that enable them to convert it into useful information. Young people are generally more willing to do this than their elders; their lack of experience and longer life expectancy mean they have more to gain from a priori than a posteriori reasoning.

Throughout our early years we must build a vocabulary and knowledge - a set of shared beliefs that enables us to communicate and co-operate. There is much more to this than merely developing a dictionary-list of words and meanings. We have to develop valorisation functions and continually cross-check our understanding against that of our neighbours so as to maintain and develop effective communication. Most people switch roles as they grow in experience. We cease to be the recipients of shared knowledge and become its custodians. Our beliefs become culturally embedded and inflexible as we shift into an a posteriori mode. We lose our ability to learn new language and become intolerant of new conceptual structures. We may be embarrassed by the ingenuousness of those who challenge or even explicate our tacit beliefs and perversely inclined to impose those beliefs


on others as a sort of moral imperative. There are good evolutionary reasons for this. If humans remained open to new conceptual structures and beliefs throughout life, language and culture would change so fast we could not depend on them as reliable media for communication. Advanced communicators like us must be able to depend on conceptual structures because the effort of developing new valorisation functions is disruptive; it takes time and destroys the routines of life. Imagine how exhausting it would be if we had to actually think about the daily pattern of life, to plan breakfast, decide what to say when we meet the postman on the path, work out how to find a paper-clip or a hammer.

Many of us get the flavour of this when we learn to drive. The actions themselves are quite simple, but are not automatic. It takes months to develop the reflexes and regain the impression of a smooth transition from past to future. Once we have done this, the process becomes a routine that scarcely engages the conscious mind at all. Routine journeys may be forgotten within minutes of arriving at the destination. This melding of unconscious knowledge with reflexes is praxis (i.e. normative behaviour). The only time we engage our conscious mind is when the process breaks down. Perhaps a dog runs across the road, a favoured roadway is blocked, a police officer stops the car. Many experience a little shock when forced to take conscious control. It may be manifest in a surge of irritation or a burst of laughter. This is the innovation jolt - a little epiphany that causes praxis to fail and forces us to think about future contingencies (i.e. reason and act a priori) and adapt to the challenges and opportunities we face.

Innovation jolts make us work and many try to avoid them, especially when the source of that jolt is another person. Our empathic skills and ability to communicate enable us to build trust and a sense of communal identity. We cannot cooperate without understanding and trust. When that understanding breaks down, our communities become dysfunctional. We feel isolated and vulnerable. In social situations there are many things we can do to calm each other down and re-build trust; we smile, flicker our eyebrows,


laugh and touch each other. If the violation is too extreme to laugh off, we may exhibit shame responses and apologise. Those who refuse to do this spontaneously may be forced to do so or ostracised.

Distinctive belief systems usually emerge in larger communities and the conflicts between them usually centre on a demand for epistemological unity. In these circumstances innovation is unlikely and normative behaviour predominates. Anyone who has studied scholarly history or watched Mode 1 silverbacks trying to pound each other into the mud will be familiar with the ritual displays that maintain epistemic unity.

Innovation occurs when an information flow challenges pre-existing beliefs, often enabling us to contemplate future threats or opportunities. Its effect is to jolt us away from action a posteriori (in respect of habits and familiar beliefs) and so facilitate action a priori (in respect of partially resolved future contingents). This may even lead us to reconceptualisation - new valorisation functions and new behaviours. With innovation, an information flow is the cause and the behavioural change the manifest effect. Recall that all information flows between landscape objects. All landscape objects are synchronous structures with no substantial time depth (though some have memory and knowledge). Thus the causes of innovation are products (systems or attributes), not processes. The behavioural change they effect, however, has significant time-depth - it is a more or less persistent, normative process. Adaptive behaviours create new types of process that may lead to spatial re-organisation. Sometimes this leaves traces preserved in future landscapes as a distributed memory accessible to archaeologists, biologists and geographers. The cause is often lost or forgotten, but the effect persists. This is why ethnographic analogies and uniformitarian assumptions are indispensable tools for landscape research.

Section 4: Dynamic and Historical Problems

A trap is only a trap for creatures which cannot solve the problems that it sets. Man-traps are dangerous only in relation to the limitations on what men can see and value and do. The nature of the trap is a function of the nature of the trapped. To describe one is to imply the other.
Geoffrey Vickers

Later academic history shows how tempting it is to accept the convictions of great scholars, enabling new ‘schoolmen’ and new scholastic thinking to establish itself again and again. The discovery of the uncertainty of knowledge must be made ever anew.
Heiko Oberman

And in much of your talking, thinking is half murdered. For thought is a bird of space that in a cage of words may indeed unfold its wings but cannot fly.
Kahlil Gibran

Orientation

Over the last thirty or forty years ‘modelling’ has been elevated to a profession, but the word has been pared down to exclude all but mathematical or simulation models. I have built a good few simulation models over the years but find this narrowness frustrating. A model is a tool for disciplined contemplation - a device for working out what you believe and what those beliefs imply. Many of the most useful models are non-mathematical. This section is to explain how models are used. The treatment of dimensional coherence may seem arcane, but is a very useful idea for those wishing to integrate hard and soft approaches. It resonates with Paul Feyerabend’s ideas about incommensurability. However, there are significant differences of emphasis between Feyerabend and me. I am a practitioner and a methodologist - aware that local


commensurability is both operationally and logically possible and can even be facilitated. Feyerabend was a philosopher and a Dadaist and argued that we shouldn’t take things too seriously. He was not much impressed by methodologists and inveighed against us, writing a lot about incommensurability while seldom mentioning commensurability. This, I think, is his Achilles’ heel. You can’t anatomise science (as Feyerabend did) without killing it. His principal conclusion (that the only universal methodological principle is ‘anything goes’), though pedantically correct, is useless. Universality is a red herring. Methods are heuristics, not God-given laws. They were developed to exploit, to locate and latterly to create domains of approximate commensurability. Yes, sometimes we set those rules aside and try something different - so what?

Feyerabend, PK (1975) Against Method: outline of an anarchistic theory of knowledge. London: NLB.

Although the idea of a Phoenix Cycle is immediate and intuitively accessible, Phoenix Cycles are only visible when you view history from a distance, as it were. Viewed at close quarters, what you see is a process of reification - the practice of claiming that one’s own beliefs are universal truths. This is the scholastic trap.

Bishop Berkeley made his comments about fluxions as part of his introductory polemic to:

Berkeley, G (1734) The Analyst; or a discourse addressed to an infidel mathematician.

The role of necessity in science is explored by:

Rosen, R (1989) The Roles of Necessity in Biology. In Casti, J and A Karlquist (eds) Newton to Aristotle: Towards a Theory of Models for Living Systems. Boston: Birkhäuser. pp. 11-89.

Meme theory was first proposed in:

Dawkins, R (1976) The Selfish Gene. London: Paladin.

For work on boundary critique see:

Churchman, CW (1979) The Systems Approach (second, revised edition). New York: Delacorte Press.

Midgley, G, I Munlo and M Brown (1998) The theory and practice of boundary critique: developing housing services for older people. Journal of the Operational Research Society 49, pp. 467-478.

The distinction of regulation by norm from regulation by threshold avoidance is in chapter 1 of:

Vickers, G (1965) The Art of Judgment: a study of Policy Making. London: Chapman and Hall.

Vickers wrote about traps in:

Vickers, G (1972) Freedom in a Rocking Boat. London: Pelican.

Heiko Oberman wrote about Luther’s attacks on modernism in:

Oberman, H (1999) Luther: Man between God and the Devil. trans. Eileen Walliser-Schwartzbart. London: Fontana. p. 119.

Kahlil Gibran’s quotation comes from The Prophet, a lovely book originally published in 1923 and still in print.

An easy introduction to social anthropology may be useful background reading. I like:

Carrithers, M (1992) Why Humans Have Cultures: explaining anthropology and social diversity. Oxford: Oxford University Press.

Monaghan, J and P Just (2000) Social and Cultural Anthropology: a very short introduction. Oxford: Oxford University Press.


1. The Uncertainty on a Teapot

Although the skill itself is developed by practice and imitation, the ability to reason symbolically must be innate. Presumably it has been hard-wired into the human nervous system by millennia of natural selection. To believe, as Plato apparently did, that symbolic structures (like numbers) are independent of human agency, is to believe that this evolutionary process has honed our cognitive faculties to the point where our understanding corresponds, more or less, to reality. It is possible that natural selection, under certain circumstances, might punish bio-physical configurations that lead to a poor correspondence between cognitive structures and reality, but it is by no means probable that evolution would underwrite a global optimum. Indeed, there is circumstantial evidence that human cognition is constrained by other factors. The need to interact socially may limit our ability to abstract. Mathematicians and abstract modellers, for example, often find it difficult to communicate with others about the ideas that engage them. This social isolation can lead to frustration and loneliness and, in extreme cases, to suicide. Perhaps, if you want to find the most advanced cognitive apparatus, you should look for long-lived, a-social animals with large brains. How you would communicate with one if you found it, however, I cannot imagine.

We can picture the evolutionary process as a landscape, each point of which represents a bio-physical configuration, and let the altitude of each point represent the goodness of fit between cognitive structures and reality. Under natural selection a population would tend to climb the nearest hill. Chance mutations might determine which hill a population trapped at the bottom of a valley would climb, but would be unlikely to catapult an individual from one hill-top to another. A mutation within a population near the top of a hill would be more likely to push its owner into the valley (where a selective penalty would be demanded). Having reached the top of the nearest hill, the population would be trapped - it would have optimised its bio-physical configuration.
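The trap is easy to reproduce in a few lines of code. The sketch below is purely illustrative - the landscape, the mutation step and the starting point are all invented for the purpose - but it shows a hill-climbing population settling on the nearest peak rather than the highest one:

```python
import random
from math import exp

def fitness(x):
    """A toy 'goodness of fit' landscape: a low peak near x = 2, a higher one near x = 8."""
    return 1.0 * exp(-(x - 2.0) ** 2) + 2.0 * exp(-(x - 8.0) ** 2)

def evolve(start, generations=500, step=0.1, seed=1):
    """Hill-climbing under selection: small mutations are kept only if they improve
    fitness, so the population cannot cross the valley between the two peaks."""
    random.seed(seed)
    x = start
    for _ in range(generations):
        candidate = x + random.uniform(-step, step)   # a chance mutation
        if fitness(candidate) > fitness(x):           # selection keeps improvements only
            x = candidate
    return x

# A population starting near the lower hill climbs it and stays there,
# even though a much higher peak exists elsewhere in the landscape.
local = evolve(start=1.0)
print(f"settled at x = {local:.2f}, fitness = {fitness(local):.2f} (the global peak is near x = 8)")
```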


This optimum will only hold globally if the hill it has found happens to be the highest in the whole landscape. Perhaps we humans are the prisoners of our own evolutionary history, successfully colonising the foothills of reality but unable even to reach the base of the highest peak. If so, we are wise to distinguish reality as it presents to our senses (the substantive domain) from the symbolic structures we use to describe and analyse those experiences.

One of those symbolic structures is the attribute - a genus of object whose species are observable values. Those values might describe a system (the car is half way down the High Street), a process (it is being driven rather fast) or an epiphany (“Arrgh! The brakes are unserviceable and the driver has lost control”) that changes the way we see the world. Every value represents a category or set - the set of cars, the set of fast drives, the set of nasty surprises and so on. Some attributes have numerical values that can reasonably be added or subtracted; some cannot. Most attributes are non-numeric. Each value (numeric or not) represents a named set or category. The value ‘red’, for example, defines a set of red objects and the value ‘0.143’ represents a set of objects possessing a numerical attribute in this state. We can address the valorisation problem, without loss of generality, by considering the task of deciding whether an object is, or is not, an element of a named set.

The valorisation problem is solved when an observation is reduced to a value. Ideally we would like this problem to be well-posed - the value denoting inclusion or exclusion must exist and be unique in the sense that every well-trained, impartial observer will return the same value. However, this is not always possible in practice and we can obtain some useful insights by considering manifestly ill-posed circumstances. A simple way of ensuring that the valorisation problem is ill-posed is to ensure that the range of values does not exhaust
the range of categories manifest. The attribute hair colour with values blonde, brown, red and black cannot accommodate my grey hair, for example. Clearly, for the valorisation problem to be well-posed, the list of possible values we use to represent the attribute must be exhaustive. A second way of ensuring ill-posedness is to ensure that values are not mutually exclusive. Introducing the category ‘peroxide blonde’ would have this effect because a redhead can become a blonde simply by using bleach and reverse the process in a few months. If the values are not exclusive, valorising some experiences will be an ill-posed problem because the values, though they exist, are not uniquely determined in the case at hand. These problems can often be solved by arbitration. We ensure that the values chosen appear mutually exclusive and exhaustive and maintain a ‘watching brief’ for inconsistent observations. More substantial problems arise when well-trained, impartial observers cannot agree. Sometimes agreement is easy; counting pebbles, for example, is very dependable, at least if the numbers are small, but in some cases valorisation is impossible. The number of sand grains on a beach, for example, is more than 0 and less than 10100 but cannot be reduced to a unique integer by any practicable method. Ill-posed valorisation problems are common and natural scientists have developed methods for estimating the uncertainty on estimates of numerical quantities. In these cases, it is often easy to decide whether the signal to noise ratio is operationally significant. If the quantity being valorised is large and the uncertainty small, we feel confident about using them. However, handling the uncertainty on observations like: •

there is a teapot on the table

is problematic and we tend to treat such observations as self-evident facts. This, I believe, is a mistake. The uncertainty on a teapot on the table in front of me is very low, perhaps limitingly close to zero, but many models
require us to valorise a host of categories and the aggregate effect of all these uncertainties on the model’s signal to noise ratio may be operationally significant. Every attribute is valorised in a spatio-temporal envelope that consists of the observer’s current arena and a series of remembered arenas. This envelope is the universe of discourse within which the value exists and is unique. Thus, I can valorise the teapot on the table in a second and write •

Teapots(table; 08:59:59 - 09:00:00) = 1

If I try to valorise the same quality over 12 hours I get •

0 ≤ Teapots(table; 20:30:00 - 08:30:00) ≤ 1

because sometimes there is and sometimes there is not a teapot on the table so the value that describes the attribute Teapots is not unique; it could be 1 or it could be 0. Similarly, if I try to valorise all the teapots in North-West Europe, it is going to take a long time because I must travel everywhere and find every teapot. In the time it takes me to travel round North-West Europe and count them all, some teapots will have been broken and others manufactured. Once again the best estimate I can come up with will be uncertain because the value is not unique. Conversely, if I try to valorise the teapots on the table in a single millisecond or the teapots located in a square millimetre of the table the value does not exist because the universe I have chosen is simply too small, either temporally or spatially. Universes that are too small correspond to values that do not exist. Those that are too large correspond to values that are not unique. Between these bounds the problem of valorising teapots is well-posed. The table before me, for example, examined over an interval of a second or so gives a stable estimate, provided there are not too many teapots on it. The object that is the table itself and the attribute that describes the number of teapots it bears are dimensionally coherent, each with the other: I can valorise both with negligible uncertainty.
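
The point can be made concrete with a short sketch in Python. Everything in it is invented for illustration: a valorisation is treated as a query over a temporal envelope, returning a unique value when the problem is well-posed, a range when the value exists but is not unique, and nothing at all when the envelope is too small to contain an observation.

    # Illustrative sketch only: valorising 'teapots on the table' over a time window.
    # The observations are invented (timestamp in seconds, count of teapots).
    observations = [
        (0,     1),   # 09:00 - one teapot on the table
        (3600,  1),   # 10:00 - still there
        (43200, 0),   # 21:00 - the table has been cleared
    ]

    def valorise(obs, t_start, t_end):
        """Return a unique value, a (min, max) range, or None for the chosen envelope."""
        window = [count for t, count in obs if t_start <= t <= t_end]
        if not window:
            return None                       # envelope too small: the value does not exist
        if len(set(window)) == 1:
            return window[0]                  # well-posed: the value exists and is unique
        return (min(window), max(window))     # ill-posed: the value exists but is not unique

    print(valorise(observations, 0, 60))      # 1      - a one-minute envelope is well-posed
    print(valorise(observations, 0, 86400))   # (0, 1) - a twelve-hour envelope is not unique
    print(valorise(observations, 10, 20))     # None   - a ten-second gap holds no observation

The same skeleton covers the non-numeric case: replace the counts with category labels and the test for uniqueness is unchanged, which is one reason exhaustive, mutually exclusive value lists matter.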


2. Mysticism and Dynamic Systems Some objects may be real and yet not exist in valorisable form. The peace of God, for example, cannot be valorised with negligible uncertainty in any universe known to me, it is a mystical object. Mysticism demands an effort of contemplation and a leap of faith or intuition. We have to develop a metaphysical Jizz that can carry us beyond the limitations of sense, logic and language. Any attempt to probe the reality beyond valorisable experience takes us into mysticism. Later mediaeval and early modern scientists, who focussed on ontological problems, set the boundaries of science to exclude this. Mediaeval science was purely empirical and largely descriptive. However, contemporary science, particularly that which addresses dynamic problems, often strays into mystical territory and sometimes even gets stranded. Some of the concepts of particle physics, for example, are so nebulous and insubstantial we might as well be discussing how many angels can dance on a pin-head. Similarly, the distinction of natural from anthropogenic (unnatural) processes in ecology is insubstantial - the key that identifies the class of anthropogenic processes is also the essence of the class. Sustainable development, too, is a mystical construct that cannot be valorised in any contemporary universe of discourse because we cannot observe unborn stakeholders. In the seventeenth century scientists learned to tackle dynamics and the boundaries of science coincidentally shifted to enclose some (but not all) mystical categories. Newtonian physics, for example, was rooted in mysticism. Newton’s well-documented interest in astrology and alchemy was not an aberration or evidence that he was a missing link connecting the mediaeval world of superstition to the modern world of scientific objectivity. Newton brought mysticism into the mainstream of science. Galileo’s law of inertia (every body stays at rest or in uniform linear motion) was derived by an effort of disciplined contemplation. We, who are familiar with these ideas, have
the Jizz required to appreciate a force as that which violates Galileo’s law. It is difficult to understand how un-empirical they are. Newton knew that the earth rotated around the sun and spun on its axis. A brick sitting on the ground, therefore, was neither stationary nor travelling with uniform, linear motion. Throw it up in the air, it stops, falls back and stays put. Look at the orbiting planets and you see no linear motion. It took Newton years to develop a new branch of algebra that would reconcile Galileo’s speculative metaphysics to this mass of contrary evidence. It is like defining a category of objects that stop milk being purple. The class of forces seems trivial - its key is that all forces perturb linear motion and that, coincidentally, is its essence. You cannot observe a force directly, but infer its existence by looking for deviations from stillness or uniform, linear motion. However, Newton created additional axioms about action and reaction and gravitation that converted this seemingly sterile notion into a fruitful model of physical dynamics. It was like sympathetic magic - invoking the name of a power to manipulate real objects - how could this be science? So when Bishop George Berkeley referred to fluxions (instantaneous rates of change) as “the ghosts of departed quantities” he was appealing to what, for an early modern scientist or a latter-day positivist, was a common sense idea - that science was an ontological endeavour requiring that the categories of which scientists speak be valorisable in their instances. In the later modern period (from the time of Newton onwards, say) this common sense idea was questioned. Forces, for example, are components of a formal model that belongs to the domain of symbolic reasoning. In the empirical world of early modern systematics, we would have had to acknowledge that they do not exist. Consequently, the physicist cannot claim that a billiard ball really is a Newtonian computer calculating lines of least action. Rather s/he simply asserts that, for some purposes, it acts as if it were one. This is rational mysticism - directly comparable to that employed by many theologians. Hardly any mystic seriously
imagines that the universe really is as the nomadic pastoralists of the Bronze Age or the wandering teachers of the Roman period imagined. Rather one understands that, when individuals open their minds to a reality greater than themselves, their lives often change as if they had experienced the peace of God. These mystical structures - the Newtonian process and the peace of God only exist outside the mind of the individual by negotiation and common consent. However, both of them have a certain explanatory power. They allow us to predict observable regularities well enough to provide a dependable basis for co-operative action, even though the objects themselves do not exist in valorisable form. In terms of explanatory power, Newtonian physics was a tour de force. The forces he invoked, coupled with other axioms, closed the system he created by forging a logical connection between the abstract, mystical domain and the domain of valorisable experience. He used the model to make predictions and tested them empirically. They turned out to be astonishingly reliable. The peace of God (an intensely personal experience) has less explanatory power than Newtonian physics, but it still has some. Within many contemplative communities, for example, disciplines of prayer and meditation are used to trigger those experiences and modify behaviour in a remarkably consistent way. To an outsider this may seem like superstitious mumbo-jumbo, but initiates (those who have practised hard enough to develop the Jizz) cheerfully echo Newton’s defence of astrology: “we have studied it, you have not”. They believe, with some justification, that they too have connected the mystical world to the domain of sensory experience. Some have the impression that their ability to do this means they have somehow touched the reality that lies beyond sense experience - an experience they share with many twentieth century physicists.

Explanatory power determines the boundary of later modern science. Scientists will tolerate mystical structures provided explicit rules of symbolic reasoning are respected and those mystical structures re-connect to the domain of valorisable experience. Mystical culs de sac lie beyond the scientific pale. The rules of symbolic manipulation are not graved in stone. Newton created his own rules, twentieth century physicists developed new algebras and geometries to solve new problems, and so can we; but the rules must always be made explicit and open to peer review. This approach, based on the use of abstract, mathematical models, is called ‘formalism’.

Historians of science often observe that the last two centuries have seen increasing dependence on formal methods, particularly in the physical sciences, but also in the social sciences and, to a lesser extent, in the humanities. Some interpret this as evidence that the sciences are moving closer to objective reality - that, as the physicist Sir James Jeans put it, the Great Architect of the Universe is a pure mathematician. If so, we must expect the trend to increased formalisation to be maintained. However, many humanists and socio-natural scientists have sensed the operation of a law of diminishing returns - that the pay-off per unit effort has declined over the last fifty years. Many of the most demanding problems in socio-natural science, for example, resist formal models. If they are right, we should expect the pendulum of scientific formalism to reverse, at least in applied, socio-natural science. Time alone will tell. One observation, however, can be set beyond dispute. Hard scientists have spent fifty years trying to generalise formal methods to softer problem-domains, and been comprehensively defeated by socio-natural complexity. Given our current state of understanding, it would probably be wiser to reserve hard science methods for easy problem-domains in which reality-, value- and operational judgments are uncontested.

3. Dimensional Coherence Classical physicists believed some attributes were everywhere coherent - position and speed, for example, were defined for every object that possessed mass. No matter how small we made our universe of discourse, position and speed existed and were uniquely valorisable for every particle it contained. We now know this is not the case. There is a lower bound below which the valorisation problem is ill-posed and the uncertainty on these observations becomes operationally significant. The evidence that led to this conclusion is recondite, but the basic idea is simple. Imagine a car driven down the road at variable speed so both speed and position are dynamic. Assume there exists a universe of discourse in which the problem of valorising position is well-posed - we can obtain a dependable value for position. To valorise speed we need to know the distance travelled and for this we must subtract a remembered value (the position a short time ago) from the current position. Then we divide the distance travelled by elapsed time to compute speed. Since the car is not stationary, we know the speed cannot be zero, so the distance travelled in some time interval must be measurably greater than zero. Clearly, the two positions must be appreciably different. The universe of discourse in which position returns a unique value must, therefore, be significantly smaller (at least along the temporal axis) than that in which a dependable estimate of speed is obtained because position has actually changed while we were valorising speed. Evidently, speed and position cannot be valorised in the same universe of discourse. In a universe where position can be valorised as a unique value, speed is not defined; in a universe where speed can be valorised uniquely, position is not unique. When this is so we say that the attributes involved are dimensionally incoherent. Position is an attribute of a system (the car) but speed is an attribute of the process that changes the value of position (the process of driving the car). If we can define a universe in which a system is dimensionally coherent (all its sub-
systems and the values of their attributes can be valorised) any process that transforms those values must occupy a universe that is significantly larger along the temporal axis. Dynamic systems and the processes that operate on them are dimensionally incoherent. A similar argument suggests that epiphanies and processes are dimensionally incoherent. Indeed, epiphanies operate on larger time-scales than the dynamic systems whose processes they transform and processes operate on larger time scales than the values of system attributes. The effect of this is that, even in an ideal world where the ontological problem was demonstrably well-posed, the shift to system dynamics would impose an irreducible uncertainty on the result. The investigation of historical processes and epiphanies must further amplify those uncertainties. We cannot simply assume that these uncertainties are operationally insignificant - we have to check. It is possible (in my experience, commonplace) that the dimensional incoherence of many system models is so extreme that they provide no operational leverage. Every systems model postulates the existence of a spatiotemporal universe in which all the objects it contains are simultaneously manifest. If that model contains both systems and processes or processes and epiphanies we can be confident that it is dimensionally incoherent - the valorisation problem is ill-posed. However, that does not stop us theorising that the valorisation problem is almost well-posed - that the level of uncertainty is operationally insignificant. We test that theory by setting boundary conditions on the model’s predictions. The boundary conditions must be satisfied in order to justify using a given ontology of systems, attributes, processes and/or epiphanies. If boundary conditions are violated, the objects cannot be valorised within operational limits and the theory is refuted. Sometimes we express boundary conditions in terms of the future state of a system (the approach of the experimental sciences), but equally often we use the model to ‘postdict’ - to deduce one observed state from another. This a common strategy in the observational sciences.
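
Before turning to an example of postdiction, it is worth putting rough numbers on the car illustration above. The sketch below is purely illustrative - the speed and the measurement noise are invented - but it shows why the envelope that yields a dependable speed must be wide enough for the position itself to have ceased to be unique.

    # Illustrative sketch: estimating speed from two position readings dt seconds apart.
    # true_speed and position_noise are invented numbers; only the trade-off matters.
    true_speed = 10.0        # metres per second
    position_noise = 0.5     # uncertainty on any single position reading, in metres

    def speed_estimate(dt):
        """Return the speed estimate and its worst-case uncertainty for a window of dt seconds."""
        distance = true_speed * dt                 # how far the car moves inside the window
        uncertainty = 2 * position_noise / dt      # two noisy readings, divided by elapsed time
        return distance / dt, uncertainty

    for dt in (0.01, 0.1, 1.0, 10.0):
        v, err = speed_estimate(dt)
        print(f"window {dt:5.2f} s: speed ~ {v:4.1f} m/s, uncertainty ~ +/- {err:6.1f} m/s")

    # A 0.01 s window pins the position down (the car barely moves) but the speed
    # carries +/- 100 m/s of noise; a 10 s window gives a clean speed, but 'the'
    # position of the car is then spread over a hundred metres.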


An archaeological model, for example, might be constructed to predict the contents of a rubbish dump from inferences about the subsistence behaviour of extinct societies and the formation processes that determine modern patterns of refuse deposition and decay. If the model predicts the observed outcome (or comes close enough to be thought satisfactory) the theory that a universe of discourse exists within which ancient subsistence behaviour can still be valorised is sustained. If it fails to do so, then the theory cannot be defended in its current form. The connection between these ideas and that of testing propositions in logic is clear, but there are two significant differences: 1. The theory is not declared true or false - the objects either exist (can be valorised within operational limits) or do not exist. There may be two or more competing theories equally defensible within those limits or (more commonly) two or more communities arguing passionately about the merits of rival ontologies each using boundary conditions that preserve their pet theory and exclude their opponents’ offering. 2. The criteria for empirical testing are not objective, but operational and socially constructed. It is for a community to decide how close a prediction must be to be close enough. In the hard sciences this is often very easy - either the probe lands on Mars or does not. In the softer sciences it can be very demanding. Social scientists and politicians often confront problems of dimensional incoherence. In economics, for example, the fundamental unit of value is money. Money is highly dynamic; its value is altered by inflation and market fluctuations. The stocks and flows of money held by an institution can swing wildly on a daily basis. Money can only be valorised in a very narrow temporal envelope. The spatial component of the monetary universe is narrow too because the purchasing power of a currency unit varies from place to place depending on the local economic milieu.


Monetary value exists in a small universe of discourse. Any attempt to put a monetary value on a process, therefore, requires us to reduce the process to a stream of products each of which has a monetary value - as though the reason lovers eat together is that food is cheaper purchased in bulk. This sounds (and sometimes is) ridiculous, but for many purposes, it is a useful approach. It is indispensable, for example, in commerce, where the process under investigation is the generation and exploitation of money. However, in purely economic terms a child in the third world has less value than my own child because the amount of money that child produces or expends is rather small. To treat this as objective because it reduces a human life to a number, would be to set aside the value judgments on which western society and law is founded. This dimensional difficulty cannot be overcome by putting in a fudge-factor to remove this ‘bias’ by making the value of one child’s life equal to that of the other. That would imply that there was no difference in value between, say, the life of a generous and gifted politician and a vicious despot. Differences in the value of lives are palpable. A child enjoys opportunities and choices only partly determined by relative wealth. Experience predisposes her to certain lifeways and those predispositions impact on other people to enhance their life-experience or not. The value of a life lies in those processes. So the problem we face is not that the difference does not exist; it is that the spatio-temporal universe containing a set of human lives is so large and that of money so small. In a universe where monetary value is uniquely valorisable the process that is a human life simply does not exist in valorisable form. In a universe where lives can be valorised, monetary values are not uniquely observable. Moreover, the signal to noise ratio on the monetary value of a human life is unacceptably low because the method can give a higher value to the life of a murderous gangster than to a village full of innocent peasants. Clearly, a purely economic model cannot serve as a basis for policy-making in respect of third world poverty.


3. The Science of the Unreal Later modern science differed from ancient science in its ability to move from static, ontological problems to dynamics and its tolerance of abstract, mystical categories (particularly processes) provided the models re-connect to the world of valorisable observation. A formal scientific model must explain some dimensionally coherent regularity. Of course, scientific language contains many metaphors that have no explanatory power. I have already indicated some of them. Here are a few more. Meme theory - the idea that our brains are infected with cognitive viruses that determine our beliefs and ideas - persists even though all its valorisable corollaries can be deduced from alternative models. The concepts of hypnosis and self-hypnosis, among many of the metaphors of psychology are mystical objects with almost no explanatory power. Hypnosis is a metaphor adopted by nineteenth century researchers keen to explore mystical experiences, but reluctant to use mystical language. The boundary conditions on these metaphors are very lax. Even professional scientists explore mystical culs de sac from time to time. It hardly matters as long as these metaphors seem empirically and ethically neutral. Scientists are humans, after all, and cannot be expected to scrub their language obsessively to remove all traces of metaphor. Some of the most important culs de sac explored in this book are in so-called ‘General System Theory’ - the thesis that the social and natural worlds are full of (hard) systems consisting of sub-systems, attributes and time-invariant processes. General System Theory has been refuted by empirical evidence. The genus of hard systems as a whole cannot be valorised in its instances because you cannot go out into the world and say of everything you encounter: “yes, this is a hard system” or “no, it is not a hard system”. There are many equivocal cases. The genus of hard systems is ontologically indefensible - it does not exist. However, it is possible, particularly in the natural sciences and engineering, to define species of hard system that can be valorised - the set of solar systems, for example, or the
set of transistor radios. We can develop general models of these species, predict their behaviour and sometimes manipulate them to achieve certain effects. Moreover, the methods we apply to one set of hard systems are often generalizable to another. Indeed, a latter-day Platonist could argue that the species of hard system exist (they can be valorised in their instances) but that the genus of hard systems, although real, does not exist. It is a formal, mathematical category of objects with attributes, processes and possibly sub-systems to which certain abstract methods can be applied. By this conception General System Theory is an exercise in rational metaphysics. Although natural scientists and engineers may be offended by my suggestion that they deal in metaphysics and mystical objects, many of them are motivated by a desire to understand reality and so, in the strict definition of the word, are undoubtedly engaged in metaphysical research. Indeed, methodologically speaking, there are many advantages to the metaphysical approach. The assertion that a system or process is real is a theory constructed purely ‘for the sake of argument’. That theory can usually be made empirically testable and will stand or fall on its valorisable corollaries. If the categories were real and correctly characterised, then the corollaries of the theory will necessarily be true. If, however, the categories were not real (perhaps, like ‘credit ratings’ or ‘good humour’ they are socially constructed) then Jonah’s law intervenes. The objects may be characterised correctly and yet the corollaries will only possibly be true. Soft science is logically weaker than hard science, not because soft scientists are incompetent, but because the assumption of reality is never justified and so cannot even be defended as a working hypothesis. Cultural ecodynamics is the science of the unreal (i.e. socially constructed) world in which the epiphanies that change our knowledge of socio-natural systems often change the systems to which that knowledge refers. We cannot reduce the work to a logical puzzle using hypothetico-deductive method because there are no
necessary corollaries (by Jonah’s law). Moreover, we must keep the ethical implications of those reality judgments under continual review because the assertion that a system, attribute or process exists is itself a form of intervention that can disadvantage stakeholders by changing their social milieu. Classical, hard science method is as poorly equipped to handle possible cause and elective change, as mediaeval science was to handle dynamics. We need to embrace new conceptual structures and change the way we do business. Later modern thinkers pushed the boundaries of science to include objects like processes that did not exist in the domain of sensory experience. Perhaps we can do the same again - this time to accommodate objects that appear to exist but which are obviously not real. If I am right to believe that early modern science, by excluding abstract, mystical structures, dealt with an ontological subset of later modern science, it seems natural to explore the possibility of extending this process of generalisation. It would be unfortunate indeed if we could only apply science to the domain of human knowledge and belief by discarding the manifest advantages of ontological and dynamic approaches. The new science of cultural ecodynamics must combine the most useful insights from the ontological research of late mediaeval and early modern periods with the powerful explanatory tools used by later modern metaphysical scientists, while accommodating the self-referential codynamics of socially constructed experience. This is not going to be easy, but the potential benefits are enormous.

4. Hard Science, Soft Science Conventional (hard science) approaches to dynamics focus on the idea of attributes whose values are transformed by a time-invariant process. The set of dynamic values manifest in some universe describes the system’s state. The process is the cause of change and the new state is effect. To natural scientists this is just common sense, but to a soft scientist it is less familiar. In social systems we often have to deal with epiphanies that transform the essence of a category of system (as calculus transformed physics and the credit card transformed commerce). Just as every observation of a process is inferred by comparing observations of states made at different times, so every observation of an epiphany requires us to compare the essential attributes of systems before and after. To do this we have to locate the system in an appropriate universe of discourse. This suggests that conventional hard-science and softscience models may occupy different universes of discourse. We may not be able simply to stick them together, because many of the objects manifest in one sub-model will not be valorisable in the spatio-temporal universe occupied by the other. If we want to integrate the hard and soft sciences, we need to think about dimensional coherence on a case by case basis. As I explained earlier, the question we must address is not: are hard and soft systems dimensionally incoherent? but: is the level of uncertainty engendered by this dimensional incoherence operationally significant? In many cases, the answer is an emphatic ‘yes!’ and attempts to cross the hard-soft boundary are frustrated. Hard science deals with systems that are ontologically robust; many of their attributes and processes are invariant under human transformation. This makes it easy to negotiate a consensus about what the system is and how to describe and measure it. Hard science method is exogenous, often using experimental or observational methods to investigate systems from the outside. The hard sciences lend themselves to engineering and technological applications
directed towards well-specified products. Hard scientists often use abstract, formal models that require the definitions of systems, attributes and processes to be time-invariant. Soft science deals with systems that are transient artefacts of the predispositions and actions of biological organisms (including humans). Consensus is harder to achieve and ontological judgments may change in the course of a research project in ways that are inimical to hard science method. Soft science method is endogenous, qualitative and heuristic. Participant observation is commonly used to understand systems from the inside. When the soft sciences are applied, their natural focus is on synergy, interaction and the relationship between epiphanies and processes. They are indispensable for managing social and natural lifesupport systems, where our continued well-being requires us to service the fragile consensus that sustains access to essential resources. Soft science provides few options for deploying abstract, formal models because the systems, attributes and processes they describe are highly subjective and certainly not time-invariant. In general, hard science approaches to dynamics define a system, assign a set of attributes to it and invoke a timeinvariant process that transforms the values of those attributes (the system state). This means that modelling the transformation of a church into a shopping mall, for example, requires us to invoke a single process capable of sustaining two or more types of configuration. These metastable systems are such that the basins of attraction representing a place of worship and a shopping mall are logically entailed in the process itself. The only thing that would have prevented a nineteenth century curate recognising the existence of the second basin would be his failure to appreciate the true nature of the system. In principle the shopping mall could have been foreseen. In the soft sciences, however, the attribute space that characterises the system is dynamic. Attributes are created, modified and annihilated through time. The trigger for this change is an epiphany that creates a qualitatively new type of system. Note that the epiphany and the class of systems it
transforms must be valorised in different universes. Nineteenth century biology, for example, saw the perfection of the light microscope and the possibility of long-distance voyages of exploration. If we contrast biology both before and after these innovations, we can valorise epiphanies that gradually changed the way biologists do business. These changes, in turn, laid the foundations of biogeography, ecology and epidemiology and paved the way for biochemistry and genetics. If we consider biology before these developments, we valorise one set of systems, attributes and processes. If we consider biology after, we see another, quite distinctive configuration. The epiphanies do not exist in the two smaller universes. The distinction of hard from soft science defines two coherent and persistent types of knowledge community. This is why archaeology and geography departments continually split across the fault-line between humanistic and biophysical approaches. One community focuses on natural systems and objects that would exist even if we weren’t thinking about them. In these, process is cause and state is effect. The other focuses on artificial systems whose very existence is socially constructed. In these, epiphany is cause and system (not just the system’s states, the essence of a whole class of system) is the effect. As we saw in the previous essay, causal mechanisms are fundamentally different in the hard and soft sciences. In classical applications of mathematical dynamics (Newtonian physics, for example) one has to deal with deterministic change and necessary cause. In the social sciences, however, one has to deal with possible cause because of the effects of Jonah’s law. The exploitation of cereals in the early Holocene, for example, made it possible to develop agriculture and urban civilisation, but did not make it inevitable. Cultural ecodynamics demands a new approach to empirical testing. Here poor agreement with empirical data does not necessarily refute a theory and goodness of fit can, under some circumstances, actually invalidate it by suggesting it is it too sensitive to particular accidents of history.
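
Looking back at the hard-science framing for a moment, the idea that both basins of attraction are entailed in a single, time-invariant process can be caricatured in a few lines. The update rule below is invented purely for illustration; it has stable states near -1 and +1, and which one a trajectory reaches depends only on where it starts.

    # Illustrative toy model: one time-invariant process, two basins of attraction.
    # The update rule x -> x + 0.1 * (x - x**3) has stable fixed points at -1 and +1.
    def step(x):
        return x + 0.1 * (x - x**3)

    def attractor(x, n_steps=200):
        for _ in range(n_steps):
            x = step(x)
        return round(x, 3)

    for start in (-0.9, -0.1, 0.1, 0.9):
        print(f"start {start:+.1f} -> settles near {attractor(start):+.3f}")

    # Starts to the left of zero settle near -1, starts to the right near +1.
    # Both end-states are implicit in the one process - the sense in which a
    # hard-science model can, in principle, 'foresee' the alternative configuration.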


5. Reality Judgments in Hard and Soft Science Contemporary system method requires us to resolve experience into three categories, systems, attributes and processes. Processes transform accidental (i.e. inessential) values. To these I have added a fourth category, the epiphanies that transform the essence of a class of systems. Resolving experience into those categories is not objective, but pragmatic. By that I do not mean it is a work of fiction; it is a genuine judgment based on experience and purpose. If I wanted to study the mechanics of a particle, for example, I would define the particle as a thing, its mass and position as attributes of that thing and its velocity as a process that transforms its position. If, however, I were studying thermodynamics I would define an assembly of particles as a thing, give it attributes like temperature, pressure and entropy and define processes that transform them. Common sense suggests a relationship between the position, mass and velocity of the particles and the pressure, temperature and velocity of the ensemble of particles, but common sense may steer us false. You cannot measure the temperature, pressure and entropy of a single hydrogen molecule. Indeed, you cannot even measure its position and velocity at the same time without building operationally significant levels of uncertainty into the model.

This problem recurs in every science. You can build a very good model of forest succession using a pair of callipers to measure diameter at breast height, but you cannot get the grasses into this model because they don’t grow to breast height. You can model grassland in terms of percentage cover, but forests tend to have several storeys of vegetation and these ‘percentages’ commonly exceed 100. Defensible reality judgments have a spatio-temporal universe over which they are defined within an acceptable level of uncertainty. Over smaller scales the objects simply do not exist. Over larger scales they are not uniquely defined because the values of some of their attributes may change. We can observe a forest from a helicopter or a satellite and draw the tree-line on a map. On foot, however, all we see are isolated trees and drawing a line that delimits the forest from the alpine pasture is much harder - in some cases, impossible. Over a decade or a century individual trees may die or grow and the tree-line may move. Beyond a constrained range of universes the tree-line cannot be modelled as a dimensionally coherent, recognisable object.

Sometimes, when the objects we are dealing with can be represented by numerical attributes of a recognisable system, we can express the uncertainties quantitatively and use statistical arguments to determine whether the uncertainty on those observations is operationally significant. However, in most cases this is impracticable. The uncertainty on an observation of a market crash, for example, cannot always be operationalised in quantitative form. Indeed, in many socio-natural contexts these differences of perception among observers may be very significant because different people react differently to the same evidence. We now turn to these types of circumstance.
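
A toy illustration of the tree-line example may help before moving on. The grid below is invented: 1 marks a cell with trees, 0 an open cell. At the coarse resolution of half-rows the forested region can be delimited cleanly; at the resolution of single cells, stray trees and clearings make the ‘tree-line’ an unrecoverable object.

    # Illustrative sketch: a 'tree-line' that is only recognisable at coarse resolution.
    # The landscape is an invented 8 x 8 grid; 1 = trees present, 0 = open ground.
    landscape = [
        [1, 1, 1, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0, 0, 0, 0],
        [1, 1, 1, 0, 1, 0, 0, 0],
        [1, 1, 1, 1, 0, 0, 0, 0],
        [1, 1, 0, 1, 0, 0, 1, 0],
        [1, 1, 1, 1, 1, 0, 0, 0],
        [1, 1, 1, 0, 0, 0, 0, 0],
        [1, 1, 1, 1, 0, 0, 0, 0],
    ]

    def classify(block):
        """Call a block 'forest' if most cells carry trees, 'open' if few do."""
        cover = sum(block) / len(block)
        return "forest" if cover > 0.6 else "open" if cover < 0.4 else "equivocal"

    # Coarse universe: split each row into a western and an eastern half.
    for row in landscape:
        print(classify(row[:4]), classify(row[4:]))

    # Every row reads 'forest open', so a line can be drawn between the halves.
    # At single-cell resolution each cell is unambiguously treed or open, but no
    # line separates forest from pasture without stranding lone trees and
    # clearings on the wrong side - the tree-line cannot be valorised there.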

6. Dimensional Coherence and Boundaries Many hard scientists resist the idea that reality judgments are informed by their own values. They confound the notion of value (that which gives direction and purpose to action) with morality and ethics. Value judgments only have an ethical component if they are contested. The soft sciences, however, deal with socially constructed experiences and consequently must be prepared to resolve arguments about the ethical implications of reality judgments. Integrative research must reconcile hard and soft viewpoints and the idea that hard science is value-neutral often gets in the way. To give an extreme example, suppose I wanted to explore the link between tobacco and lung cancer. I might partition the population into those who did and did not smoke and those who did and did not contract lung cancer. We can valorise smoking and lung cancer rates as numbers. These are good natural science observables. There are other factors that could contribute, of course, but I bound the system so as to exclude them and simply investigate the relationship between these valued (i.e. interesting) factors. It is possible, of course, that smoking is autocorrelated with educational background, income and employment. If I were to widen the system boundary to include those factors too, the problem would become more complex. Is it poor education that causes smoking, or is it the culture of people with low incomes that causes them to reject educational opportunities? Should doctors urge their patients not to smoke or governments to provide better education? How do you make education better (i.e. more effective) in this respect? Is the government ethically justified in over-ruling cultural predispositions by forcing young people into an educational system they despise? Is it possible to do this? We cannot design a study of this sort by reducing everything to a handful of numbers and saying that these are real and value-neutral while cultural background and ethics are socially constructed. If we want to stop people dying prematurely, we have to pull our heads out of the hole and accept that socially constructed experiences exist too.


An effective approach to these dilemmas, based on the work of C West Churchman is the method of boundary critique. The reality judgments that characterise a model system not only determine the focus of research, they can create social exclusion by putting some stakeholders beyond the pale, as it were. Thus boundary judgments can reify the tacit values of researchers, dividing a socio-natural domain into ‘sacred’ and ‘profane’ subsets analogous to the distinction of deserving from undeserving poor that George Bernard Shaw put into the mouth of his creation Alfred Doolittle. The guiding principle of boundary critique is to be as inclusive as possible and maintain a watching brief on all reality judgments. If their effect is to violate ethical norms, practitioners of boundary critique sound a warning. However, this continued scrutiny can be highly disruptive and that disruption can also create social exclusion. Executive action takes time and continual re-evaluation is disruptive and expensive. We need a way of monitoring reality judgments that does not paralyse executive action. Vickers was interested in system regulation and this led him to make a useful distinction. Consider a driving game in which the player tries to keep a car on a simulated racetrack. When speeds are low the player concentrates on staying close to the middle of the track. This is regulation by norm. As the road twists and the simulated car accelerates, the driver struggles to keep the car inside the white lines. This is regulation by threshold avoidance. Regulation by threshold avoidance is rather useful. It can be used to make connections between boundary conditions and dynamic processes. The idea of boundary conditions originated in the study of differential equations where some terms must be designated parameters (fixed quantities) to render a set of equations solvable. The connection with physical boundaries is clear in domains like thermodynamics, physiology or climate modelling where they usually refer to energy and information flows. The trick is to explore the stability of solutions by means of thought experiments in which the values of those parameters are perturbed or allowed to vary within thresholds set by the
boundary conditions. It is an old systems joke that the first thing you do after turning a variable into a parameter is to vary it. Yet the method is very powerful. I have tried to generalise it to softer science contexts by suggesting we can impose boundary conditions on all reality judgments. This replaces the logical switch (every proposition is objectively true or false) with a socially constructed test that determines whether the signal to noise ratio is high enough for the model to be defensible. The theory we are investigating is that there exists an accessible universe of discourse within which all the objects we have invoked are simultaneously observable, subject to an operationally acceptable level of uncertainty. If our experience satisfies the boundary conditions, the reality judgments are sustained. If it does not they must be revised. To borrow the classical textbook example, if I believe that all swans are white, that the object at my feet is a swan and that it is black, I can be sure that at least one of my reality judgments is suspect in the context of this particular model. There exists no universe of discourse in which the set of all swans, the swan at my feet and blackness (as I understand them) can be valorised simultaneously. In the case of swans, the judgment is ethically neutral, but in more taxing circumstances there may be ethical fallout. Competitive, sustainable growth, for example, is highly valued by European governments. The policies we build in respect of this create conflicts of interest within society. A substantial group of environmentalists believe competitive, sustainable growth simply does not exist and that to pretend that it does is to aggravate environmental degradation. We cannot reconcile these reality judgments and neither can we arrest all executive action while people settle their differences. By then it will be too late. So we negotiate boundary conditions - thresholds for policy compliance and system health - that can be used to monitor the executive process. If they are violated, we need to ask ourselves whether the object ‘competitive, sustainable growth’ can actually be valorised. We have to re-think. Regulation by threshold avoidance, although a lousy way to
drive a car, is a very useful managerial and scientific tool because it allows normative action to proceed while keeping the appreciative system open to unforeseen and possibly undesirable consequences. Where research has potential impacts on livelihoods and individual well-being, a continuing process of appreciation is needed to ensure that reality judgments remain dimensionally coherent and ethically defensible. If boundary conditions are lax, the beliefs to which they relate are resistant to change. If they are stringent, the beliefs are provisional and easily refuted. Many beliefs are tacit. Tacit beliefs with lax boundary conditions are culturally embedded; they represent judgements we never question. Any attempt to challenge the cultural beliefs and habits of a community is usually met with derision, hostility or embarrassment. (Soft) science is purposeful knowledge creation in which boundary conditions on some beliefs are explicitly resolved into two groups. Beliefs with lax boundary conditions relate to axioms and, in most cases, to the formal rules of symbolic manipulation. Beliefs with stringent boundary conditions relate to conjectures. We use empirical evidence to test the dimensional coherence of the conjectures, given the assumption that the axioms are true and our analytical methods valid. Postscript: Before we leave this topic, there is one philosophical issue I would like to touch. Thomas Kuhn’s work on paradigm shifts and the Phoenix Cycle in science led him to distinguish normal from revolutionary science. Normal science conserves axioms to focus on conjectures. Revolutionary science abandons or modifies axioms. From time to time scientists and humanists argue about whether this epiphany or that (perhaps the discovery of quantum mechanics or the publication of the Darwin-Wallace paper) was, or was not, a scientific revolution.


The Kuhnian paradigm shift, like any other observable object, has its characteristic spatio-temporal universe. It is relatively easy to see if we stand far enough away from individual agency. However, it is much harder to sustain on close examination because, at the individual level, axioms are resistant to change by definition. My own experience of integrative research suggests that innovation, viewed at close hand, seldom presents as a revolution but as a schism. An innovation jolt causes one group (perhaps even one individual) to re-classify an axiom as a conjecture. Once the necessary conceptual adjustment is made, both groups continue doing normal science as defined by their respective beliefs and divergence occurs quite gently. Those involved in research from day to day are much more sensitive to continuity than to change1. The paradigm shift can only be valorised with hindsight as we recognise that, after an initial period of debate, one group succeeded spectacularly whilst the other dwindled into obscurity. By the time people start to proclaim the revolution, the innovation that triggered it is long past and someone is trying to consolidate a competitive advantage by reifying the new ontology. Eric Higgs was right; science is very like selling soap, and words like ‘new’, ‘improved’, ‘progress’, ‘inter-disciplinary’ and ‘revolutionary’ are over-worked.

1. This is why students and early stage researchers should keep a personal research notebook. By reviewing that record at the end of a project one can valorise old beliefs and compare them with new ones. This, in turn, builds confidence in the research process as a tool for creating new knowledge.
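
Returning to regulation by threshold avoidance, a minimal sketch may make the idea concrete. The indicators and limits below are invented; the point is only that normative action carries on until a negotiated boundary condition is violated, at which moment the underlying reality judgments are flagged for review rather than quietly patched.

    # Illustrative sketch: regulation by threshold avoidance.
    # Indicator names and thresholds are invented for the purpose of the example.
    thresholds = {
        "groundwater_level_m": (2.0, None),    # must stay above 2 metres
        "abstraction_Ml_day":  (None, 150.0),  # must stay below 150 Ml per day
    }

    def violated(observation):
        """Return the boundary conditions violated by a single observation."""
        problems = []
        for name, (low, high) in thresholds.items():
            value = observation[name]
            if low is not None and value < low:
                problems.append(f"{name} fell below {low}")
            if high is not None and value > high:
                problems.append(f"{name} rose above {high}")
        return problems

    monitoring_record = [
        {"groundwater_level_m": 3.1, "abstraction_Ml_day": 120.0},
        {"groundwater_level_m": 2.4, "abstraction_Ml_day": 147.0},
        {"groundwater_level_m": 1.8, "abstraction_Ml_day": 149.0},  # hypothetical breach
    ]

    for obs in monitoring_record:
        problems = violated(obs)
        if problems:
            print("Boundary conditions violated - re-examine the reality judgments:", problems)
            break
        print("Within thresholds - normative action continues.")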


7. Systems Modelling A model is a reduction of a substantive domain onto a symbolic domain. The marks on this paper are not my thoughts but a reduction of those thoughts onto the symbolic domain of language and a further reduction of that language onto the symbolic domain of letters. You cannot apprehend my thoughts directly, nor yet even hear my voice and watch my gestures, but we can communicate. Every model is contingent on knowledge (shared beliefs). At least three types of belief are required: beliefs about reality, beliefs about values and operational beliefs. In formal models some of these beliefs are based on an explicit effort of judgment. Reality judgements refer to the systems, attributes, processes and epiphanies that exist (or could exist) in the substantive domain. Value judgements refer to aims, social norms, desires and purposes. Operational judgments relate to the actions and instruments we use to promote valued processes. These judgments collectively determine the model’s ontology and, in conjunction with sensory limitations, the configuration of the arena from which information is valorised. A human’s interaction with an arena is modelled in the diagram overleaf. Each arrow represents a reduction. The arrow reducing the arena onto the substantive domain represents the input of sensory data. The substantive domain is not necessarily an objectively real category of thing. It can be the algebraic domain of a pure mathematician or the metaphysical domain of a theologian. You can think of it as the ontological focus of the model – an artefact of that person’s values, if you will. These data must be filtered into the symbolic domain as meaningful information. Just as the substantive domain is not necessarily concrete, so the symbolic domain is not necessarily abstract. A tally-stick used for counting sheep will do. However, in every case there are rules of symbolic reasoning to determine how objects in the abstract domain (words, numbers, notches on the stick) can be manipulated. These rules fix the range of manipulations that are possible,
but never determine the way the symbolic domain will actually be manipulated, as the rules of grammar do not determine what we use them to say.

The symbolic domain is manipulated purposefully, and the process of manipulation is determined, in part, by value, reality and operational judgements. To solve an equation, for example, we must accept that the constraints of symbolic reasoning exist (we cannot ignore them), that the solution itself is intrinsically valuable and have a strategy for exploiting the rules of algebra to obtain the required result. That strategy usually involves the use of logical symmetry. The information is articulated with prior knowledge and re-expressed in a more readily interpretable form. For example, the observation that a child is crying may be articulated with the knowledge that hungry children often cry, and that crying denotes unhappiness. A coherent response follows immediately from one’s values.

Among those value judgements, of course, is the desire to interact socially and to achieve congruence between belief systems. As our roles and beliefs change, we adopt new responses and valorisation functions and abandon old ones so that our desire to interact may be expressed in new responses to similar situations. Sometimes the most obvious action from one perspective is, from another, socially or operationally inappropriate. The symbolic domain filters the information and feeds it into the box labelled ‘Action’ where it becomes manifest as a response - we answer a question, move a chair, feed the child,… These actions change the arena and those changes are re-valorised and re-processed.

Every arrow on this diagram represents a reduction, an act of simplification that fine-tunes normative behaviour into the sequence of half-reflexes that are praxis. Note, however, the dotted box labelled ‘Boundary Conditions’ that maintains a watching brief on information flowing from the substantive to the symbolic domain. If observations in the arena do not conform to expectations implicit in the model, the boundary conditions are violated. A warning message is transmitted that the model is dimensionally incoherent. This is an innovation jolt - the epiphany that switches from normative to adaptive behaviour. Replacing a model need not be traumatic - perhaps there are no mushrooms to pick, so you adapt by looking for blackberries. However, it always requires a decisive effort of re-configuration. If an appropriate model already exists in the memory of the person involved, it can be put in place immediately as s/he switches from Plan A to Plan B. However, if no such model exists, a substantial effort of reconceptualisation may be required. S/he may need to develop new valorisation functions, form new reality-, valueand operational judgements, undertake an onerous process of symbolic manipulation and then test the result by a process of trial and error until a satisfactory programme of action is found. This is the time when social relations become strained and co-operation is compromised.


The amount of resistance to an innovation jolt is partly determined by the availability of a Plan B, and partly by the recipient’s appreciative setting, particularly age and social capital. People in marginal circumstances sometimes endure dreadful hardship rather than innovate under duress because their survival is determined by communal solidarity. At the other end of the spectrum, community elders whose high status is underpinned by culturally embedded beliefs also resist innovation jolts. Receptivity is determined by how much you value what you stand to lose.
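
The loop sketched in this essay can be caricatured in a few lines of code. Everything here is schematic and invented: the arena is a list of things encountered, valorisation is a trivial filter, and a single boundary condition triggers the switch from Plan A to Plan B - the mushrooms-and-blackberries case described above.

    # Schematic caricature of the appreciative loop: valorise the arena, check the
    # boundary conditions, act normatively, and switch plans on an innovation jolt.
    def valorise(arena):
        """Reduce the arena to the one attribute this particular model cares about."""
        return {"mushrooms_found": arena.count("mushroom")}

    def boundary_ok(info):
        # The model's tacit expectation (invented): mushrooms are there to be picked.
        return info["mushrooms_found"] > 0

    def plan_a(info):
        return f"pick {info['mushrooms_found']} mushrooms"

    def plan_b(info):
        return "look for blackberries instead"

    arenas = [
        ["mushroom", "mushroom", "grass"],
        ["grass", "bramble", "bramble"],   # no mushrooms: the boundary condition fails
    ]

    for arena in arenas:
        info = valorise(arena)
        if boundary_ok(info):
            print("normative action:", plan_a(info))
        else:
            print("innovation jolt ->", plan_b(info))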

Section 5: Reasoning a fortiori

Primitive man was able to hunt and fish as he required without impoverishing or modifying his environment. He imposed no undue pressures on the natural communities of which he formed a part. Civilized man, however, from the very outset effected changes disproportionate to his stature and his needs. He altered the vegetation by means of fire and by other means so that he could maintain stocks of animals and plants capable of supporting his ever increasing numbers.
DJ Crisp

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.
David Hume

Probability pertains to opinion, where there is no clear concept of evidence. Hence ‘probability’ had to mean something other than evidential support. It indicated approval or acceptability by intelligent people.
Ian Hacking

Orientation

Contemporary science has two creation myths. The Renaissance myth sees it emerging from the revival of sceptical humanism in the fifteenth century. The Enlightenment myth links it to the Protestant Reformation and it is this myth that equates science with mathematics and value-neutrality. In general, the Enlightenment myth is more prevalent in the Protestant countries of north-west Europe and in the U.S. where anti-metaphysical polemic is part of national identity. The Renaissance myth has more currency among cultural Catholics. The anthropologist Ernest Gellner probably had this pattern in mind when he joked about the ‘Franco-Prussian axis’ of social science theory.


The Enlightenment myth places the origin of the modern period in the seventeenth century, neatly excluding all mediaeval thinkers - ancients and moderns alike. The Renaissance myth sets it three centuries earlier, thereby making room for some late mediaeval thinkers. Neither is convincing, nor yet completely baseless. The early and late modern approaches to science are very different. Peter Abelard, writing in the twelfth century, would have sympathised with the early modern Ockham but Ockham would not have understood Newton or Galileo. It is hard to argue for continuity of thought between them. However, the implication that every mediaeval thinker was a dogmatic realist or, like Roger Bacon, a genius centuries ahead of his time is crass. The historical evidence is seldom as clear as methodologists (including yours truly) imply. Creation myths are not mere cultural archetypes we can nod to whenever we want to motivate an uncritical audience - they are often the source of the tacit beliefs that create integrative problems. We, who are committed to knowledge integration, must either set them aside or admit defeat. My first two quotations are chosen to illustrate the way scientific creation myths emerge. It is perhaps unfair to single Crisp out because this is only one of dozens, perhaps hundreds of similar passages. However, it illustrates the way scientists ignore historical evidence in their polemics. Crisp probably knew nothing about ‘primitive man’ that could not be gleaned from pollen maps and certainly did not expect to be pilloried by an environmental archaeologist:

Crisp, DJ (1962) in Crisp, DJ (ed.) Grazing in Terrestrial and Marine Environments. Oxford: British Ecological Society.

The Scottish philosopher David Hume was an author of the Enlightenment myth. This is a famous passage from:

Hume, D (1748) An Enquiry Concerning Human Understanding. (Section 5, Part 1)

The third is from my source for the history of probability:

Hacking, I (1975) The Emergence of Probability. Cambridge: Cambridge University Press. p.22


I have tried to avoid mathematics, but here and in Section 8 I push the envelope. Here I introduce some mathematical notation, but derive almost all the results by looking at pictures. It only contains the elementary stuff children are taught by about thirteen - please persevere. I am building bridges between hard and soft sciences. You need not go deep into unfamiliar territory, but it is necessary to nip across the bridge at least once and have a good look at the view. You can come straight back if you don’t like it. Sir Karl Popper would dispute this section. He distinguished the probability associated with a proposition from that associated with an event - implying they were of different logical type. The event is real, i.e. independent of human knowledge and belief. Propositions belong to the domain of knowledge. Although I admire Popper’s work, this is a weak attempt to shore up the nineteenth century idea that science is an exploration of objective reality. Scientific observations are recorded as propositions (data) which, in turn, imply the existence of at least one socially constructed category. There are no events in a database and without data one cannot valorise the probability. Assuming that the event exists, it and the probability occupy different universes and cannot be valorised together - they are incommensurable. Popper, K (1959) The Logic of Scientific Discovery. London: Hutchinson. My favourite probability textbook is short and sweet: Arthurs, AM (1965) Probability Theory. London: Routledge and Kegan Paul. The physicist John Polkinghorne has written some interesting introductions to Quantum Mechanics. See: Polkinghone J (1986) The Quantum World. Princeton: Princeton Science Library. My source for quantum theory is a set of course notes taken by G D Kaiser and Ruth M Williams: Dirac, PAM (1969) The Basic Ideas of Quantum Mechanics. Center for Theoretical Studies. University of Miami Coral Gables, Florida.


1. Strong and Weak Beliefs

In conventional logic the statements:

• Next Wednesday I will win the lottery
• Next Wednesday I will be home late from work

have equivalent status. Neither is necessarily true nor yet necessarily false - both are possibly true. Yet their value as a basis for co-operative action is clearly different because one seems ‘stronger’ than the other. Reasoning from the relative strength of possibly true propositions is called ‘reasoning a fortiori’. As a student I was taught that reasoning a fortiori was first placed on a rigorous footing by the early probabilists.

The word ‘probability’ comes from the same root as ‘probity’, which refers to high principles, ideals and moral authority. Before the seventeenth century, it was said, the only way of handling problems in a fortiori reasoning was to appeal to high moral principles and dogmatic authority. In the seventeenth century, however, these problems were placed on a value-neutral footing by the development of a formal calculus of probabilities. A faint whiff of the Enlightenment myth clings to this story. As we saw in Section 2, Essay 5, early modern thinkers had their own way of reasoning a fortiori - the methods of ontological science. They were just as keen to distance science from dogmatic realism as later modern thinkers, but their method was to pay special attention to the boundaries and definitions of named species and genera. They gauged the strength of a proposition by determining what categories it was predicated on and checking whether those categories were informative and unambiguously valorisable.

Thus, in the early modern period, the strength of a proposition (that some category existed) was expressed in terms of the relationship between the key and essence of the category and the ‘naturalness’ of its boundaries. In the later modern period some scientists abandoned this method in favour of a quantitative approach. This was a tremendous leap forward, especially in the natural sciences, where the categories were already robust and no further mileage could be got out of the old methods. The evidence of progress is not so clear in the humanities as we shall see.

Perhaps the simplest way to explain the contrast is that the development of algebraic method not only facilitated the shift from static, ontological science to dynamics, but also shifted the focus of a fortiori reasoning by providing numerical methods for valorising the strength of a proposition. The newer method was to take the categories as given and choose a number between 0 and 1 to represent the rational degree of one’s belief in the proposition itself. By positing logical relationships between propositions it might be possible to revise the initial probability estimates in the light of evidence.

2. Probability Methods

I begin by drawing closed geometric features (I am going to use circles, but you could use squares, triangles, ellipsoids, or any closed figure). Each of these figures represents a proposition and the area of each figure is made equal to the probability associated with the proposition it represents. So the probability associated with proposition 1 will be the area of the circle labelled ‘Proposition 1’ in the figure to the left. Let’s extend this trick by adding a second proposition, but we let the circles intersect so that the probability associated with proposition 1 and 2 is simply the area of the overlap or intersection of two figures. On this diagram we can see the probability of 1 and 2, that of propositions 1 or 2, that of 1 but not 2 and that of 2 but not 1. All these probabilities are expressible as the sums or differences of the areas of these geometric figures. Some of the fundamental theorems of probability theory can be deduced directly from diagrams such as these.

Now let’s develop some algebraic notation. Suppose we make P(x) represent the area of figure x (and the probability associated with proposition x), you should be able to see from my diagram that:

i    P(1 OR 2) = P(1) + P(2) – P(1 AND 2)

ii   P(1 AND NOT 2) = P(1) – P(1 AND 2)

We can set conditions for determining the logical equivalence of propositions. If

iii  P(1) = P(2) = P(1 AND 2) = P(1 OR 2)

propositions 1 and 2 are congruent - and we can show their logical equivalence using:

iv   P(1) ≡ P(2)

(The symbol ‘≡’ here indicates the equivalence of sets.) Their probabilities are not only equal in magnitude; the propositions themselves are co-located on the diagram.

There is nothing terribly exciting in these ideas, but they give the flavour of how probabilistic method works in the simplest cases - you use addition and subtraction of areas. These methods are very simple, mathematically rigorous and not much use at all. We need some additional insights to make them applicable and those insights came from the mathematical treatment of games of chance in the 1660s. Gamblers deal with odds (the ratio of successes to failures) and likelihoods (the ratio of successes to trials). The likelihood that a card, selected at random from a deck of 52, will be (say) a 3 of clubs is 1/52 because that is the fraction of the deck that satisfies the criterion of ‘success’:

• This is a three of clubs

The single insight that made probability theory useful was the equation of likelihoods (the ratio of the number of events that satisfy a proposition to the total number of possible events) with probabilities (a number between 0 and 1 that represents someone’s assessment of a proposition’s strength). We need a little more notation to explain how this works. Let the vertical line ‘|’ signify ‘GIVEN’. Thus P(3 of clubs|card drawn) is read ‘the probability of a 3 of clubs given that a card is drawn’. Such terms are called ‘conditional probabilities’ because their value is rationally dependent on another proposition.


If you look at the diagram to the left you should be able to satisfy yourself that P(1|2) can reasonably be defined as the ratio of two areas thus:

v    P(1|2) = P(1 AND 2)/P(2)

We have now established an analogy between likelihoods and probabilities, at least in respect of propositions 1 and 2. Note, however, that we are no longer adding and subtracting areas; we are forming ratios and this creates problems with the picture because it is no longer obvious where we should locate the proposition corresponding to P(1|2) on the picture. We have an equation between likelihoods and probabilities, but our graphical representation has broken down. The reason for this is that every time we compute a conditional probability we re-bound the system of interest to us. Thus P(i|2) tells us what proportion of P(2) is overlain by P(i): proposition 2 becomes our system of interest. P(j|1) tells us what proportion of P(1) is overlain by P(j). Proposition 1 is now the focus of interest. We can only compare these numbers directly with each other if P(1) ≡ P(2), that is, if proposition 1 and proposition 2 are congruent and both numbers are defined against the same frame of reference.

Look again at the figure and this time pay special attention to the rectangle labelled ‘universe’. The area of this figure is the probability we associate with the undemanding proposition:

• A universe of discourse exists in which probability methods are useful

Clearly, P(universe) - the area of the rectangle in which the two circles are embedded - cannot be greater than 1, because no probability can possibly be greater than 1. However P(universe) could possibly be less than 1. Consider the conditional probability:

vi   P(1|universe) = P(1 AND universe)/P(universe) = P(1)/P(universe)

We know P(universe) is less than or equal to 1, so the unconditional probability, P(1), must be less than or equal to the probability of proposition 1 conditional on the existence of a universe. The two probabilities are equal if, and only if, P(universe) = 1 and we are absolutely certain a universe of discourse exists. Thus I define as the universal proposition the following:

• A universe of discourse exists in which probability methods are useful

and mark it as an axiom by giving it a probability of 1 whence, for any proposition, i

vii  P(i|universe) = P(i)

the unconditional probability and the probability conditional on universe are identical so there is no reason why we should not assume congruence:

viii P(i|universe) ≡ P(i)

The universal proposition is a very weak constraint - an affirmation that there is something real beyond sense experience AND the method itself is applicable. It underwrites and, in effect, defines dimensional coherence.
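For readers who like to see the pictures turned into arithmetic, here is a minimal sketch in Python. It assumes we may stand in for the continuous diagram with a finite grid of equally weighted points, so that ‘area’ becomes a simple count; the grid size, circle centres and radii are my own illustrative choices and not part of the argument above.

```python
from itertools import product

# Hypothetical 20 x 20 grid of possible worlds standing in for the rectangle
# labelled 'universe'. Every point carries equal weight, so probability is
# just area: the fraction of points a proposition covers.
universe = {(x, y) for x, y in product(range(20), range(20))}

def disc(cx, cy, r):
    """A circular 'proposition': the set of grid points inside the circle."""
    return {(x, y) for (x, y) in universe if (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2}

prop1 = disc(7, 10, 5)   # illustrative circle for proposition 1
prop2 = disc(12, 10, 5)  # illustrative circle for proposition 2

def P(region):
    return len(region) / len(universe)

# Equations i and ii: addition and subtraction of areas.
assert abs(P(prop1 | prop2) - (P(prop1) + P(prop2) - P(prop1 & prop2))) < 1e-12
assert abs(P(prop1 - prop2) - (P(prop1) - P(prop1 & prop2))) < 1e-12

# Equation v: a conditional probability is a ratio of two areas, and the
# denominator re-bounds the system of interest to proposition 2.
p_1_given_2 = P(prop1 & prop2) / P(prop2)
print(P(prop1), P(prop2), p_1_given_2)
```

The sketch also makes the re-bounding problem visible: P(1|2) is a ratio of two counts, and the denominator changes every time we condition on a different proposition.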


3. Bayesian Inference

Conditional probabilities are very useful when reasoning a fortiori because we can use them to represent the relative strengths of certain theories given a body of empirical evidence. I am going to show you one way of doing this. Suppose we make an observation and express it as a proposition called OBS. Suppose further that there exists a theory represented by some proposition (proposition 1). Then, by definition:

i    P(OBS|1) = P(OBS AND 1) / P(1)

where P(1) is the probability of proposition 1 before we made the observation - the prior probability. Similarly,

ii   P(1|OBS) = P(OBS AND 1) / P(OBS)

The term P(OBS AND 1) is shared by both equations. Rearranging both gives:

iii  P(1) * P(OBS|1) = P(OBS) * P(1|OBS)

i.e.

iv   P(1|OBS) = P(1) * P(OBS|1)/P(OBS)

P(1|OBS) is called the posterior probability - the probability associated with proposition 1 conditional on the observation we have just made. The Devil is in the detail, of course. It is easy to write down a term like P(OBS|1) but not nearly so easy to write down an explicit function that gives us a value. If you want to know how to solve such problems, you will have to read a statistics textbook. Here I simply state that such problems can often be solved, though not always easily.
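Where a value can be written down, the arithmetic itself is trivial. Here is a minimal sketch of equation iv; the prior, the likelihood and P(OBS) are purely illustrative numbers asserted for the example, not derived from any real observation.

```python
def posterior(prior, likelihood, p_obs):
    """Equation iv: P(1|OBS) = P(1) * P(OBS|1) / P(OBS)."""
    return prior * likelihood / p_obs

p_1 = 0.30              # P(1): degree of belief in the theory before observing
p_obs_given_1 = 0.80    # P(OBS|1): how strongly the theory predicts the observation
p_obs = 0.50            # P(OBS): probability of the observation itself

print(posterior(p_1, p_obs_given_1, p_obs))  # about 0.48, the revised belief P(1|OBS)
```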

Suppose we have 2 or more propositions (i.e. alternative theories) 1, 2, …, n. We can write:

v    P(2|OBS) = P(2) * P(OBS|2)/P(OBS)

vi   P(3|OBS) = P(3) * P(OBS|3)/P(OBS)

and so on. Each of these propositions is conditional on OBS, the observation, so they are dimensionally coherent, each with all the others. These equations give us a constant multiple of the probabilities associated with these propositions given the evidence that has come to light. This is Bayes’ theorem, named after the Reverend Thomas Bayes who discovered it in the eighteenth century. Bayes’ theorem itself is quite uncontentious, but the way it is sometimes used has caused considerable debate.

A simple approach is to start off with some prior estimate of the probability associated with each of the exclusive propositions. These are used to parameterise the terms P(1), P(2), …, P(n) in Bayes’ equations. Then we calculate the conditional probabilities on the right hand side of Bayes’ equations. These give us a constant multiple of the posterior probabilities, P(1|OBS), P(2|OBS), …, P(n|OBS), which represent the probabilities of statements 1, 2, …, n given the observation, OBS. Our problem now is to make them conditional on the universal proposition. The simplest way through this difficulty is to restrict ourselves to a universe in which one and only one of the candidate propositions is necessarily true. Then, for any pair of propositions, i and j

vii  P(i AND j) = 0

and

viii 1 = P(1) + P(2) + … + P(n)

This allows us to partition the set ‘universe’ into n arbitrary regions, each of which corresponds to the unconditional probability that one of the propositions is true. Every point in OBS can be assigned to one, and only one, of these regions. Therefore:

ix   P(1|OBS) + … + P(n|OBS) = 1

because

x    P(1 AND OBS) + … + P(n AND OBS) = P(OBS)

so we multiply each of the n values of P(i) * P(OBS|i) by a constant that normalises their sum to 1 and take these as our revised estimates of P(1), …, P(n). Note that these revised estimates are conditional on OBS, the observation. Each time we use this method, we re-bound the system of interest to focus on a smaller universe of discourse in which a string of observations has been made. Bayes’ equations tell us nothing about the unconditional probabilities and give us no probability estimate that can be applied to circumstances where an observation cannot be made. If our observables are dimensionally incoherent, this may be operationally significant. Before we leave this topic, I should note that many probabilists avoid the use of Bayes’ theorem. They estimate probabilities directly by repeated observation - the ‘frequentist’ approach. However, frequentists also make probabilities conditional on observation so everything I will say about Bayesian method applies to frequentist method with equal force.
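Here is a sketch of that normalisation step, assuming three mutually exclusive and exhaustive propositions; the priors and the likelihoods P(OBS|i) are invented for illustration.

```python
priors = {"prop1": 0.5, "prop2": 0.3, "prop3": 0.2}          # viii: they sum to 1
likelihoods = {"prop1": 0.10, "prop2": 0.40, "prop3": 0.70}  # P(OBS|i) for each i

def bayes_update(priors, likelihoods):
    """Multiply each P(i) by P(OBS|i), then rescale so the results sum to 1 (ix)."""
    unnormalised = {i: priors[i] * likelihoods[i] for i in priors}
    p_obs = sum(unnormalised.values())  # x: P(OBS) is the sum of the P(i AND OBS) terms
    return {i: value / p_obs for i, value in unnormalised.items()}

posteriors = bayes_update(priors, likelihoods)
print(posteriors)  # revised estimates of P(1), ..., P(n), all conditional on OBS
```

Each call to such a function re-bounds the universe of discourse to one in which the observation has actually been made, which is exactly the point laboured above.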

The principal objection made by frequentists is that Bayesian probabilities are subjective. We have to guess the values of prior probabilities and have no empirical basis for those guesses. Thus frequentists tend to favour a realist position, assuming that probabilities are objectively real and must be estimated directly from empirical data. There are, broadly speaking, two answers to this objection. The first and, from my perspective, simplest response is to refute the realist premise. Probabilities are mental constructs that depend for their existence on a theory that equates probabilities (degrees of belief) with likelihoods. Although this theory is useful (Bayesian and frequentist methods often work very well in practice) it cannot be made empirically testable - it is accepted or rejected as an act of faith. Therefore probabilities are not independent of human knowledge and belief.

However, most Bayesians defend themselves as follows: Suppose, for the sake of argument, that the probabilities themselves are objectively real. Then our prior probabilities are clearly guesses. However, the method itself would still be useful because it is not sensitive to small errors in the specification of prior probabilities. Each time we revise our probabilities in the light of new evidence, the difference between our (guesstimated) posterior probabilities P(1|OBS) and the ‘true’ unconditional probability P(1) is attenuated as a result of the way posterior probabilities are estimated. Eventually the discrepancies between estimate and reality would dwindle to nothing and our estimated posterior probabilities will converge on the ‘true’ value (provided the true values are themselves spacetime invariant, or very nearly so).



4. Many Worlds

The diagrams I used to motivate my description of probability look like the Venn diagrams that illustrate set theory and most probabilists develop these concepts from a set-theoretical standpoint. They argue, for example, that these diagrams represent sets of possible ‘events’ - the atoms of probability, if you will. This works well for likelihoods in games of chance and contexts where one can actually invoke populations of valorisable events, but is not nearly as general as applied mathematicians often imagine. Consider two mutually exclusive sets (sheep and goats, say). No sheep can be a goat; no goat can be a sheep. So the intersection (AND) of those two sets is empty. When we draw a Venn diagram the two sets do not intersect at all and we can write:

i    sheep AND goats ≡ Ø

So we have an algebraic equation here in which the operator AND transforms the sets ‘sheep’ and ‘goats’ into an empty or null set (Ø). The null set acts a little like zero in more familiar algebras in the sense that:

ii   sheep OR Ø ≡ sheep

iii  sheep AND Ø ≡ Ø

Here the operation OR is a little like adding. The operation AND is analogous to multiplication. (If the null set, Ø, serves as a zero in set algebra, the universal set acts like 1 in the sense that: sheep OR universe ≡ universe; sheep AND universe ≡ sheep.) Pay particular attention to the last expression. In general, if I write:

iv   A AND B ≡ C

then the set C (the intersection of A and B) is simultaneously a subset of A and of B - it is wholly contained in both of its parent sets. This suggests that the empty set is a subset of the set of sheep and of the set of goats. Indeed, it must be so. The only way we can keep our algebra of sets coherent is if we allow that:

• Every set has the null set, Ø, as one of its subsets

Set algebra is a formal method of symbolic reasoning and we need this proposition to ensure that whenever we AND or OR two sets the result is always a set (just as whenever we add or multiply two numbers, the result is a number). If the null set did not exist, was not a set or was not a subset of every set, it would be much harder to develop set algebra. Innocent set operations could dump us into a domain of non-sets (as division by zero dumps us into the domain of non-numbers). We would have to keep checking the result to make sure we were still in the domain of sets. The null set ‘closes’ set algebra so that any operation using AND or OR can be trusted to yield a set provided it is only ever applied to objects that are themselves sets.

This is not just a nerdy bit of mathematics; it has an impact on scientific practice. Suppose we treat probabilities as sets and my probability diagrams as Venn diagrams. Then every infinitesimal point on my diagram is an atom of probability and every atom of probability must be associated with a proposition (by definition) so ‘universe’ has become a set - the set of all existential propositions. By an existential proposition I simply mean a proposition that describes a possible ‘event’ and whose probability is constrained in such a way that we can use set theory. There could be infinitely many existential propositions (one for every point in universe, perhaps) or there could be a finite number of existential propositions - each linked, as it were, to a set of contiguous points in universe whose area is the probability associated with it. Each event describes a possible world - the world in which some propositions are true. When we draw a loop on the Venn diagram we enclose a set of possible worlds that are essentially similar in some respect. The area of that set is the probability associated with an over-arching proposition that captures the essence of the set of events it bounds.
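Before moving on, the closure argument can be checked mechanically. The following sketch uses Python's built-in sets, with an empty frozenset standing in for Ø; the sheep and the goats are, of course, invented.

```python
sheep = frozenset({"dolly", "shaun"})
goats = frozenset({"billy", "nanny"})
null = frozenset()  # the null set, Ø

# i: mutually exclusive sets intersect in the null set.
assert sheep & goats == null

# ii and iii: Ø behaves like zero under OR (union) and AND (intersection).
assert sheep | null == sheep
assert sheep & null == null

# Ø is a subset of every set, so AND and OR never take us outside the
# domain of sets: the algebra is closed.
assert null <= sheep and null <= goats
print("set algebra closed")
```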

So to treat probabilities as likelihoods is to assume, in effect, that the probability all swans are white is simply the ratio of the areas assigned to possible worlds (events) in which all swans are white and the area assigned to the total number of worlds (or the number of worlds containing swans). Of course we cannot valorise many worlds in any accessible universe. We can only valorise certain categories of object in the one universe currently accessible to sense and memory. The ‘many worlds conjecture’ is an innocent piece of speculative metaphysics (very useful for devising science fiction plots) but has no empirical basis or testable corollaries - moreover, it actually imposes a very severe constraint on the practice of science. Consider the non sequitur:

1. All snarks are boojums AND no snarks are boojums

This is logically incoherent and un-valorisable. Using the set theoretical representation of probability theory we would have to conclude that:

v    P(1) = P(Ø) = 0

(That P(Ø) = 0 follows from the rule A OR Ø = A, whence: P(A OR Ø) = P(A).) Proposition 1 is utterly incredible and must have a probability of zero - the probability associated with a null event. That’s just common sense, isn’t it?

So it is, but as I have argued throughout this book, common sense is undependable. Forget about probabilities for a minute and consider the set of boojums itself. The intersection of boojum and not boojum is the null set:

vi   boojum AND NOT boojum ≡ Ø

Now suppose there is no such thing as a snark. Then snark is empty too:

vii  snark ≡ Ø

Whence we obtain the remarkable symmetry:

viii snark ≡ boojum AND NOT boojum (if and only if snark ≡ Ø)

There is no contradiction here because every set contains the null set as one of its subsets. Don’t worry about how to draw this on a probability diagram because you don’t have to. There are no probabilities in this argument; snark is just a set. If we want to represent this situation probabilistically we need to formulate a proposition and assign a probability to it. By symmetry viii, Proposition 1 (‘All snarks are boojums AND no snarks are boojums’) is logically equivalent to ‘snarks do not exist’. So it is natural to ask, what is the probability that snarks do not exist (i.e. that boojums are not boojums)? Well, quite simply, we don’t know. It could be zero (snarks definitely exist), it could be 1 (snarks definitely don’t exist), it could be somewhere in between. All we can say is:

ix   0 ≤ P(1) ≤ 1

Compare this with the result we got by restricting ourselves to existential propositions:

v    P(1) = 0

and the difference leaps to the eye. In physics these non sequiturs are sometimes called ‘superpositions of states’. Paul Dirac, one of the pioneers of quantum mechanics, described them thus:

We may take some object, for example a duster, and put it down at one end of the desk. Considering the duster as a dynamical system, it is in a certain state. If we put it at the other end of the desk it is in another state. According to quantum mechanics we should be able to superpose the two states. What state results? We might argue that the duster is in some intermediate position, but that is quite wrong. The only way we can understand this superposed state is to say that the duster is partly at one end of the desk and partly at the other.

An alternative way of describing superposition is to say that the desk-duster system only exists in the form of one of its valorisable states.

Perhaps it would help to consider a more familiar example. If the economy has been growing for a while, the probability economists and commentators associate with the superposition: •

There is a recession coming AND no recession is coming (in the next three months)

will be zero. Either they think a recession imminent, or they don’t, but they certainly don’t think both will happen at once. That would be absurd. Or would it? From time to time the economy slows. Indicators become mixed. Analysts and politicians bicker like children about definitions. People try to ‘talk the economy up’ (or down). Anyone who has lived through a few of these pantomimes soon learns that expert opinion is almost useless here because the class of incipient recessions, real or not, cannot be valorised in its instances. When a dependable observation can be made (usually with the wisdom of hindsight) it will transpire that there was an incipient recession or that there was not. Like Dirac’s duster, the system is subject to an inherent logical constraint that restricts the range of possible states. However, all we can say when the crisis of confidence strikes is that the state simply does not exist in valorisable form. Moreover, the act of trying to valorise an incipient recession is itself an intervention in the economic system. It is for this reason that many legal codes forbid the publication of opinion polls in the run-up to an election. Insignificant drifts of opinion, consolidated by the opinions of professional experts and commentators and propagated by the media of mass communication can easily change the course of history. In these policy hot-spots it is not the experts that fail us, but the ontologies to which they are committed. Those failed ontologies are represented in the present by a superposition of possible states - a non sequitur. The difference between probabilistic methods based on set theory and those I advocate here is that, in the former, all probabilities are conditional on observation. In effect, one assumes that valorisation is possible on demand - as soon


as we choose to look, the superposition will obligingly resolve itself into one of its constituent states. It is hard to see why this should be so. A sub-atomic particle, for example, can only interact with a sensor when it enters the sensor’s arena. Its location is functionally dependent on time so we just have to wait. However, that is the way set-based probability works. In the soft sciences, however, we must wait for propitious circumstances and then make what observations we can. While we are waiting to make an observation, the superposition and the system’s state are logically equivalent by symmetry viii above.

The physicist Erwin Schrödinger likened the superposition to a cat shut in a box with a poison capsule activated by radioactive decay. When we open the box and look, the cat is either alive or dead. In the soft sciences, however, the cat is not in a box that can be opened and closed at will; it is at large in a huge forest. Before we can observe anything, therefore, we must find the cat. So, at any time we have three states to consider:

1. We find the cat alive.
2. We find the cat dead of poison.
3. We cannot find the cat, or the cat is dead but it could have been a fox, or the cat is alive but the poison capsule is missing, or we find a cat but we aren’t sure it’s the right cat, or we have been looking so long the cat may have died of old age, or …

There is no need to invoke a lottery-like random process here. System dynamics are determined by causal rules we do not understand so we use probability methods to predict the outcome and test those beliefs with observations when there is something to observe. Most of the time the system’s existential state is represented by the empty set. The cat cannot be valorised. To assign a zero probability to state number 3 is to prevent oneself using the method to distinguish existence from reality. Everything that exists (i.e. can be valorised) is presumed real and everything real is presumed to exist in valorisable form. To use set theory, therefore, is to neglect the ontological problem and all the results we obtain are conditional on observation.

The non set-theoretical approach allows for the possibility that reality may not exist in valorisable form and that that which appears to exist may not actually be real. We can assign a non-zero probability to proposition 3 and so address ontological problems. Albert Einstein created work for generations of exegesists when he said that God does not play dice. I don’t know exactly what he meant either, but in my opinion, socio-natural scientists do not need to play dice in order to use probability method. Instead we accept the possibility of an underlying causal mechanism that is real (independent of knowledge and belief) but does not exist in valorisable form. The states we observe undoubtedly do exist, but cannot be predicted with the force of necessity so we use probability method to determine rational degree of belief in propositions that do predict observations. We have known for decades that any proposition (even a non sequitur) is trivially applicable to the null set. It seems natural, then, to use a non sequitur to represent the null or empty state and develop models that assign a non-zero probability to that state of dimensional incoherence. To do so is simply to assert that dimensional incoherence is possible and, under certain circumstances, probable (i.e. rationally believable).

Sometimes we do not know what a system’s state is. If we insist on using a gambler’s understanding of probability in the socio-natural sciences we can only consider states that exist in valorisable form. We cannot countenance the superposition as a real (but unobservable) state - a state of ontological indeterminacy - and assign it a non-zero probability. We become trapped by conventional boundary judgments, unable to probe or even contemplate the reality beyond the curtain of valorisable experience.
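Here is a sketch, with invented numbers, of the contrast just described. The first model is the gambler’s: only valorisable states are countenanced, so the null state is forced to zero. The second reserves a non-zero probability for the state in which the cat cannot be valorised at all.

```python
# Three states for the cat-in-the-forest story. The 'gambler's' model only
# admits states that exist in valorisable form, so the null state gets zero.
gamblers_model = {
    "cat found alive": 0.6,
    "cat found dead": 0.4,
    "cat cannot be valorised": 0.0,
}

# The alternative reserves most of the weight for ontological indeterminacy:
# much of the time there is simply nothing to observe.
ontological_model = {
    "cat found alive": 0.25,
    "cat found dead": 0.15,
    "cat cannot be valorised": 0.60,
}

for name, model in (("gambler's", gamblers_model), ("ontological", ontological_model)):
    assert abs(sum(model.values()) - 1.0) < 1e-9  # both are proper probability models
    print(name, model["cat cannot be valorised"])
```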


5. Ockham’s Razor

Let us put some flesh on these bare bones by taking a textbook example originally used by Aristotle. Suppose we want to assign a probability to the proposition:

1. If it rains today there will be a sea-battle tomorrow

and decide to ignore the problem of dimensional incoherence. Proposition 1 is predicated on two categories: rain (or NOT rain) today and sea-battle (or NOT sea-battle) tomorrow. Then, by implication, our model has two axioms:

• There exists a universe of discourse in which probability methods are useful

λ    There exists a universe in which the problems of valorising rain (or NOT rain) today and a sea-battle (or NOT sea-battle) tomorrow are both well-posed.

Proposition λ is axiomatic (because we are ignoring dimensional incoherence) so P(λ) = 1. The only way we can fit this axiom into a probability diagram is to make it congruent with the universal proposition, i.e.

i    λ ≡ universe

These propositions are essentially identical, whence:

λ    There exists a universe of discourse in which probability methods are useful AND a set of 2 mutually exclusive and exhaustive propositions in which the problems of valorising rain (or NOT rain) today and a sea-battle (or NOT sea-battle) tomorrow are both well-posed.

has a probability of 1 and probabilities are conditional on λ. Now that is a very strong assumption. If our faith in this proposition is misplaced - if we are sometimes unable to valorise rain today or sea-battles tomorrow - our ontology is not universal. There will be a logical non sequitur that refutes this ontology to which we have erroneously applied a probability of zero. Thus, if the axiom of exclusiveness and exhaustiveness is sometimes violated, we will consistently over-estimate the value of P(λ) and, by implication,

ii   P(1|λ) ≥ P(1)
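A numerical illustration of inequality ii, assuming that proposition 1 can only be valorised inside λ, so that P(1 AND λ) = P(1); the figures themselves are invented.

```python
p_lambda = 0.8   # weight of the universe in which rain and sea-battles are valorisable
p_1 = 0.3        # unconditional probability of proposition 1

# Proposition 1 is only valorisable inside lambda, so P(1 AND lambda) = P(1)
# and the conditional probability is inflated by the factor 1/P(lambda).
p_1_given_lambda = p_1 / p_lambda

print(p_1, p_1_given_lambda)     # about 0.3 versus 0.375
assert p_1_given_lambda >= p_1   # ii: the conditional estimate exceeds the unconditional one
```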

The method will give biased estimates - consistently overestimating unconditional probabilities. The only way we can know that this small world bias exists is by thinking very carefully about the problem of valorisation (and the non sequiturs that represent circumstances under which those categories cannot be valorised). If we restrict ourselves to the methods of later modern science, or to set-theoretical derivations of probability method, we are going to miss it. Whenever we turn a proposition into an axiom, all the categories on which it is predicated must be unambiguously valorisable in the universe of discourse. The more categories we give essential status in this way, the more tightly constrained is our universe of discourse. This is why scientists like reduced models in which axioms are predicated on a few categories - they correspond to weakly constrained universes of discourse in which causal mechanisms are relatively simple. Problems become well-posed, or almost so, and the solutions to those problems are fairly stable subject to boundary conditions. Which brings me rather neatly to the aphorism known as Ockham’s Razor:

Pluralitas non est ponenda sine necessitate

Ockham’s razor is conventionally interpreted as meaning that, if two explanations are equally sound, we should always favour the simplest. Yet that is not what the razor says. The words (taken from Ockham’s Quodlibets) simply mean: “plurality should not be posited without need”. This is good advice for a scientist because the more categories we posit, the more susceptible our model is to small world bias. Ockham’s razor was not Ockham’s at all. Other mediaeval thinkers apparently shared this view. It is sometimes represented as: “entities should not be multiplied unnecessarily” but if Ockham used this form the documentary evidence did not survive. Moreover, he did not come to it by an abstract analysis of probability theory; he used intuition and judgment - he smelt it, as it were, and he liked it.


Although Ockham would have had a hard time understanding Newton, my impression is that they shared this intuition. Newton didn’t posit plurality unnecessarily either and the categories on which his model was founded could be valorised almost everywhere. Here, at last, I find a thread of continuity between early and later modern science.

Ockham was a reductionist. We have now encountered ‘reductionism’ in two distinct contexts. In this essay I use ‘reductionism’ to describe the tendency to build reduced models of reality predicated on the existence of a small number of categories. This tendency unites Euclid, Ockham, Galileo, Newton and almost every competent scientific modeller from the time of Newton to the present day. However, in Section 2, Essay 3 I also used it to describe the reductionist thesis, that every socio-natural process could be reduced to Newton’s laws of motion. These two forms of reductionism are different. The first form of reductionism is almost an aesthetic imperative, perfectly captured by the twentieth century acronym ‘K.I.S.S.’ (Keep It Simple, Stupid). The more categories we build into a model, the more complex and fussy is its core behaviour - K.I.S.S. However, from time to time modellers find axiom systems predicated on a few categories that somehow ‘fit together’ - they are dimensionally coherent, each with all the others, over a wide range of universes. Euclidean geometry is a case in point, Newtonian mechanics another.

Indeed, it is arguable that every knowledge community is predicated on a core set of valorisable categories. History, for example, deals with the hermeneutic analysis of texts that refer to the past, archaeology with free-standing monuments and traces of human activity preserved in the lithosphere, economics deals with monetary value and utility, chemistry with elements, atoms and molecules, geography with mappable spaces and so on. In practice there is no limit to the scale of the structure we can build onto these fundamental categories. A geographer, for example, might smile at my suggestion that geography deals with mappable spaces; pointing to one colleague who is, to all intents and
purposes, a sociologist and another who works as a theoretical ecologist. Geography is not a unitary discipline, but a paradigm - a set of categories and axioms of astonishing generality. To sense (and take possession of) a paradigm is to absorb the culture of the community and to develop a set of tacit norms and valorisation functions that help make sense of the world. This can create serious problems for integrative research as Roger Seaton has observed. Seaton has spent many years engaged in integrative studies and observed that knowledge communities tend to englobe each other. A mathematician, recognising a redundancy and logic in social behaviour, comes to believe that cultural processes can be modelled mathematically. A sociologist, observing that the norms and conventions of applied mathematics are socially constructed, believes mathematical modelling is essentially a cultural process. In the cramped subspaces formed at the intersection of the two views, the breadth and richness of both disciplines are so constrained that each becomes a crude parody of itself. This is also (rather confusingly) called ‘reductionism’. Although Ockham was a reductionist in the K.I.S.S sense of the word, he was not a reductionist in the englobing sense. Ockham did not imagine there were fundamental categories on which all useful knowledge can be predicated, he just liked to keep his models simple. Many contemporary humanists, though implacable in their opposition to the K.I.S.S. form of reduction are incorrigible englobers claiming that human action is a ‘text’ that must be deconstructed using hermeneutic method, or that science is merely a form of ritualised discourse. The two forms of reductionism do not invariably go together. Caveat lector! I should emphasise that this essay is methodological rather than historical or philosophical in focus. I am explaining these distinctions because they make a difference to the practice of science. Every exercise in a fortiori reasoning is contingent on a model and implies the existence of a universe within which those objects can be valorised. The probabilities we obtain from quantitative analysis are all


conditional on that universe. Hard scientists, following Ockham’s razor, reduce plurality and concentrate on the most generic and easily valorisable categories. However, such a model, of necessity, ignores universes within which some or all of those categories cannot be valorised. Yet those universes may be very significant to the problem at hand. All we can do is set boundary conditions on the model expressed in terms of compliance rates and indicators of system health and sound the alarm vigorously if those conditions are violated. Failure to do this often leads to disappointment, system collapse and fruitless recriminations. For example, a study of the relationship between investment in scientific research and economic growth may suggest growth is higher in countries that invest more in scientific research. So public money is used to encourage students to study science and fund research, but the payoff in terms of economic growth is much smaller than expected. Soon the recriminations begin: ‘Those blasted schoolteachers are not encouraging young people into the hard sciences and professional academics are not providing educational services of the requisite quality. We must audit them and enforce compliance!’ Well, perhaps not. Perhaps politicians and researchers have foolishly neglected the contraction of reality that all reduced models imply. After all, the results of this model only hold in a universe where ‘investment in scientific research’ and ‘economic growth’ can be valorised. That is, in the universe occupied by national and supra-national agencies and explored by politicians and their advisors. When exploring that universe, we naturally apply Ockham’s razor, reducing plurality, making axioms explicit and calculating probabilities conditional on those assumptions. But human activities often take place in universes where these quantities cannot be valorised and we cannot take Ockham’s razor and prune those activities out because they may be critical determinants of success or failure. What is the impact of investment in research on Aurélie Larsson’s father’s uncle who may (or may not) pay her university fees? We may need to influence Aurélie’s choices (and those of


others like her) and so must operate in a richer universe of discourse than anything a reduced model can describe.

Whenever we generalise from reduced, hard-science models to a richer, integrative universe of discourse, probabilities (our faith in certain propositions) should shrink in proportion to that extension. The probabilities we estimate may even be swamped and their rank ordering changed by small world bias. Small wonder, then, that policy interventions often have unforeseen and undesirable consequences.

Later modern scientists, particularly those interested in physics, negotiated a universe of discourse within which many of their ontological problems were solved. They were working with objects that seem to be independent of human agency and belief. This ontological stability, which remained the basis of the physics paradigm until the early twentieth century, was a prerequisite of the use of algebra. Their early successes made them arrogant and they began to englobe other disciplines and dismiss the concerns of humanists and early modern scientists as sophistry and illusion. In the nineteenth century hard scientists became aggressive colonists of softer problem-domains, but here ontological problems are an order of magnitude more complex and observations cannot be made to order. Yet many of those early social scientists held ontological and hermeneutic method in antipathy because they could not englobe it without confronting metaphysical dilemmas. Ontological method challenged their lucrative status as the arbiters of objective truth and the masters of ‘positive evidence’. To allow this arrogant denial of complexity to go unchallenged into the twenty-first century would be ethically and methodologically indefensible. We have a choice, then. Either we drive hard scientists (and the politicians and the policy-makers who fund them) out with a stick or we re-educate them to operate in domains where plurality is a spontaneous by-product of system dynamics.

Section 6: Integrative Research

It is well known how to set up laboratory sites to pursue scientific investigations of various kinds. We also know how to build teams around … individuals of exceptional talent… What we do not know so well is how to manage the art of facilitating efficient communication between such nuclei as well as between the other equally important elements that one finds in Mode 2.
Gibbons et al.

...I would argue that it is incorrect of the anti-scientists to attribute the ills of the world to science as such. They arise much more from the misapplication of science under the influence of a basically inadequate social philosophy, which puts too much stress...on material goods.
Conrad H Waddington

This concern with questions about rather than in the disciplines of social science is an indication not that social science attracts dilettantes, but rather that exceptionally difficult problems arise when the methods developed for investigating the natural world which exists outside ourselves are applied to the social phenomena of which we are part.
Peter Checkland

Orientation

I suppose most people are familiar with the parable of the frog in a bucket hung over a candle. The water warms so slowly the difference is almost imperceptible from one minute to the next, so the frog never makes up its mind to leave and gets cooked. That makes it an a posteriori frog, reasoning from effects to causes and baffled into inactivity by equivocal evidence. An a priori frog, noticing it was uncomfortably warm, might speculate that temperatures were rising, but would not push the argument back - asking “am I sure it is getting warmer?” It would push the argument forward - asking, “what if I am right?” The answer, obtained by deduction from possible cause to contingent effect, is clear: the a priori frog would jump.


From a froggy perspective the a priori frog is just as likely to be wrong as the a posteriori frog. The a priori frog risks dehydration and unknown dangers - the a posteriori frog risks getting cooked; they pay their money and take their chance. We humans are often in more complex situations because we could blow the candle out. There are fat a posteriori frogs who eat the insects attracted by the light. There are thin a posteriori frogs who aren’t going to do anything unless the fat ones give them permission. Both groups animadvert on the a priori frogs who, they claim, are scaremongers. “Where is the scientific proof?” they croak, and call their expert witnesses. Meanwhile the a priori frogs disturb the peace by campaigning for action, harassing the fat frogs and even trying to put the candle out. They must be restrained, of course, because this runs contrary to common sense and violates the natural order of things. I am not saying you must blow the candle out; neither am I telling you to let it burn. I merely observe the irony. Our cleverness and sociality - the crowning glory of our species create integrative problems that mean we often have less freedom than frogs in a bucket who, at least, can act without first negotiating consensus. The quotations are from: Gibbons, M, C Limoges, H Nowotny, S Schwartzman, P Scott, and M Trow (1994) The New Production of Knowledge: The Dynamics of Science & Research in Contemporary Societies. London: Sage. p. 162.

Waddington, CH (1977) Tools for Thought. St Albans: Paladin.

Checkland, P (1993) Systems Thinking, Systems Practice. Chichester: Wiley. p. 67.

An alternative approach to the frog/bucket problem, which is now old enough for us to apply foresight and hindsight, is in:

Handy, C (1989) The Age of Unreason. London: Arrow Books.

An internet search on resilience will soon take you to the site of the resilience alliance which publishes a very successful on-line journal: www.ecologyandsociety.org. The seminal work on resilience is:

Holling, CS (1973) Resilience and stability of ecological systems. Annual Review of Ecology and Systematics. 4: 1-23.

A lot of excellent work on the Adaptive Management of Natural Resources came from this source. Here are two books, the first an easy read, the second a little more technical:

Lee, K (1993) Compass and Gyroscope: integrating science and politics for the environment. Washington: Island Press.

Walters, C (1986) Adaptive Management of Renewable Resources. London: Collier Macmillan.

Hägerstrand wrote about the spread of innovations in:

Hägerstrand, T (1988) Some Unexplored Problems in the Modeling of Culture Transfer and Transformation. In Hugill, PJ and DB Dickson (eds.), The Transfer and Transformation of Ideas and Material Culture. Texas A&M University Press.

This popular introduction to Gödel’s theorem is well-known and fun but most people lose the plot before finishing:

Hofstadter, D (1980) Gödel, Escher, Bach, an Eternal Golden Braid. London: Penguin Books.

The theoretical biologist Robert Rosen wrote a great deal about what he called non-computational complexity. I only met him twice but those meetings had a tremendous influence on me. See:

Rosen, R (2000) Essays on Life Itself. New York: Columbia University Press.

Gregory Chaitin has also written some fascinating papers on the subject. I like:

Chaitin, GJ (1982) Gödel’s Theorem and Information. International Journal of Theoretical Physics 22: 941-954.

Any good textbook of computer science contains a section on the limits of computability. The ideas about convection also come from a textbook:

Nicolis, G and I Prigogine (1989) Exploring Complexity: an Introduction. New York: Freeman.

1. Professional Knowledge Creation

Integrative research is the process of creating knowledge across the boundaries of epistemic communities. It is usually undertaken by and for professionals and so can assume a high degree of motivation and competence. Multi-disciplinary research is a knowledge patchwork in which each community accepts the beliefs of the others uncritically or ignores them. Unified research requires a shared knowledge base for the whole group. Integrative research occupies the middle ground - an expedient alliance of epistemic communities, valuing disciplined diversity but minded to cooperate if possible.

Recall that a group’s appreciative setting is determined by intellectual competence, knowledge and receptivity. Intellectual competence is highly variable, but can be changed within limits by education and training. There are instruments that bear on knowledge and receptivity too, but they pull in different directions. Multi-disciplinary approaches change the appreciative settings of participants by making unfamiliar knowledge available to each. Many people believe sustained multidisciplinary initiatives create a flow of innovations but the historical evidence is unclear. Driving disciplines together produces sustained culture-shocks that reduce receptivity and cause schism. If there are strong incentives to persevere (financial subsidies, perhaps) the result is either a unified knowledge system or a schism.

The fusion of biology and chemistry illustrates this process perfectly. As botany departments queued up to reinvent themselves as schools of molecular biology, systematics, ecology and environmental science gradually fell off the curriculum. They tend now to be taught in environmental research institutes, schools of geography or Quaternary science. Similar stories can be told in many study domains; the collapse of geography into human and physical subsets, the schism between pre- and proto-historians in archaeology and the divergence of hard- and soft-system approaches all



suggest that no matter how generous the funding incentives are, multi-disciplinary meta-sciences seldom thrive as such. The analogy with biological dynamics is striking. Just as intensified exploitation of an ecosystem can reduce resilience by destroying biodiversity and fragmenting habitats, so intensified knowledge production can reduce adaptive potential by destroying epistemic diversity and fragmenting communities. Indeed, funding initiatives often create conflicts between the basic human need for effective communication and the centralised demand for constant adaptive change. These conflicts are usually de-fused by polite obfuscation. Any minor adjustment is hailed as an innovation when, in fact, it is at most a slightly new variation on a very old theme. The pretence of adaptation is one of the commonest defence mechanisms of any agency paralysed by a centralised demand for unremitting revolution. The agency simply meets its innovation quotas by pretending it is already adapting as fast as any institution can. This behaviour is correlated with historical pendulum-swings between a centralised command economy for knowledge services and a distributed demand economy. Universities emerged in the twelfth century as a spontaneous response to demands for educational services that could not be met by monastic schools. Cathedral schools were established to which peripatetic teachers and students were drawn. The universities were scholarly trades unions that negotiated contracts between teachers and students and kept order. As the need to regulate them became apparent they were granted charters, given privileges and placed under constraints to audit quality and keep the peace between town and gown. By the early fourteenth century centralised control had turned the universities into systems for accrediting orthodoxy. A similar story can be told for the last fifty years. A burgeoning demand for educational services has been regulated and gradually converted into a command economy by successive governments and supra-national agencies. As the number of students passing through has increased,


accreditation and orthodoxy have once again become prime concerns, but the direction in which latter-day scholars are driven by centralised demand is different. Mediaeval scholars committed to adaptive change had to defend themselves from the charge of heresy by claiming they were re-discovering timeless truths. Latter-day inquisitors are more likely to accuse scholars of failing to innovate, so academics now defend normative behaviour by claiming they are really ‘innovating’ all the time. Recent academic literature on post-modernism, post-feminism, new archaeology, new geography, new systematics, post-structuralism and the rest; the silly marketing speak of Post-War science and the changing demands of funding agencies, are perfectly rational responses to the unsustainable command economy for knowledge that demands endless innovation without any clear indication of what innovation is, or why we might want so much of it.

Professional academics who prosper in contemporary markets are those who degrade the distinction of adaptive from normative dynamics, write impenetrable prose and market every piddling change of emphasis as an earth-shattering revolution. This nonsense takes the heat out of socio-political conflict but it also prevents academics explaining the simplicity and antiquity of great ideas. As the volume of scholarly and scientific literature has grown and the incentive to write clearly has been eroded, the impact of any one publication has become negligible. The twentieth century ‘information explosion’ is not, as the bean-counters pretend, evidence of progress, but of a stagnant, centrally driven knowledge economy that has gradually paralysed our universities.

The secondary effect of this process has been a shift from rolling research programmes to project-based research. A project is a short-lived alliance of researchers with a start-date and end-date and a deliverable. The growth of project-based multi-disciplinary initiatives (the Mode 2 revolution) is an artefact of post-war ‘big science’, accelerated by graduate unemployment in the late 70s. It caused a dramatic shift in


the balance between teaching and research in universities which must now accommodate two different business patterns in a single administrative structure. Mode 1 business (teaching and scholarship) tends to be focussed in a single discipline and institution. A large proportion of the costs are fixed, the business cycle is slow and job security good. Mode 2 business is run by institutional consortia and demands multi-disciplinary input. It has a rapid business cycle, variable costs and lousy job security. Few institutions can accommodate both business cycles successfully.

Multi-disciplinary revolutions are transient perturbations - nudges that change the course of history, but carry within themselves the seeds of their own destruction. Differential funding and career opportunities create tensions between the soft and hard sciences, tenured and non-tenured staff, pure and applied subjects, Mode 1 and Mode 2. People are torn by conflicts of interests, by loyalty to the research team(s) of which they are members, to their employers, their academic peers and funding agencies. These tensions create epistemic factions that prime the scholastic trap. As they compete for resources and status, researchers represent the tension between doubt and certainty as conflicts between progressive and reactionary tendencies. Soon the whole programme stands on the authority of a single epistemic community, often protected by institutional or national statute. New disciplines replace the old ones. Institutions are purged of those whose work is seen as anachronistic or peripheral. Epistemic diversity is reduced and the scholastic trap is sprung.

2. Why Study Integrative Research?

I have already explained that to study the scholastic trap is to study the human condition and argued that the distinction of hard and soft science is emic: it characterises knowledge communities that persist and re-assert themselves across generations. For a sociologist interested in understanding the human condition, or a policy maker wishing to understand the whole class of integrative problems, this is sufficient reason to study integrative research. It poses some fascinating managerial problems too and a manager or management scientist might find the subject interesting for that reason. Finally, there are important ethical considerations.

A research team is a manageable microcosm of the wider world it is constituted to study. The team comes together to create and integrate knowledge that helps it co-operate in a complex socio-natural domain. That domain already contains other people (residents, if you will) themselves creating and integrating useful knowledge. The process of research and the subject being researched are qualitatively similar. However, researchers often leave at the end of the project; residents, one hopes, will not be forced to do so. Researchers have (or should have) well-structured aims and flexible knowledge bases; they can tune their knowledge to serve those aims. Residents often face weakly structured problems and may be unable to compromise without violating their own sense of identity or risking social exclusion. Their aims, broadly speaking, are to continue doing what they are already doing while achieving (or maintaining) a desired quality of life.

Researchers in applied or policy-relevant projects are there to reflect on the consequences of human behaviour and maybe help people search for new, more sustainable lifeways. This is a tremendous responsibility. Researchers are willing participants in a process that helps determine the course of history. In a democratic society they should not have a disproportionate role.



Research is often directed across epistemic boundaries and one of the key tasks of a research co-ordinator is to coax colleagues into a more reflexive mode. Many academics do not enjoy this. Just as motorists sometimes curse and lean on the horn when other road users force them to take conscious control, so academics sometimes become openly aggressive when tacit beliefs are challenged. This may be acceptable in a pure research setting, but raises ethical dilemmas when livelihoods are at stake. Tensions within the team may amplify stresses experienced outside. Some fragile rural ecosystems and deprived urban neighbourhoods have been so disrupted by academics, residents have become openly hostile. All human beings, whether they are conscious of it or not, are making history. Human knowledge (the shared beliefs of a community) determines human behaviours, which, in turn, impact on the biophysical environment creating opportunities and threats to which humans respond by re-negotiating knowledge. These cultural ecodynamics are at once an important research domain and a valuable source of managerial insights. Indeed, we can turn this self-referentiality to good advantage - learning a lot about cultural ecodynamics without braving the ethical challenges of sociological or anthropological fieldwork simply by studying academic knowledge creation, both in the past and the present. A strong case can be made for equipping large research projects with a skilled participant observer whose task is to assist in the process of knowledge creation and prepare a research report on the process. Where a biologist may use a laboratory or greenhouse to test theories in a way that would be ethically indefensible in the field we can test theories on ourselves.


3. Knowledge and Information

Humans negotiate knowledge by communicating with each other. We seem to be programmed to try to understand sensory experiences, especially those associated with other people, using our highly developed empathic skills to negotiate a congruence of beliefs with our neighbours. Young people are more receptive to unfamiliar beliefs than older people, so linguistic structures are conserved and transmitted between generations. However, errors are made and ideas are sometimes rejected or communicated imperfectly, so knowledge is not static.

Human beliefs are conditioned by basic human needs. Certain types of situation recur, and people can be expected to gravitate towards similar beliefs in those situations. These common interests create consistent associations of personal interest and shared belief. If circumstances are persistent, linguistic and epistemological structures will be conserved from generation to generation. However, if they disappear and then return, the same structure will be expressed in very different terms. These epistemological resonances (ideas) can indeed be apprehended by disciplined contemplation. They represent regions of dimensional coherence - universes of discourse - that are discovered and rediscovered in successive generations. However, they are not divine templates, independent of human belief, but natural responses to recurrent types of circumstance.

Knowledge is shared belief. My personal knowledge consists of all the beliefs I share with myself. Yours is defined similarly. Our (collective) knowledge consists of all the beliefs we share. Our beliefs are shaped by our experiences. As your beliefs or mine change, the knowledge we share reforms at the intersection of our respective belief-sets. This is so by definition and can be extended to communities of three or more people without loss of generality. If beliefs diverge, trust can break down and social exclusion results. This is a common experience among the children of blue-collar workers in higher education, for example.
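One way to make this definition concrete is a minimal sketch in Python (the names and propositions below are invented for illustration; they are not drawn from any of the projects discussed in this book): each person's beliefs form a set, and the community's knowledge is the intersection of those sets. Note how the shared store shrinks as the community grows.

# A minimal sketch, assuming beliefs can be listed as propositions.
from functools import reduce

def shared_knowledge(belief_sets):
    """Collective knowledge = the beliefs every member of the community holds."""
    return reduce(set.intersection, belief_sets) if belief_sets else set()

alice = {"the soil is eroding", "yields are falling", "prices will rise"}
bob   = {"the soil is eroding", "prices will rise", "subsidies distort markets"}
carol = {"the soil is eroding", "subsidies distort markets"}

print(shared_knowledge([alice, bob]))         # two propositions survive
print(shared_knowledge([alice, bob, carol]))  # only one - the bigger the community, the less it shares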


Observations are sensory experiences articulated with prior knowledge. A lot of sensory stimuli are systematically ignored or baulked. Sometimes we baulk observations because we do not know what to make of them, or because they challenge cherished beliefs. Often we baulk experience that seems insignificant. Information, as I have already indicated, consists of observations that change the knowledge state. Some information is consistent with pre-existing beliefs and simply adds knowledge to the store. Other information challenges pre-existing beliefs, leading to an epiphany. Information that challenges beliefs in this way and is propagated through a substantial part of society often leads to widespread innovation - new perceptual structures and adaptive behaviour.

In the political science literature, the word 'innovation' is often used to represent socio-economic change. National and supra-national investment in research is often justified on the grounds that activities that add new knowledge can be expected to enhance economic performance. The evidence does not support this view, as debates in the U.S. about the future of space research and, in Europe, about the low level of economic spin-off from successive Framework Programmes have highlighted. Rapid economic development seems invariably to follow innovation, and innovation often arises from research. However, many research projects do not innovate and most innovations have no economic spin-off. It is possible to manage research projects so as to maximize the probability of innovation and, once an opportunity has been recognized, to design technical projects that exploit it. However, it is often impossible to move from innovation to exploitation on demand. It takes time for new beliefs to be communicated to and assimilated by an influential group of people. Notwithstanding the protests of politicians and scientists with a vested interest, the dynamic linkages are weak.

Innovation implies relative novelty. The idea need not be absolutely new. What matters is that the information triggers an epiphany that changes the recipient's mind-set.


Swamping the recipient with data dulls the senses and may actually reduce the likelihood of innovation. The distinction is that between So what? and Aha! When I tell you that scientists have found a blade of grass growing between two paving stones, you may baulk this as a useless or trivial communication. So what? If, for some reason, you were interested in these paving stones, you might note the presence of grass, but it would hardly qualify as an epiphany. However, if I were to tell you that scientists had found a blade of grass growing on Mars (and you believed me), your beliefs about life on Mars might be challenged. The moment that challenges belief is seldom repeated in a simple way because knowledge is dynamic: yesterday's Aha! is tomorrow's So what?

There are many epiphanies in the first decade of life, rather fewer in the seventh. As we become elders of the communities that sustain us, our beliefs cease to be monitored and knowledge becomes fixed. We baulk experience that might lead to innovation and switch from a learning mode into a teaching mode. Teachers often try to challenge beliefs, but the graveyard slot in the early afternoon often finds students unreceptive. Half of them are asleep; the rest are daydreaming (technically awake, but operating with too many cognitive filters off-line to make sense of anything). We complain, of course, but suspect that adult creativity is nourished by such childish joys as watching the summer grass grow, or dust spinning in sunlight. If humans couldn't lower their cognitive filters from time to time, we would be trapped by our beliefs and unable to observe anything that did not make sense immediately. Many people can testify that childhood experiences often lie dormant in memory until 'the penny drops' and we make sense of them as adults. These epiphanies are not always traumatic (repressed memories in the Freudian sense); often they are merely uninterpreted observations.

It is often helpful to contemplate a spectrum of beliefs from culture to theory. Cultural Beliefs are so deeply engrained in us that we do not monitor them. In practice, culture is best identified negatively, in terms of the things we do not think of doing and the observations we are not capable of making. When we receive information that challenges cultural beliefs, we may cringe with embarrassment or even respond defensively, because culture defines our identity. People who regularly challenge cultural beliefs are always mistrusted and often disliked. Cultures are usually closed to information.

Creedal Beliefs are deeply embedded but explicit. Information that challenges creedal beliefs is often baulked.

Theories are a weaker type of belief, created in provisional form and monitored for coherence, consistency and utility. We often depend on theories in our daily work while recognizing that they are not secure. Information that challenges a theory is less likely to elicit a hostile response. People are open to information that challenges theories.

Beliefs enable us to make sense of our experiences and co-operate with others by forming coherent knowledge communities. Different knowledge communities often have logically irreconcilable belief systems, and members vary in their openness to external information by age and temperament. Integrative research requires representatives of two or more communities to co-operate, but they can only do so if they are prepared to make temporary compromises and learn from each other. The people best qualified to represent a knowledge community are usually mature, but mature people are not necessarily those best equipped to learn and compromise.
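The spectrum can be caricatured in a few lines of code. The sketch below is purely illustrative (the beliefs, levels and responses are invented, and real people are not this tidy): the same contradictory observation is assimilated, baulked or met with hostility depending on how deeply the belief it challenges is embedded.

# A hedged caricature of the culture/creed/theory spectrum (invented examples).
def respond(beliefs, proposition, contradicts):
    """beliefs maps each proposition to its embedding level: 'culture', 'creed' or 'theory'."""
    if proposition not in beliefs:
        return "Baulked: no prior belief with which to articulate the observation."
    if not contradicts:
        return "So what? Consistent information simply adds to the store."
    level = beliefs[proposition]
    if level == "theory":
        beliefs.pop(proposition)  # the provisional belief is abandoned - an epiphany
        return "Aha! Theory revised; innovation is possible."
    if level == "creed":
        return "Baulked: creedal beliefs are explicit but defended."
    return "Hostile response: cultural beliefs define identity."

agent = {"markets clear": "theory",
         "hard work pays": "creed",
         "land can be owned": "culture"}

print(respond(agent, "markets clear", contradicts=True))      # Aha! ...
print(respond(agent, "land can be owned", contradicts=True))  # Hostile response ...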

4. Values and Empirical Testability

When we define model objects (systems, attributes, processes and epiphanies) we split the world into a well-understood core (the model and its corollaries) and an ill-understood periphery. We do not deny the existence of factors beyond our understanding and control; we simply reduce them to a source of exogenous noise that we cannot, or cannot be bothered to, consider. We handle these uncertainties by negotiating boundary conditions: operational constraints within which the problem at hand appears well-posed. Any solution we propose to the problem is contingent on those boundary conditions.

For example, we may agree to predict the behaviour of an economic sector subject to the assumption that buyers make decisions that optimise cost-benefit ratios. Of course, people do not always do this, so we set boundary conditions. If, in practice, buyers make decisions so far from those that optimise cost-benefit that the boundary conditions are violated, or the model predicts an outcome that appears inconsistent with observations, we must abandon or modify it. However, as long as the boundary conditions are satisfied, we may feel justified in using the model as a first approximation to our beliefs about market behaviour.

Of course, my boundary conditions and yours may not coincide. Perhaps cost-benefit optimisation theory is culturally embedded in me, or I believe the solution is unlikely to be affected by slightly sub-optimal behaviour, or I am more sanguine about the mismatch between expectation and observation or about deleterious impacts on some stakeholders. If so, I may continue using the theory long after you abandon it - we have different beliefs and values.

One can easily see how Kuhn's paradigm shifts are represented in system theory. Boundary conditions are set by (an often tacit) consensus, but the knowledge community is not static: people reconsider their beliefs, die or retire. As a result, consensus may change quite rapidly; levels of uncertainty that were previously considered operationally acceptable come to be thought unacceptable.
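In code, making a boundary condition explicit is as simple as checking it before the model is consulted. The sketch below is a toy (the demand curve, the rationality measure and the 0.8 threshold are all invented for illustration); its only point is that a violated boundary condition should surface as information rather than be silently ignored.

# A minimal sketch: a model whose predictions are contingent on an explicit boundary condition.
def demand_model(price):
    """First-approximation prediction, defensible only inside the boundary conditions."""
    return max(0.0, 100.0 - 2.5 * price)

def predict(price, observed_rationality):
    """observed_rationality: share of buyers whose recent choices were near-optimal (0..1)."""
    if observed_rationality < 0.8:  # pre-agreed boundary condition
        raise ValueError("Boundary condition violated: revise or abandon the model")
    return demand_model(price)

print(predict(10.0, observed_rationality=0.9))  # the prediction may be used as a first approximation
# predict(10.0, observed_rationality=0.5)       # would raise - emergent information forcing a re-think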


Established beliefs and values may be abandoned rather quickly. This happened in the nineteenth century, when the boundary conditions of biblical creationism shifted and people accepted that humans had been on the planet for much longer than had previously been thought, and when slaves were re-classified as stakeholders and the process of emancipation began. Both paradigm shifts were contingent on changing values. During re-conceptualisation, rival camps defended their values as matters of ontological fact and the debate polarised society.

When boundary conditions are violated, emergent information forces people to innovate. This conception of emergence demands a definition. A phenomenon is emergent if it is not logically entailed by the model we are using to tackle a problem. Having fixed the concept of emergence, the notion of complexity follows naturally. A complex world is one in which emergence is possible. In a complex world, boundary conditions are explicit and actively monitored, so that knowledge domains are open and innovation can occur. The relationship between complexity, innovation and the openness of a knowledge domain is significant. When a system receives information that violates boundary conditions, it innovates. Since boundary conditions are socially constructed, it follows that complexity must be socially constructed too. When a system is declared complex, the knowledge community that owns or investigates it has taken a sceptical position in respect of its own beliefs. To the best of my knowledge and belief, the only communities that do this routinely are those committed to rational methods. Modernists (rational sceptics) seem to have an affinity with complexity.

In recent years applied mathematicians and natural scientists have often confounded emergence with self-organization. Self-organization occurs when micro-scale behaviour (at atomic or cellular levels, for example) produces spatial or temporal patterns on a macro-scale. Some patterns are emergent; the precise form of a snowflake, for example, is unpredictable. Others, like the sound waves caused by air vibrating in a tube, are highly predictable. The latter are autopoietic (literally 'self-writing') patterns. Some emergent patterns are simple: Bénard or convection cells, for example, are just circulating blobs of liquid. Autopoietic structures like the human body can be very intricate indeed. Complexity and intricacy are not correlated.

The physicist Ernest Rutherford (a reductionist of the englobing variety) used to joke that all science was physics or stamp-collecting - all problems could (in theory, not in practice) be solved in terms of Newton's laws. However, by the time Rutherford retired (1937), developments in quantum mechanics had forced physicists to acknowledge emergence. Problems involving the position and momentum of small particles were ill-posed; their solution, if it existed at all, was certainly not unique. Biologists, philosophers and historians had certainly been aware of emergence in the mid-nineteenth century, but the idea didn't really catch on until the late 1920s and 30s. Jonah's law, and Gödel's incompleteness theorem to which it is related, are manifestations of a new consensus. Every finite axiom-set is too weak to deduce the truth or falsity of all propositions. Indeed, problem-domains as stable and dimensionally coherent as Newtonian physics are exceptional.

We can no longer think of science as a puzzle, laying down axioms by induction from empirical observation and using deduction to join up the dots to make a picture. The dots are the picture - a picture that swirls and re-forms as knowledge changes. Science has become an odyssey: we navigate against the fixed stars of logic and mathematics, not merely manipulating objects and speculating about causes, but influencing the fabric of existence. Engineers, physicists and chemists prefer quiescent regions where the laws of cause and effect are clear. Biologists and social scientists usually hover close to dynamic regions, studying complexity on the margins, as it were. But humanists and socio-natural scientists fare through the most dynamic regions of all, trying to understand and even facilitate processes in the reactor-core of history. Here value judgments play a pivotal role.

5. Probabilities and Interpretation

A system can be represented without loss of generality by a diagram that defines a set of propositions. The propositions entail the existence of sub-systems. The diagram can be interpreted in the probabilistic sense of Section 5. Each closed curve represents a reality judgment that imposes a systemic ontology on experience. The small inner circles define reduced problem-domains (sub-systems) within which levels of dimensional coherence are high. However, their probability is rather small, reflecting the belief that policy interventions based on this specialised knowledge alone will be subject to a small-world bias and would probably lead to unforeseen outcomes. Each sub-system invokes classes that can be valorised in a range of universes. Since those ranges do not overlap, the sub-systems do not intersect. Each knowledge domain is dimensionally incoherent with the others.

As we move up the system hierarchy, the probability increases (we are taking a more holistic view) but the level of dimensional coherence decreases. We have to work much harder to solve integrative problems at these levels. This probabilistic representation still has a universal proposition. The rectangular boundary represents the proposition that a universe exists within which the method is defensible, and it has a probability of 1. However, the universe is no longer congruent with any of the axioms of its constituent systems. So the effect of small-world bias can be represented by the size (i.e. probability) of systems and sub-systems.

This diagrammatic convention immediately suggests a trade-off between probability (rational degree of belief) and small-world bias. If we bound the system too widely, we bite off more complexity than we can chew because the integrative problems cannot be resolved. If we bound it too narrowly, however, our model is simplistic and we must expect to fail through small-world bias.

Now let us add arrows to this diagram that indicate axes of interaction between systems and sub-systems. Every complex system is open to perturbation across its boundaries. Within the system all behaviours are deterministic, and all objects are dimensionally coherent. All uncertainty is subsumed into the information flow perturbing the system from outside. Some of the data flowing along those lines cannot be interpreted by the recipient, which must either baulk it or develop valorisation functions to convert it into information. Each of these arrows, therefore, represents a reduction or simplification. Simple diagrams can often be used to represent complex systems. Complexity (in the sense I use the word here) implies that a system is open, i.e. subject to external perturbation, not that there are lots of rings and lines on the diagram. Emergent behaviours can be investigated by manipulating boundary conditions to see how sensitive the core solution is to external contingencies. Computer simulations and 'what if' scenaria are often very useful here. This approach sometimes enables us to specify problems that are almost well-posed - that is, problems whose solutions exist and are unique, subject to boundary conditions.

We neglect the trade-off between probability and tractability at our collective peril. In recent years many environmental scientists (including yours truly) have been so alarmed by reductionist models (of the K.I.S.S. type) that they have argued for integrated decision-support systems. However, after fifty years of failed attempts to generalise system theory, we must expect these decision-support systems, particularly those that represent human agency and choice, to fall foul of dimensional incoherence. To use them as test-beds for policy without imposing explicit boundary conditions would be an act of gross irresponsibility. Yet policy-relevant researchers and policy makers do this all the time.
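A minimal 'what if' sketch makes the point (the water-allocation rule and the numbers below are invented, not drawn from any project): sweep a single exogenous boundary parameter and watch whether the core solution is stable. A recommendation that flips under small perturbations is a warning that emergence lurks just outside the model.

# Hedged illustration: sensitivity of a toy core solution to one boundary parameter.
def core_solution(demand, rainfall):
    """Toy rule: irrigate only if supply comfortably exceeds demand."""
    supply = 0.6 * rainfall
    return "irrigate" if supply >= 1.2 * demand else "restrict"

demand = 50.0
for rainfall in (70, 80, 90, 100, 110, 120):  # 'what if' scenaria on the boundary
    print(rainfall, core_solution(demand, rainfall))
# The recommendation flips between rainfall = 90 and 100, so advice built on a single
# nominal value would be fragile; the boundary condition must be stated explicitly.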


6. Science as the Rational Pursuit of Knowledge

Almost all pre-war applications of system theory assumed systems were objectively real and their behaviour time-invariant. However, after WWII, many system theoreticians became interested in managing human activity systems and ecosystems - soft systems. Soft systems are not real because humans are part of them. If we did not exist, they would not exist. Human beliefs are key determinants of human actions; and human actions, in turn, determine many of the properties of our natural and physical environment. The banking system, for example, is critically dependent on human consensus. If a group of economists constructs a model that changes the way people think about banks, the banks themselves may change to reflect this consensus.

The term 'soft system' was coined by Peter Checkland to describe human activity systems. Checkland is the prime mover behind an approach known as 'action research', which I have found very helpful. In this book, however, I generalise the idea of a soft system to include a range of systems-based approaches to socio-natural behaviour. The principal impact of soft system work has been to shift attention from the use of mathematical models to multiple perceptions of reality. Soft systems ideas have had more impact on the policy-relevant and management sciences than on the natural sciences and engineering. The systems movement is no longer a coherent methodological tendency. A schism is forming, with one group becoming increasingly preoccupied with non-linear dynamical systems and the other becoming more qualitative and discursive in approach. Many people are actively trying to bridge that gap, but extreme positions seem to be irreconcilable.

All system theorists would accept that the diagram from the previous essay (reproduced below) could represent a system of some sort, but hard and soft theorists have different conceptions of it. In hard system theory, it is an abstract representation of an objectively real system that is independent of human knowledge and belief. In soft system theory, however, it is a concrete representation (a picture) of an abstract system - our knowledge or shared beliefs about the world. For a soft scientist, the planets, moons and star we think of when we speak of the 'solar system' are probably real - they certainly exist - but they are emphatically not a system: a (soft) system is an epistemological construct.

Soft system theory is self-referential and so abandons the comfortable boundaries of nineteenth-century reductionism. When systems were real and independent of human beliefs, scientists could present their work as a materialistic quest for timeless, objective truth. Once we acknowledge that all systems are knowledge systems, however, the scientific subject and the ontological object are dynamically interdependent.

Hard and soft methods are both part of an appreciative process that spins the web of beliefs that sustain us, yet they feel very different. In hard science the appreciative process is suspended while scientists analyse a fixed belief system - a theory. Their aim is to still the babble of debate and use mathematical methods to solve an almost well-posed problem (AWPP). In soft science, the discursive process continues unchecked and the state of knowledge changes while the work is in progress. Many soft scientists understand that systems become dysfunctional when multiple perceptions of reality and conflicts of interest are ignored or suppressed. They are skilled at helping people move from an ill-specified sense of unease to a more coherent understanding of socio-natural constraints and, occasionally, to negotiate a common purpose and specify an AWPP.


These approaches are clearly complementary; each has its particular strengths and weaknesses. However, if hard or soft methods become culturally embedded, researchers end up believing that colleagues are either perverse or irrational, and co-operation becomes impossible.

Many hard scientists are hostile to the idea of social construction and argue persuasively that mastery of soft system method would not make them more effective in practice. They deal with a reduced problem-set in which the assumption of reality causes no practical difficulties. Humans can launch space probes or mix chemicals, but we cannot change the gravitational constant or the reactive properties of hydrogen. Jonah's law suggests hard scientists may reasonably make predictions without worrying much about the social construction of knowledge.

Conversely, many soft scientists mistrust hard science method and see no need to understand it. They can only justify reality assumptions in situations where people operate under stress and are obliged to adopt a common purpose (an integrative research project with a tight deadline, for example). Such circumstances are rare. Indeed, humanists sometimes find themselves in situations so fluid that simply to have a researcher taking notes in the corner can change the course of history. They are not going to use hard science method in these contexts, and they believe anyone who does must be either cynical or grossly incompetent.

The case for a truly general systems approach is only strong in integrative, policy-relevant research, where natural and human scientists join forces to work across intellectual boundaries and we have to deal with small-world bias. The minimum requirement for participation is an acceptance that science is not, as naïve realists believe, a personal quest for positive, objective facts. Science is a social activity: the rational quest for useful knowledge. Under this definition the domain of cultural ecodynamics is remarkably broad. There is plenty of room here for mathematical case studies and participant observation. Only very extreme positions are excluded.

Within this domain it is possible to recognise three knowledge communities, each distinguished in terms of its approach to AWPPs.

1. Reductionists (of the K.I.S.S. variety) believe research is the process of moving from a specified AWPP towards a definitive solution. Some engineers, neo-classical economists and technologists belong to this reductionist genus.

2. Constructionists are theoreticians for whom research is a device for moving towards a defensible AWPP. Many engineers, neo-classical economists and technologists find this process tedious and do not wish to participate, though some social or political scientists and systems thinkers find it stimulating.

3. Deconstructionists believe that any attempt to formulate an AWPP is ethically or intellectually problematic. The researcher's task is to provide evidence and criticize or comment on interpretations, not to synthesize or generalize. Many critical humanists, emancipatory system theorists and scientific empiricists fall into this category.

The three communities can be recognized as coherent associations of interest and beliefs that transcend linguistic and temporal boundaries. They are philosophical resonances in the fabric of history that have been discovered and rediscovered many times. Critical humanists and scientific empiricists, for example, use different words to express their beliefs, but the roles they adopt in research teams are strikingly similar and often based on similar values.

Each community has special strengths. Constructionists formulate new AWPPs and develop new ideas about the way the world works. Reductionists implement; they convert the AWPP into a procedure for exploiting the new knowledge. Deconstructionists evaluate; by resisting generalization, they remain alert to the dangers of oversimplification and continually monitor boundary conditions.

Note, however, that these three communities are not isomorphic with the three classes of problem - ontological, dynamic and historical - characterised in Sections 2 and 4. All three engage in each of the three problem-domains, but the balance of influence varies between them.

Reductionists and deconstructionists often tackle ontological problems, the former creating classifications, the latter testing and critically evaluating them. However, there is little work for the constructionist in the static, empirical science of ontology and this group is commonly marginalized. In practice this means that constructionists are perennial outsiders in scientific approaches to the humanities, geology, systematic biology and other observational sciences predicated on the forensic model.

Similarly, reductionists and constructionists often work together on dynamic problems. The constructionists develop formal models and reductionists use them to solve problems. Deconstructionists, by and large, are seen as an unwelcome source of extraneous noise and often excluded from the process. Economics, physics, quantitative sociology and much of policy-relevant science are predicated on the Later Modern conception of scientific prediction in a world where the rightness of conventional ontology is simply presumed.

All three communities are needed to tackle historical problems, though the cultural tensions between them create substantial difficulties for managers and regulators. Ontological problems cannot be ignored because many of the objects of interest are socially constructed. Consequently, we need deconstructionists to keep a watching brief on boundary conditions and sound a warning when indicators of compliance and system health enter the danger zone. However, many natural processes are dynamic and mechanistic. It would be perverse to ignore the advances of quantitative science and engineering where these methods can be applied. We need reductionists and constructionists too.


Section 7: Managing and Regulating Innovation

Qui fit, Maecenas, ut nemo, quam sibi sortem seu ratio dederit seu fors obiecerit, illa contentus vivat, laudet diversa sequentis?

How does it happen, Maecenas, that no-one is contented with his life, whether he planned it for himself or fate flung him into it, but that he praises those who follow different paths?

Horace, Satires bk. 1.1.1

The name of the song is called 'Haddock's Eyes.'
Oh, so that's the name of the song, is it? Alice said, trying to feel interested.
No, you don't understand, the Knight said, looking a little vexed. That's what the name is called. The name really is 'The Aged Aged Man.'
Then I ought to have said 'That's what the song is called'? Alice corrected herself.
No, you oughtn't: that's another thing. The song is called 'Ways and Means': but that's only what it's called, you know!
Well, what is the song then? said Alice, who was by this time completely bewildered.
I was coming to that, the Knight said. The song really is 'A-sitting On a Gate': and the tune's my own invention.

Lewis Carroll, Through the Looking-Glass

… managers assume that existing systems constitute "the natural order" of things, and infer that such order is relatively homogenous from local to global scales. They expect that what exists now will persist.

Holling and Sanderson

Orientation

Academics, particularly humanists and empirical scientists, are sometimes a bit sniffy about management, but as soon as you abandon the lone-scholar, pure research model for a Mode 2, big science, applied research project, the management and regulation of the research process become ethical imperatives. In many university settings the purpose of management is seen as maximising cost-benefit and maintaining auditable records. This is important work, but it is not management - it is technical administration. There is a lot of work for well-trained humanists and empirical scientists in policy-relevant research, but here management is not something imposed on unwilling researchers by the bean-counters; it is an integral component of the research process.

This section distinguishes management from regulation and provides practical advice for both. Essentially, management has to be fast and responsive, while regulation has to be stable. They work in different spatio-temporal universes. Knowledge System Theory (KST) is a managerial model that builds on the ideas already presented and can be used to design, manage and regulate integrative research. I have found it useful in practice.

The first two quotations are well known. The third is from a useful attempt to relate ideas about cycles in ecosystems to models of gradual and catastrophic change in human institutions. It also gives readers unfamiliar with the resilience literature a quick introduction to the key ideas and seminal literature. Whereas my book deals with the Phoenix Cycle from the humanistic side, Holling and Sanderson start in ecology and work across. The 'managers' they refer to are managing natural resources. I suspect they are thinking of regulators rather than managers - their paper does not maintain a distinction:

Holling, CS and S Sanderson (1996) Dynamics of (dis)harmony in ecological systems. In Hanna, SS, K Folke and K-G Mäler eds (1996) Rights to Nature: Ecological, Economic, Cultural and Political Principles of Institutions for the Environment. Washington: Island Press, pp 51-85.

The following references cover all the other topics raised here. If you only have time to read one article, make it the first chapter of Rosenhead and Mingers, which contains a good review.

Ackoff, RL (1979) The Future of Operational Research is past. In Flood, RL and MC Jackson eds (1999) Critical Systems Thinking: directed readings. Chichester: Wiley, pp 41-58.

Beer, S (1979) The Heart of Enterprise. Chichester: Wiley.

Rosenhead, J and J Mingers (2001) Rational analysis for a problematic world revisited: problem structuring methods for complexity, uncertainty and conflict. New York: Wiley.

Finally, the view that universities are revenue creators was expressed by a politician:

Hoeven, M van der (2004) Quoted in 'Politicians and researchers look for direction at the "crossroads" of the university debate.' Cordis Focus 245, 1.

1. Regulation and Management

Every integrative team is Janus-headed. One face points outwards, participating in the appreciative process that sets its policy environment. The other points inwards and is responsible for executive action. The extrovert competence can be thought of as a regulator accountable to external stakeholders. The introvert competence is project management, responsible for the timely, lawful, efficient delivery of a product. In an institutional setting the regulator would be called something grand like 'Board of Directors' or 'Senate', but projects are transient, heterarchical consortia and regulators usually consist of a steering group with representatives of external stakeholders.

The contracts on which projects run are drawn up between employers and funding agencies in respect of intellectual property. Researchers are effectively technicians with no financial stake in those products. If the authors of the proposal also receive a salary, they may even be forbidden to claim authorship and have to find a surrogate 'author' to front their own work and, in effect, to take credit for their creative effort. Yet the products of integrative research are usually ideas (technically outside the intellectual property legislation). Intellectual authority and the dissemination of ideas are prerequisites of a successful research career. There have even been occasions when the contractor who wrote a successful research bid was not offered renewal of contract and the budget devolved to the sleeping partner.

Unsurprisingly, many researchers resent this bind, especially if their contracts are insecure and their employers inept. A skills haemorrhage among contractors, coupled with pressure to reduce costs by shedding the most expensive (and experienced) personnel, drives chronic over-delegation. Tasks requiring professional judgment and experience are commonly handed to PhD students. Many contractors quit after their first baptism of fire, further depleting morale and undermining efficiency. In this hostile environment contractors need an exit strategy, and many develop a mental picture of the epistemic communities to which they owe primary allegiance. They usually have family commitments too. Though the products of research may be bought and sold, these unacknowledged stakeholders exert a strong influence on the research process.

Regulators work round these conflicts; managers work with them. The trick is to understand and facilitate individual exit strategies, so that goodwill is maintained beyond the end of the project, and to foster a blame-free environment that encourages people to experiment with new ideas and take risks. Regulatory bodies sometimes contain people who are also required to manage. Those who combine these roles well usually have experience as contractors and remember which role they are playing at a given time. Failure to insulate the team from the concerns of external stakeholders, or to understand the executive process, is very disruptive. It is like hiring contractors to paint a house blue and stopping them halfway to ask whether pink would be more popular with the neighbours, or whether they would prefer to throw the brushes away and put the paint on with a pointed stick.

However, there are limits beyond which even the most enlightened regulator cannot go. Policy-relevant research has ethical implications that must be monitored. It is expensive and often funded by agencies that are constitutionally obliged to audit quality and value for money. A balance must be struck that reconciles the needs of managers with the demand for proper scrutiny and audit. Sadly, funding agencies, governments and universities are driven by a growing fear of litigation that undermines any attempt to promote a blame-free environment and obscures the distinction of regulation from management. This is true both in Mode 1 institutions (universities) and in Mode 2, and it is as disruptive for tenured teachers as for research contractors. Defensive auditing procedures reduce managerial 'wriggle-room', increase costs, destroy morale, delay completion, frustrate innovation and aggravate problems of staff retention, particularly among experienced staff and those with marketable skills. It does not have to be this way, but, in my experience, it usually is.


2. Knowledge System Theory

A knowledge system (k-system) is a formal map of the knowledge communities contributing to a research activity. The prefix k- indicates that these systems are not real things, but domains of study requiring expert knowledge. This can be a great liberator in integrative research. Many people who question the reality of classical systems participate freely when it is explained that k-systems are domains of expertise and interest.

Sometimes a k-system represents the knowledge domain of a research project, and we populate it by recruiting people to represent interested knowledge communities. They are subdivided into k-sub-systems, each of which represents a recognisable theme or programme of work. These k-sub-systems are often called workpackages. Some people serve on more than one workpackage; others specialize. Workpackages are sometimes subdivided thematically into k-sub-sub-systems or workgroups. Workgroups are typically small and rather coherent in belief and purpose. As one moves down the hierarchy, the level of dimensional coherence increases, but the probability associated with its beliefs is decreased by small-world bias.

Recall that knowledge is shared belief. This means that as one moves up the system hierarchy from the workgroup to the project, the number of people working together increases but the amount of knowledge decreases. K-systems are like the biblical Tower of Babel. The higher we climb, the harder it is to communicate, because dimensional coherence is reduced. The bigger the system, the less we know. As we move down the hierarchy, dimensional coherence (and knowledge) increases, but the probability that a policy implemented in respect of that knowledge will solve the problem at hand is reduced. Scientists can never construct a God-like omni-science but, with careful design and thought, can sometimes get a useful picture of the world. However, every time one adds a level to the integrative hierarchy, or increases the size of a k-system, one makes the work of integration harder.


Clever research design can help, but there are structural constraints that we ignore at our own risk. In practice, people have to accept some discipline to work in groups. Students and policy makers often speak confidently about the 'obvious' advantages of inter-disciplinary research and believe that high fliers are those who avoid getting corralled in narrow knowledge domains. These melting-pot ideas do not work very well in practice because knowledge communities are logically unconnected. You actually need distinctive perspectives to get a clear understanding of cultural ecodynamics on many spatial and temporal scales. If you really need to see across knowledge boundaries, you can do so, and what you see can change the world, but you do not fly; you build. Knowledge systems are founded on trust, mutual respect and common purpose. If they are well designed, people can move from one vantage point to another, speculating, theorizing, transmitting and receiving information. This does not mean that researchers must discard all the specialist knowledge that cannot be transmitted across community boundaries; it merely requires them to think carefully about the signal-to-noise ratio when communicating with colleagues. You must take possession of (and responsibility for) your own specialism and communicate sparingly, especially in contexts where dimensional incoherence is operationally significant.

In recent years it has become fashionable to assume that the most useful research into cultural ecodynamics is always integrative. However, both theory and practice suggest this is not so. Governance structures are k-systems too. As their spatial extent increases, their ability to valorise objects changes. They usually compensate for this by becoming sectorialised. As one moves up the hierarchy from local to supra-national, then, one encounters agencies that are constitutionally incapable of responding to integrative science. In practice, the deepest knowledge hierarchies are usually those constructed by small teams working to inform policy on a micro- or meso-scale. Local and regional policy makers are able to valorise a wider range of objects than their national or supra-national counterparts.


Position in the knowledge hierarchy should not be equated with status or merit, though there is little doubt that inexperienced researchers should spend most of their time working near the bottom. Younger researchers need to learn, and this is the level at which knowledge pictures are richest and information flows very easily. A person who has not accepted the discipline of becoming well-educated cannot contribute meaningfully to integration, and this is the best place to get that education. By well-educated I do not mean holding a lot of data, but having absorbed the core beliefs of a knowledge community and spent enough time applying that knowledge to have developed a mature intellectual position. Researchers need that education before they can serve effectively as ambassadors of a knowledge tradition. This is especially true if the work involves non-academic outreach. If you claim to know nothing, but seem to have opinions about everything, stakeholders are less likely to take you seriously.

The task is to construct a strategic alliance that capitalizes on perceived strengths without compromising the rational coherence of the whole. In practice, this means that people must be organized pragmatically and everyone must expect to pick up a reasonable share of the most difficult tasks. Rich information flows must be possible (and are needed) within a workpackage or workgroup, but relatively poor and infrequent information flows suffice across boundaries. What is information for one person may be dimensionally incoherent noise to another. An engineer designing a sewage treatment plant, for example, may be irritated by questions about the social construction of meaning. Restrict information flows so that the only messages passing are potentially interesting to the recipient. Failure to constrain information flows stifles innovation by forcing recipients to baulk unwanted communication. Stafford Beer's viable systems theory provides many insights into this process. Once information constraints are in place, people can focus on tasks involving easy communication. Endless culture shocks are exhausting, and the irritation they engender in a research team may de-stabilize the systems it is there to study.
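The routing rule is simple enough to sketch (the workgroup names and topics below are invented, and this is an illustration of the principle rather than Beer's model): a message is forwarded only to groups whose declared interests overlap its topics, so most cross-boundary traffic is filtered out before it becomes noise.

# Hedged sketch: restrict information flows to messages the recipient can valorise.
interests = {
    "hydrology_workgroup": {"runoff", "aquifer", "rainfall"},
    "governance_workgroup": {"stakeholders", "regulation", "rainfall"},
}

def route(message_topics, sender, recipients=interests):
    """Forward a message only to workgroups whose interests overlap its topics."""
    return [group for group, wanted in recipients.items()
            if group != sender and wanted & message_topics]

print(route({"aquifer", "abstraction licences"}, sender="governance_workgroup"))
# ['hydrology_workgroup'] - the governance group is spared the hydraulics
print(route({"social construction of meaning"}, sender="governance_workgroup"))
# [] - baulked: dimensionally incoherent noise for everyone else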


However, researchers must come together to plan and integrate. An effective way of managing this is to arrange small 'milestone meetings' for non-routine information flow. The heart of this process is an appreciative cycle in which deconstructionists gather and interpret data, constructionists develop theories and specify AWPPs, and reductionists convert AWPPs into policy options. In practice, of course, there are usually many appreciative cycles running in parallel. As soon as a policy option is chosen and implemented, it becomes necessary to monitor it for unforeseen consequences - the deconstructionists come back in again. If boundary conditions are violated, the constructionists must re-think and this, in turn, creates more work for the reductionists. The project itself may have a start date and an end date, but the management of our cultural and natural life-support systems demands a repeating cycle (sometimes called 'double-loop learning'). It is not a once-for-all task.

The art of managing integrative research consists of bringing all these cycles into phase for the milestone meetings. Focus the meeting on a product (an annual report, say, or a joint publication), not on a process. Keep the group small (ideally no more than seven people). Exclude the press, spectators, professional facilitators, academic figureheads and anyone else not actively involved in the work of knowledge creation.

Though one might think of innovation as a response to stress, people who feel their beliefs are threatened find themselves in a bind: under pressure to abandon the knowledge community that sustains them. They usually withdraw to their cultural high ground and lay down a defensive barrage of rhetoric and common sense. Innovation takes time and demands trust and common purpose. Arrange informal social events, but avoid extravagant hospitality. Facilitate breakout meetings, but never lose sight of your goal. Your aim is to help people feel safe enough to initiate focussed discourse across knowledge boundaries, not to demolish the boundaries themselves.
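The appreciative cycle itself can be caricatured in a few lines. The sketch below is a deliberately crude illustration (all the functions and numbers are invented placeholders, not Knowledge System Theory itself): deconstructionists monitor the arena, constructionists revise the AWPP only when boundary conditions fail, reductionists turn the AWPP into a policy option, and the loop repeats.

# Hedged sketch of the repeating appreciative cycle ('double-loop learning').
def deconstruct(arena):
    """Gather and interpret data; report whether the boundary condition is violated."""
    return {"indicator": arena["observations"],
            "violated": arena["observations"] > arena["threshold"]}

def construct(evidence, awpp):
    """Re-think the AWPP only when the boundary conditions fail."""
    return "revised AWPP" if evidence["violated"] else awpp

def reduce_to_policy(awpp):
    """Convert the AWPP into an implementable option."""
    return "policy option derived from " + awpp

awpp = "initial AWPP"
arena = {"observations": 0.4, "threshold": 0.7}
for cycle in range(3):
    evidence = deconstruct(arena)
    awpp = construct(evidence, awpp)
    print(cycle, awpp, "->", reduce_to_policy(awpp))
    arena["observations"] += 0.2  # the arena responds to executive action
# On the third pass the threshold is crossed, the AWPP is revised and more work
# flows back to the reductionists; the loop has no natural end date.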


3. Regulatory Noise Abatement

Funding agencies focus almost all their attention on products rather than processes: they monitor deliverables, budgets and deadlines, and their intervention in the process of research is usually directed towards these. However, from time to time policy norms change, and these epiphanies disrupt the research process by changing the values and norms to which researchers must respond.

A project's policy environment is created through the regulator - the outward-facing facility responsible for the project's relationship with academic peers and external stakeholders. The policy environment is often set by an explicit negotiation between external stakeholders and consortium members. External stakeholders sometimes need access to the whole team, but should only influence the project through the regulator. It is the regulator's task to provide advice and to authorise or veto decisions that have not been delegated under the terms of the contract. Any external agency that disrupts the research process may be held responsible for a failure to meet the terms of the contract arising through that action. Consequently, regulators and external stakeholders should not intervene in any legitimate managerial decision unless they are prepared to accept liability for that action under the contract. This is widely understood both by regulators and (especially) by managers, but there appears to be no formal model of the continuing appreciative process that monitors a project's impact on external stakeholders and that can distinguish management from regulation.

Every project is based on an effort of abstraction in which a substantive domain representing reality- and operational judgments is mapped on to a symbolic domain of words or numbers analogous to it in some significant (i.e. valuable) respect. This analogy, as we have seen, is a model. Information gathered from the socio-natural arena in which the team works is fed into the model. The rules of symbolic manipulation are used to negotiate executive actions. These rules always stand outside the model itself. Hard scientists often interpret this lozenge on my diagram (below) as containing rules of logic and mathematics, but the way these rules are applied is determined by policy. Information is transmitted from the model to the domain of executive action, which impacts on the arena, which responds by passing information back to the model. This feedback loop is important. The arena contains external stakeholders, and their response to executive action may be unexpected. The team delivers the required product by managing this recursive learning process effectively.

Policy consists of norms that bound the research domain by presenting some reality judgments as axioms. This is operationally necessary. It is certainly possible to debate the value-, reality- and operational judgments that underpin fishery policy, for example, but if the group breaks off halfway through to argue about the social construction of a kippered herring, the project may collapse. However, a model that becomes culturally embedded can cause problems as the older consensus creates unforeseen difficulties. It is equally necessary, therefore, to keep reality-, operational- and value judgments under continuous review. The engagement of stakeholders and pressure groups means that this process can easily become very noisy, and dealing with it can over-stretch the managerial competence of the team. Excessive noise must be filtered out.

Boundary conditions (pre-agreed indicators of policy compliance, system behaviour and system health) must be set. If these conditions are satisfied, the boundary judgments can be sustained. Checks and balances are needed to ensure boundary conditions are neither too lax nor too sensitive. If the boundary conditions are violated, the model has lost dimensional coherence. The regulator must then re-evaluate the project's conceptual basis and negotiate new beliefs. The group involved should be small and non-confrontational, but it must include external stakeholders as well as consortium members. The re-started appreciative process may simply negotiate new boundary conditions, but it could innovate, producing a completely new model. In a blame-free environment, killing a project is only justified in cases of incompetence or irretrievable collapse - complexity happens. Killing a model, however, should be encouraged and, if innovation is genuinely valued, openly rewarded.

Best scientific practice requires theories to be empirically testable, but a rift yawns between best and common practice. Far too many failed models are conserved, usually because regulators have invested too much in them and will not write that investment off. Macro-economic predictions, for example, commonly fail and senior politicians lose elections as a result, but the appreciative setting of politicians and economists is so rigid that the models have no explicit boundary conditions. These models are so deeply embedded in political culture as to appear immortal. Economic boom-bust cycles and the pendulum-swings of political fortunes are artefacts of democratic rhetoric that have trapped Western society in an erratic, short-period Phoenix Cycle. Almost every disagreement is reduced to a clash of ontologies. Even our policy-relevant scientists have more faith in their own theories than in the empirical evidence and defend them against all comers. We (the citizens) could be better served, but this would require experts to assume a sceptical position in respect of their own beliefs. That won't happen until we assert that an unshakeable faith in the rightness of one's own beliefs - the hallmark of political high-fliers in every age - is actually proof of incompetence.


4. The Funding Agency's Appreciative Setting

Sometimes integrative research innovates, but the funding agency itself is unreceptive to the new conceptual structure. You already have all the conceptual tools needed to understand this process, so I will use these pages to describe my own experience of it. I moved from treating integrative research as a process to studying integrative systems in their own right with EPPM (Environmental Perception and Policy Making), a tiny project belonging to the third European Framework Programme (FP3), with a budget of about 250,000 ECU. EPPM involved pure mathematicians, archaeologists, agronomists, policy makers, technologists, sociologists and anthropologists.

EPPM provided me with many personal epiphanies. I became convinced that logically irreconcilable knowledge communities could make contact without open conflict and still retain their epistemic distinctiveness. I also realised that some really important lessons are not learned by scholarship and mastery of technical methods, but by the form of cultural osmosis that is 'participant observation'. This process could not be directed, but it could be facilitated. The trick was to build trust and persuade everyone involved that s/he was not just a delegate, but a participant observer. You don't have to worry about 'bias' (whatever that is) in an environment where people care about each other and work in a reflexive way.

Judged by academic and managerial standards, EPPM was a great success, producing reports in the grey literature, several books and a raft of papers. Each of the regional settings we selected for study contained infrastructural programmes that ultimately failed to deliver. Relatively large investments had polarised local populations into 'authentic knowing' and 'authentic being' subsets. The former won differential access to resources, but the programmes disrupted the lives of the latter. These conflicts of interest could not be resolved. EPPM concluded that policy makers should expect central funding initiatives and infrastructural programmes to have damaging and unforeseen consequences in small rural communities.


I imagined our paymasters would be grateful to know that unique, local problems call for unique, local solutions, but I was wrong. Governance structures become increasingly sectorialised as one ascends the political hierarchy. This fixes their appreciative setting, making them unreceptive to trans-sectorial initiatives. This is why, despite a mountain of advice to the contrary, national and supra-national agencies continue to ignore small-world bias and propose regulatory, technical or infrastructural responses to complex policy problems. These are the only instruments they have, and they reject advice that suggests they should use them sparingly. If necessary, they will change research policy and pile project onto project until they get the advice they want.

Under FP3 and FP4, small integrative projects focussed on revising conceptual structures were still possible, though they had little impact on policy. FP5, however, embraced a new value judgment: the purpose of FP research was not to change the appreciative setting in Brussels, but to deliver solutions to technical problems. We were to find 'end-users' for research products (typically managers) and solve their problems. FP5 in its original formulation was based on flawed reality judgments. Research projects solve problems superbly well, but regional managers deal with processes far too dynamic to be reduced to a series of problem specifications and put in abeyance for three years while researchers deliberate. As the operations researcher Russell Ackoff explained, "managers do not solve problems, they manage messes".

At the end of FP5 a great consultation exercise was mobilised by DG Research, which expected to receive expressions of interest from commercial end-users. It did not. The majority of expressions came from researchers and research institutions wanting to study something. There was, however, one exception to this rule. The field of nanotechnology, it seems, is entering a focussed phase of R&D and has a lusty appetite for publicly funded research projects.

The architects of FP6 consolidated the Mode 2 revolution with new policy instruments - "Networks of Excellence" and "Integrated Projects". These are very large research initiatives (five to ten times as large as FP5 projects) geared towards a well-defined problem-set rather than a single deliverable. Clearly some will be persistent institutions focussed on 'innovation', which, in EU terms, means creating knowledge to sell, either as services (education or contract research) or through technological spin-off. Integrated Projects can even mount calls and fund projects.

The managerial problems of the Mode 2 revolution have now been resolved by bringing it into the establishment. The new instruments are spatially distributed QUANGOs - nominally independent, but centrally funded and regulated. After a flurry of multi-disciplinary initiatives, internal conflicts will force them either to purge dissenting views or tolerate schism, as the universities did before them.

These institutions with their new disciplinary structures will soon jostle the old ones in a zero-sum contest. The likely impact of this on universities is well understood. As the future chairperson of the European Competitiveness Council warned:

“It should be made crystal clear to universities that their role should be to help turn knowledge into revenue. This is what the funding is for.”

The institutions that thrive under this regime will doubtless market themselves as the multi-disciplinary hotbed of the knowledge revolution (at least until the next revolution). Soon the bean counters will impose draconian auditing procedures on them that stifle innovation and reduce epistemic diversity. Meanwhile there is knowledge to sell - not that old-fashioned rubbish about socio-natural complexity - authentic knowledge - Come on down!

Section 8: Deducing the Limits of Logic

Being is Believing

Patterns on maps or patterns on pages
driven by jostling patterns in time.
The hassle, the dancing, the joys and the rages
make places of spaces; fill papers with rhyme.

But this is not complex, schoolboys compute
Pythagorean doubling of harmonic strings,
ease lovely cadences out of a flute,
self-organised effortlessly as it sings.

Complexity’s not something scientists measure,
but the promise of something they do not yet know.
This field is complex - here lies hidden treasure!
Harrow the land so new knowledge can grow.

Boundaries may follow local terrains
tracing out ridges, or rivers or bights,
but most also represent property claims,
fencing out foreigners, safeguarding rights.

A traveller wandering round a home range
secure in her tenure, completely at rest
must tread rather softly in foreign domains,
allowed in on sufferance, always a guest.

To open a gate or to take down a wall
is to open your heart and your mind to surprise.
Be careful; your neighbours may well be appalled.
Outlandish flowers can dazzle their eyes.

They’ll tell you: “This field of knowledge is closed.
Emergent species officially banned!
Put that boundary back, for no seed that blows
may trouble the thickets that grow on our land.”

People communicate through shared belief
It isn’t in God, but in knowledge we trust.
“See the world our way or you’ll come to grief
humans are sociable or they are dust.”

Silly idealists grubbing out hedges:
“Open the landscape to sunshine and air!”
Plant palm trees on mountaintops, dahlias on ledges
to shrivel in storms of contempt and despair.

Post-modern homilies, preached by a youth,
reasoning unreason to soften your mind;
silverback bellowing permanent truth
Academe discoursing through its behind.

Those landscapes exist - like it or no
They’re real to the people upon them, who thrive
in big fields or small fields, with high hedge or low.
People and knowledge both staying alive.

Orientation

I dislike mind maps and chapter summaries, and I planned my book by writing the poem above and drafting the Thematic Glossary that forms its postscript. The poem summarises the ideas about the history, philosophy, time geography, management and regulation of knowledge. The glossary summarises the whole volume, including this section (a qualitative proof that human knowledge systems are necessarily unconnected) and the next, which suggests practical policy initiatives for sustaining epistemic diversity and with it the adaptive potential of society as a whole.

Section 8 may seem a little alarming at first glance, but demands no more mathematics than we learned in elementary school. To understand the distinction of α- from β-complexity is to have an intuitive appreciation of the relative strengths and weaknesses of hard and soft science methods.

Section 8 is the hinge that connects hard and soft system theory. It consolidates ground already covered by proving that the word ‘complexity’ applied to knowledge dynamics and mechanistic dynamics refers to slightly different things. I call them α- and β-complexity. In effect it proves that realism is logically incoherent because humans can innovate - can create new beliefs that were absolutely unpredictable and which may lead to a shift from normative to adaptive behaviour that changes the course of history.

It also marks a change in the tenor of my book. The first seven sections deal with reality- and operational judgments (describing the categories of cultural ecodynamics, their management and regulation). The last two sections deal more with value judgments and advocacy. Value-neutral approaches to applied socio-natural science do not exist.

Ross Ashby and Ludwig von Bertalanffy are two of the most interesting of the early systems thinkers to read. Both had a significant impact on humanistic thinking forty years ago. Indeed, if I were to recommend three early systems thinkers to review, it would be these two and the economist Kenneth Boulding. The diversity of views they represent is so great that any attempt to criticise system theory as a unitary movement must be disarmed.

Ashby, R (1957) An Introduction to Cybernetics. London: Chapman and Hall.

Bertalanffy, L von (1968) General System Theory: Foundations, Development, Applications. New York: George Braziller.

Boulding, K (1978) Ecodynamics: A New Theory of Societal Evolution. London: Sage.

Sir Charles Lyell influenced Darwin through his work Principles of Geology, first published in 1830-33. The ideas about Marx and Spencer were part of the Aquadapt project (www.aquadapt.net) and extended in a paper jointly written with Brian McIntosh and Paul Jeffrey on co-evolution:

Winder, N, B McIntosh and P Jeffrey (in press) The Origin, Diagnostic Attributes and Practical Application of Co-Evolutionary Theory. Ecological Economics.

This is an active research interest at the Santa Fe Institute for Complex Systems. It is a good idea to keep an eye on their page (www.santafe.edu) though some of the material is rather technical.

The idea of a wicked problem is due to Rittel and Webber. It sounds like (and is) a snappy sound-bite, but the paper itself has genuine value because it resonates with the experience of many practitioners and collects ideas together that would otherwise occupy the reader for months. The basic idea is that wicked problems are not merely ill-posed; they are so ill-structured we can find no universe of discourse in which an operationally useful model can be agreed without excluding some stakeholders:

Rittel, HWJ and MM Webber (1973) Dilemmas in a General Theory of Planning. Policy Sciences 4, pp. 155-169.

The idea of a complex adaptive system is as old as system theory. It is clearly present in Bertalanffy’s work, for example. Individual-based approaches are surprisingly old too; they are implied by Hägerstrand’s time geography, and computational models date from the early 70s - long before the current fad for agent-based simulation. However, some of the more recent work is very exciting. Railsback reviews some of it and his paper is a good place to start:

Railsback, SF (2001) Concepts from complex adaptive systems as a framework for individual-based modeling. Ecological Modelling 139, pp. 47-62.

Waddington’s COWDuNG reference is in:

Waddington, CH (1977) Tools for Thought. St Albans: Paladin.

The archaeological work I mention in this section is found in:

Kohler, T and G Gumerman (eds) (1999) Dynamics in Human and Primate Societies. Oxford: Oxford University Press.


1. Are Humans Machines?

Ross Ashby’s introduction to cybernetics defined a machine in terms of the transformation of an input into an output:

• The transformation must be single-valued; it must map every input onto a unique output.

• The transformation must be closed in the sense that the outputs form a (possibly complete) subset of the set of inputs.

This definition equates a machine with what mathematicians call a ‘dynamical system’, a mapping of a set onto itself. It requires a little clarification. The transformation of a number onto its square root is not single-valued, because every non-zero number has two square roots, one positive and the other negative. Moreover, a single-valued transformation can apply to a state of two or more values. The requirement of single-valuedness is that the output be uniquely determined by the transformation and the input. The dimensionality of the output is immaterial.

Ashby’s conception of closure, which I call α-closure, is a constraint that ensures the deducibility of the output. α-closure requires that there exists a set (the state space) whose elements are all the distinguishable states the machine can display. Its effect is to place a logical boundary around the state space so that the machine can never transform a state into a non-state. You may recall from section 2 that every state space contains a null state - a state of non-existence - that allows a process to annihilate or create a system. So the requirement of α-closure does not seem particularly demanding. Nonetheless it is indispensable in classical, hard system theory because without it we cannot predict system behaviour, even in principle. α-closure is a prerequisite of computability.

A second conception of system closure is associated, among other early system-thinkers, with the work of Ludwig von Bertalanffy and with almost all subsequent work on complex and chaotic dynamics. The idea is to embed an α-closed core in an arena capable of perturbing it. Boundary conditions are imposed that constrain the perturbations. The behaviour of the machine at the core is determined by its interaction with its arena mediated by those constraints. These β-open systems can be linked together. The synergy established between linked systems can generate self-organising behaviours in which pattern emerges spontaneously at a macro-level through micro-level interaction.

Connecting an α-closed core to its arena to produce a β-open system is such a powerful method that modellers often forget the empirical evidence that individual human beings can innovate, i.e. can create qualitatively new knowledge and new categories that underwrite qualitatively new types of behaviour. Many sociologists and anthropologists believe that, if a system is defined as a machine with an α-closed, single-valued transformation at its core, there is no such thing as a social system. The empirical evidence that supports this assertion is persuasive, but the clinching argument comes from pure mathematics.

If a machine could simulate human knowledge dynamics, we would be able to invoke a single-valued transformation of a set of knowledge states onto itself. Each machine would know about certain types (i.e. named sets) of object. Objects can be observable (dogs, blueness, bricks), abstract (numbers, sets, algebraic operators) or socially constructed (purposes, credit-ratings, value-judgments). Knowledge, by this conception, consists of what Ockham would call ‘plurality’ and cyberneticists call ‘variety’. It is the list of categories the machine can distinguish. The set formed by the union of that list of sets defines the machine’s knowledge state. From time to time the machine receives information that challenges its knowledge and must revise its beliefs to accommodate that information. Since the transformation is α-closed, each knowledge state must be an element of a (possibly infinite) set of all knowable sets which, by analogy with the concept of a state space, I will here call K, the knowledge space. Every innovation is represented by a single-valued transformation of one knowledge state (element of K) into another.
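To make the definition concrete, here is a minimal sketch in Python (the state labels and transition table are invented for illustration; nothing of the kind appears in Ashby’s book) of an α-closed, single-valued transformation. Every state maps to exactly one successor and every successor is itself a state, so the machine’s trajectory is deducible from its starting point:

STATE_SPACE = {"null", "a", "b", "c"}   # includes a null, non-existence state

TRANSFORM = {        # single-valued: each state has exactly one successor
    "null": "null",
    "a": "b",
    "b": "c",
    "c": "b",
}

# alpha-closure: every output is itself a member of the state space,
# so the machine can never transform a state into a non-state.
assert set(TRANSFORM) == STATE_SPACE
assert set(TRANSFORM.values()) <= STATE_SPACE

def trajectory(state, steps):
    """Iterate the transformation; the outcome is computable in principle."""
    history = [state]
    for _ in range(steps):
        state = TRANSFORM[state]
        history.append(state)
    return history

print(trajectory("a", 5))   # ['a', 'b', 'c', 'b', 'c', 'b']

The point of this essay is that human knowledge dynamics cannot, even in principle, be written in this form, because no bounded set of states can play the part of STATE_SPACE.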

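A companion sketch, equally hypothetical, shows the β-open arrangement: an α-closed core (here a logistic map of the unit interval onto itself) embedded in an arena that perturbs it, with a boundary condition that keeps the perturbation small and the coupled state inside the interval:

import math

def core(x, r=3.7):
    """alpha-closed core: a single-valued map of [0, 1] onto itself."""
    return r * x * (1.0 - x)

def arena(t, amplitude=0.05):
    """the arena perturbs the core; the bounded amplitude is the boundary condition."""
    return amplitude * math.sin(0.3 * t)

x = 0.2
for t in range(10):
    x = min(1.0, max(0.0, core(x) + arena(t)))   # clip to respect the boundary condition
    print(t, round(x, 3))

Linking several such cores together, each perturbing the others, is how the self-organising, emergent behaviours described above are usually produced in simulation; the burden of this essay is that innovation cannot be captured even by that richer construction.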

We can, if we wish, define regions of K containing elements (each representing a set) that are similar in some respect. Each of these regions is a subset of K that, coincidentally, corresponds to a unique element of K, a list of sets. We can use this self-referentiality to deduce some of K’s properties. For example, we can define a region NOT V in K that only contains elements representing sets of things that are not vegetables. A set of sets is not a set of vegetables, so NOT V must contain an element that represents itself. Conversely, the set of sets of vegetables does not contain an element that represents itself. Clearly, K must contain two types of region (subset):

• Regions containing elements that represent themselves (containers).

• Regions that do not contain representations of themselves (excluders).

Any region of K must either be an excluder or a container. So we ask an obvious question: is the region of K that contains all the excluders itself an excluder? If we answer ‘yes’, then, being the set of all excluders, it must contain itself - which contradicts the definition of an excluder. However, if we answer ‘no’, the set of excluders is a container and so must contain an element that represents itself - yet every element it contains is, by definition, an excluder. This is Russell’s Paradox. Bertrand Russell used it to prove that the set of all possible sets does not exist because any attempt to impose a boundary on it leads to logical inconsistency - it is α-open, i.e. logically unbounded.

This does not imply that innovative actors cannot be simulated. Indeed, it is almost certain that innovation can be simulated. However, it does mean that attempts to simulate human innovation will be of little predictive value. Moreover, the products of innovation cannot be predicted from the behaviour of an α-closed, single-valued transformation because social systems not only generate ‘plurality’ spontaneously, their capacity to do so is logically unbounded. The course of history will always be logically underdetermined (i.e. uncomputable) in the present.

Note that the derivation of Jonah’s law I offered earlier (Section 1, essay 7) was contingent on the proposition that humans could act to change the course of history. It showed the future was possibly unknowable in the present. Russell’s paradox takes that argument one stage further. It is always possible to innovate - responding to perceived threats or opportunities by creating new conceptual structures and qualitatively new ways of behaving. Knowledge dynamics are α-complex and, because of this, future knowledge configurations are necessarily uncomputable in the present - we cannot predict the course of history because knowledge dynamics are not mechanistic.
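For readers who like the argument compressed into symbols, the standard modern statement (a restatement of the paragraph above in conventional set notation, not Russell’s original wording) is:

\[ R = \{\, x \mid x \notin x \,\} \quad\Longrightarrow\quad R \in R \iff R \notin R. \]

The region of all excluders plays the part of R: whichever answer we give, we contradict ourselves, so no boundary can be drawn around the set of all knowable sets.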


2. Loose Ends

Essay 1 gave Jonah’s law the force of necessity. This essay is to do the same for the assertion, made in Section 5 Essay 4, that probability methods for use in cultural ecodynamics cannot be placed on a set-theoretical footing.

Suppose, as a working hypothesis, that a probabilistic universe is a set. The diagrams I used in Section 5 (and below) become Venn diagrams and each point within universe represents an infinitesimal atom of probability. Every atom of probability must be associated with a proposition (a possible world or ‘event’) so universe has become a set - the set of all existential propositions (that is, of propositions that represent events and are constrained to allow set theory to be applied to probability method). Thus the circle corresponding to Proposition 1 in my diagram bounds the set of propositions essentially indistinguishable from Proposition 1. They vary only in respect of accidental clauses. Every existential proposition in universe is predicated on the possible existence of named sets and can, therefore, be mapped onto one, and only one element of K (the set of all knowable sets).

It is possible, of course, that many propositions map onto the same element of K, but each unique proposition must map onto one and only one element. We say that there exists a mathematical function (call it ƒ) whose domain of operation is the whole of universe and whose range is part or all of K. We can think of ƒ as a filter or lens that produces an image of universe in K. Represent that image as ƒ(universe). Now this is perfectly reasonable, even though K is unbounded (and hence not a set) provided ƒ(universe), the image of universe in K, is itself a set with a definite boundary. For this to be so ƒ(universe) must be strictly smaller than K in the sense that some elements of K are excluded from the image, ƒ(universe).

However, this is not the case here. Every element of K represents a number of possible sets and those sets can be referred to in an existential proposition (the assertion that they exist, for example). So the image of universe in K is the whole of K whence we obtain the lethal symmetry:

ƒ(universe) = K

So universe is at least as large, and possibly larger (i.e. contains more elements) than K, the set of all knowable sets. But K, as we have seen, is logically unbounded - it is not a set. So universe, the set of all existential propositions, must be unbounded too. But we have to bound universe in order to place the calculus of probabilities onto a set-theoretical basis. Generic probabilistic universes cannot be sets by Russell’s paradox. This doesn’t stop you using set theory to derive probabilistic methods in games of chance where finite populations of events can easily be defined, but it does stop you placing more general probabilistic methods on a set-theoretical footing.
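In shorthand, the whole argument is: if universe were a set and ƒ the mapping just described, then

\[ ƒ(\textit{universe}) = K \quad\text{and}\quad |K| = |ƒ(\textit{universe})| \le |\textit{universe}|, \]

so universe would have to be at least as large as K; and since K is not a set, universe cannot be one either.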


3. Marx and Spencer

It is hard to take an interest in politics or history without speculating about what the future holds. No serious scientist imagines every detail of history can be predicted - Newtonian determinism is dead - but most people accept that the course of history in its broadest outlines can be understood, predicted and managed. You cannot predict every local detail and accident of history, of course, but it should be possible to characterise broad trends and cycles and understand obvious pitfalls. I did precisely that in the introduction to this book when I indicated that renaissance events leading to multiple epiphanies often followed demographic catastrophes.

Such regularities often suggest policy instruments. If, for example, I were willing to endure the consequences, I might try to engineer a renaissance by precipitating a demographic catastrophe. I hope I have made it clear that the price of this is too high to contemplate, but there are plenty in the world who disagree with me and point to the work of Karl Marx to justify their actions.

Marx believed social change was driven by competition to control the means and modes of production. This led to the commoditisation of human labour, slavery, the development of class structures, the emergence of capitalism and, eventually, to proletarian revolution. Marxian theory postulated a limited type of Phoenix Cycle in which competition between individuals and groups inevitably led to revolution. Later commentators called this model dialectical materialism.

The Russian Revolution and the First World War meant that social theories really mattered and political pressures were imposed on evolutionary thinkers. Marx’s theories originally left a little wriggle-room for choice and accidents of history, but his later writings and the work of exegetes gradually simplified the model. The contingencies and qualifications were swept away and political Marxism became deterministic and tramlined. Marxists felt revolution was inevitable and some advocated action to stimulate that revolution as a serious policy option.

When people move from using the past to predict the future to using the study of history to drive the course of history they slide along the spectrum that separates scepticism from realism. Twentieth century Marxists began to act as if Marx’s theories were self-evident, universal truths. Dialectical materialism became the dogmatic basis of the Russian and Far-Eastern Communist Revolutions. Like all dogmatic movements it ignored the effects of small world bias and was eventually forced to suppress dissent, both manifest and fictive. The Stalinist purges, the Great Leap Forward and the atrocities of the Khmer Rouge all grew from political action based on deterministic theories of social change. Those who were murdered in these purges were so-called counter-revolutionaries who questioned (or were accused of questioning) Marxist theory by trying to frustrate unstoppable processes. It was rather like throwing people in jail for defying the law of gravity. This is why I think the price of using revolution as a policy tool is too high and the reification of knowledge too dangerous to countenance.

Of course, there were rival sociological theories throughout the nineteenth and twentieth centuries. Those of Herbert Spencer, for example, have been at least as influential as Marx’s. Spencer was an evolutionist and one of the founders of sociology. He adopted the Darwinian model of evolution by natural selection but argued that the future of history was fixed. His model blended perfectly with nineteenth century imperialism. Spencer saw a gradual progression from more or less a-social, primitive humans, through groups of increasing size and heterogeneity to a fully developed modern society. Not only did humans evolve, societies evolved too and warfare was the principal driver. In a book titled Principles of Sociology he explained his ideas thus:

“…in the struggle for existence among societies, the survival of the fittest is the survival of those in which the power of military cooperation is greatest, and military cooperation is that primary kind of cooperation which prepares the way for other kinds. So that this formation of larger societies by the union of smaller ones


in war, and this destruction or absorption of smaller ununited societies by the united, larger ones, is an inevitable process through which the varieties of men most adapted for social life supplant the less adapted varieties”

Spencer was working with a short chronology for human life and had no way of knowing that sedentism was a relatively recent development in human societies and imperialism even newer. Little was then known about the social lives of great apes and Spencer had no reason to assume that human ancestors were obligate social animals. Had he known, he would have had to explain over twenty millennia of apparent stasis and take more account of chance and contingency, but he didn’t. Spencer’s generalisation from the experience of practising sociology in an imperial age led to the view that aggregation, internal differentiation and natural selection for sociological ‘fitness’ were inevitable. Natural selection was the engine driving an evolutionary machine that built empires. The replacement of the weak by the strong and of gibbering asocial apes by well-bred gentlemen seemed to him a corollary of natural laws as inescapable as the law of gravity.

Although Spencer himself embraced the Darwinian model of evolution by natural selection, his ideas about the inevitability of progress have been pervasive. Whenever you see a science fiction film in which some creature spontaneously ‘evolves’ into a higher life-form, you encounter the cultural legacy of Herbert Spencer. Spencerian theory has been tremendously influential, especially in the Germanic countries, and here again the effects of small world bias have been ignored. It was Spencer, not Darwin, who coined the phrase Survival of the Fittest. Spencerian theory provides the perfect justification for cultural imperialism. In North America, where positivist ideas about the value-neutrality of science are influential, Spencerian theory has been reified as universal, self-evident truth. This dogmatism was manifest in the imprisonment and oppression of leftists, not just revolutionaries, but Trades Unionists and other un-American dissenters who disputed Spencerian ideas about the inevitability of progress and Imperial expansion.

However, we should not demonise North Americans. The ability of dogmatic religious sects to manipulate America’s supposedly secular educational system, its institutionalised religious extremism, its tendency to demonise the advocates of alternative political models and the level of institutional racism manifest until the later 1960s have all been mitigated by a legal system that has sustained civil and political diversity and emancipated many unacknowledged stakeholders. To see what might have been had Americans been less committed to personal liberty we need look no further than the history of European fascism. Spencer’s ideas were adopted enthusiastically by the Third Reich and used to justify its policy of genocide in the mid twentieth century, for example. Spencer’s deterministic theories have visited great evil on the world in regions as far apart as South Africa, Australasia and North America. Caucasian racism is often grounded in Spencerian theory.

The evidence of history is sufficiently clear to persuade any rational person that realistic interpretations of predictive theories should be avoided on pragmatic and ethical grounds. It hardly matters whether the beliefs being reified are those of western sociologists, Middle-Eastern prophets or Far-Eastern revolutionaries - epistemic realism is always bad news for someone. However, societies keep falling into the scholastic trap - ignoring small world bias and adopting ever more extreme measures to protect deterministic theories from empirical refutation. In generation after generation of politicians we find the same mistakes being made and new demographic catastrophes piled onto the ruins of old ones - we learn from history that we do not learn from history. It is very improbable that the individuals involved in the reification process are all irrational, perverse or wicked. Presumably the only conscious decision being made is that direct action is the only way to right wrongs and solve problems. While sceptics are writing books about the social construction of knowledge and advising caution, the men and women of action are taking action with the tacit consent of the population at large. Society drifts into a more realistic


configuration as government and opposition become part of our cultural landscape. Gradually, habit becomes culturally embedded and quite bizarre behaviours come to seem quite ‘normal’. Indeed, one of the great ironies of political rhetoric is that anti-realists (sceptics) who question the ontological stability of conventional beliefs are commonly dismissed as ‘unrealistic’ (a word that implies we are naïve) or ‘idealistic’ (a word that describes one who sets ideas above evidence). Yet it is the realists, not the sceptics who are so cock-sure of their own beliefs they give them priority over experience. It is the realists who predict causal relationships and then get bounced into punitive action to sustain those beliefs in the face of empirical refutation. Dogmatic realism is the bane of all civilised societies. It matters not whether you are rightist, leftist, religious or secular, hawk or dove. If you are ethically inclusive (i.e. sensitive to the needs of others, even those with whom you disagree) and pragmatic there will be room for negotiation and co-existence. It is not naïve to learn the lessons of history, to monitor your own beliefs for coherence and abandon those that do not work.

4. Realism and Russell’s Paradox

In general, politicians are temperamentally wedded to the idea that society can be driven like an automobile and all the ills of society derive from the fact that the person at the wheel doesn’t really understand its modus operandi. A politician who publicly admits that s/he really does not know what the impact of a given policy will be is unlikely to win a popular vote. But any other position is rationally untenable. To understand and control society it is necessary to be able to predict how people will respond to certain actions. The ability to predict presupposes a degree of determinism (by Jonah’s law) which in turn suggests that people cannot regulate their own responses. The assumption of determinism, realistically interpreted, leads to cultural imperialism and oppression. People are commonly ridiculed, excluded, punished or committed to mental hospitals for questioning what C.H. Waddington used to call COWDuNG (the COnventional Wisdom of the DomiNant Group).

Politicians (conservative or revolutionary) usually start their careers expecting to improve the lives of their constituents. Their education may have been unorthodox, but they must be aware of (conservative or revolutionary) regimes a little like theirs that went badly wrong. They are motivated by a greater emergency - either genuine injustice or a threat to the values they hold most dear. It is very easy to persuade yourself that wicked acts are perpetrated by wicked people. We are not wicked, are we? It will not happen to us. Yet Russell’s paradox suggests it can happen to anybody. No matter how kind and good we may be, no matter how robust our knowledge or strong the consensus, knowledge dynamics remain α-open. Sooner or later someone will see the world differently and contend with us. If we have made the mistake of reifying our beliefs, conflicts over ontology are almost inevitable. As those reified beliefs become more hotly contested, knowledge communities retreat to their moral high ground


and conflicts about ontology flare up as a full-blown clash of ideologies. Revolutionaries and conservatives have more in common with each other than either would like to admit. Both are realists, acting as if they know the truth. Both claim their own brand of COWDuNG is superior. They are dogmatic idealists who have fallen into the scholastic trap and want to drag the rest of society after them. Recall that each trap is a function of the nature of those it traps. To understand the scholastic trap is to understand human nature. As humans change the way they exploit natural resources and interact with each other, these behaviours become culturally embedded norms and their ability to sustain them is perceived as a basic human need. These ‘needs’ often bear no proximate relationship to physical well-being and people who cannot satisfy them can still breed and rear young. Nonetheless, the idea that they are genuine needs is reinforced by an emerging consensus. As people adopt these norms, the communities of which they are members undergo a political and economic reorganisation that gives the process inertia. We may not need estate agencies, gas-guzzling cars, plastic lemon-squeezers or electronic foot-baths, but once the demand for them is locked in we cannot dispense with them without causing serious hardship to those engaged in their production and distribution and perceived hardship to consumers. The process of cultural lock-in acts as a sociocultural ratchet that drives irreversible, and often unsustainable behaviour in cultural ecodynamics. The conspicuous consumption and highly developed work ethic we see in some developed countries cannot be explained purely in terms of proximate natural selection, or even the understandable wish for a comfortable life; there are cultural processes at work here. We cannot abolish those processes without abolishing human nature. Indeed, how would societies function without them? Our ability to co-operate at a distance is underpinned by cultural lock-in. It hardly matters whether we are hunters


trying to bring down a mammoth, farmers trying to bring in the harvest or researchers trying to solve a problem of applied science. We need the sense of trust that comes from knowing we share values and beliefs with others. That sense of trust is built through discourse and the effect of discourse is to reify beliefs, converting them into culturally embedded knowledge. A simple way to evade the scholastic trap is to understand that knowledge systems are α-open. All our beliefs, even the ones so deeply embedded we are unaware of them, are open to question. The use of violence to impose those beliefs on others is never justified. The only just war is that which defends the rights of dissenters and sustains intellectual and behavioural diversity. The only grounds for criticising the beliefs held by another are that they are logically incoherent - either the axioms themselves cannot be reconciled with each other or empirical observations cannot be reconciled to boundary conditions. These are the fundamental principles of modernism and we abandon them at our peril. Although I fear the rise of ethnic and ideological intolerance, these conspicuous manifestations of dogmatism are widely understood and met by a vocal popular opposition. My greatest fear is post-modern complacency - the sense that these things belong to ancient history and cannot happen again. Scandinavians, for example, have a touching faith in the objectivity of empirical data and allow huge databases to be collected by their governments. I have worked with these data, which contain the life histories, educational achievements, income, addresses, births, marriages, divorces and domiciliary location of millions of citizens over decades. It would be the work of a few hours to link them to medical records, bank accounts or psychometric tests undertaken as part of military service or DNA data from medical records. If the pendulum of political fashions were to swing a little, this naïve realism could easily lead to the establishment of eugenics programmes even more


enthusiastic than those which, up to the 60s, saw women sterilised for not being very intelligent or looking like a gypsy.

All over Europe and North America government scrutiny of ordinary citizens is increasing. It seems we have to be protected from wicked people who use the internet to abuse children or threaten the peace. We must have cameras in every shopping mall to protect innocents from crime. It is hard to oppose such common sense arguments except by noting that, in an α-open knowledge domain, new moralities and new ‘crimes’ can and will be invented. The so-called ‘war against terrorism’, for example, has seen people imprisoned without trial on suspicion of sympathising with terrorist organisations. At what point does protection from crime become a ratchet that drives state-sponsored oppression?

This is an ontological problem and, as such, has no universal solution. Moreover, we cannot arrest the dynamic process that leads us again and again to the mouth of the scholastic trap. Creating (and reifying) knowledge is what people do. But we can act now to establish tolerant principles that will constrain authoritarian politicians in the future and use them to set explicit boundary conditions on socio-political theories. The case for doing this is not merely libertarian: if we fail to do so, we may face an ecological catastrophe later. One of the reasons epiphanies often follow demographic catastrophes is that a society whose institutions are in disarray is less able to suppress innovation. Epistemic diversity is a key determinant of a society’s adaptive potential. Minority knowledge communities provide refugia within which unfashionable ideas can be sustained. If we want to break this Phoenix Cycle, therefore, we need institutions that can innovate and accommodate epistemic diversity even when consensus levels are high.


5. Philosophical Subtleties?

Every perceived object - be it a system, attribute, process or epiphany - is a mental construct. We have no direct knowledge of external reality. As I write those words my heart sinks a little. I can almost see the battle-lines being drawn for another academic bloodletting. Ritual debates about whether electrons existed before we ran critical experiments bore the pants off me. I do not deny reality or universal truth; I merely assert that they are unknowable.

Science is a social activity - the rational pursuit of knowledge. Trying to appreciate the reality beyond sense experience is a personal quest and the insights we win thereby are much harder to communicate. Scientific method bears on an explicit, communicable consensus. We must impose structure on the world and negotiate a universe of discourse to valorise these objects within acceptable levels of uncertainty. Only then can we apply the methods of symbolic reasoning to test those beliefs and work out their corollaries. Each universe corresponds to a local knowledge system. There exists no finite, universal model from which all knowledge can be derived. Any attempt to invoke one leads to paradox and incoherence.

Kurt Gödel, Bertrand Russell, Alonzo Church and Alan Turing deduced the limits of logic independently in the early twentieth century. Shortly afterwards, quantum theory was developed into its contemporary form, the topologist LEJ Brouwer began experimenting with his intuitionist approach to mathematics and humanists began writing about the unpredictability of historical systems. Dozens of scientists, mathematicians and humanists - all dealing with logical unconnectedness and drawing similar conclusions - what a remarkable coincidence! We cannot even present this as a uniquely twentieth century discovery. Philosophers in the twelfth century and again in the later fourteenth also explored the limits of deducibility. One cluster of coincidences in the history of western thought is remarkable, but three similar clusters centuries apart


stretch credulity beyond breaking point. These are the products of societies at a similar point in the Phoenix Cycle, each coming out of a demographic catastrophe and preparing to abandon old certainties.

Enlightenment thinkers like Hume laid down a sharp distinction of physics from metaphysics as a response to the horror of religious war and the use of biblical hermeneutics to justify such warfare. Whether they were universally right or wrong is immaterial. This was a laudable ethical judgment - a decision to avoid theology and politics. However, the period of stability that followed brought an industrial revolution with strong incentives for scientists to dabble. The scientific colonisation of politically sensitive problem-domains accelerated with the positivist agenda of the nineteenth century. Sociology, economics and anthropology were established and biologists abandoned taxonomy for evolution. However, scientists did not reject the Enlightenment distinction of ontological science from metaphysics. They subverted it to justify their claim to be the only group able to guide humanity through these ethical minefields. This process accelerated further when the Mode 2 revolution was initiated after WWII and the positivist polemic again came to the fore.

The distinction of science from metaphysics, originally quite innocuous, has become a scandalous anachronism. Even in the early twentieth century, as scientists were debating the meaning of relativity and quantum theory, the metaphysical nature of these debates - dealing, as they do, with the reality beyond valorisable experience - was politely set aside. Scientists and technocrats have become the new dogmatic authority supremely qualified to pronounce on complex metaphysical issues by dint of their objective, value-free methods. That its critics should have started calling it ‘modernism’ adds insult to injury - it is the quintessence of nineteenth century social engineering.

Scientists are no longer passive observers dissecting and experimenting while history unfolds. For two centuries now we have been active participants. Scientists qua scientists are regularly called on to help manage conflicts, assess risks


and create incentives. Thousands of hacks have developed heuristics and new models of the scientific process that expedite the work, not just in theory, but in practice. Science students in schools and universities are taught little about the boundaries beyond which hard science provides no substantial leverage. Engineers are not even given a working knowledge of the methods we need to operate beyond those boundaries. To deny them this knowledge is to send them out into the twenty-first century with conceptual models that were already beginning to look flaky at the end of the nineteenth and became quite untenable seventy years ago. This is particularly damaging for those who will engage in applied research, many of whom spend years exploring well-trodden dead ends and un-learning what they have been taught.

They could be better prepared for contact with wicked problems whose solutions are never objectively true or false, but subjectively better or worse. In wicked problem-domains there are no stopping rules, no objective criteria for satisfactory solution and every case study is unique. Wicked problems are so ill-structured they cannot be expressed in the older ontology of systems, states and processes. Moreover, the boundary judgments that delimit systems coincidentally determine who are considered legitimate stakeholders - often with disastrous consequences for those excluded. This is why putting extra resources into research, or investing heavily in conservation and economic growth often has undesirable consequences, especially if the initiative takes no account of appreciative settings and local circumstances. The commonest effect is to heighten conflict between political insiders (the authentic knowing group with the education and contacts to capture inward investment) and political outsiders (the authentic being group whose lives are disrupted in the process). Inward investment may even weaken local economies by increasing competition and driving people out of business, further aggravating social exclusion.


Do not imagine, however, that the wisest course of action is to tip-toe quietly away. A laissez-faire approach can be just as damaging as heavy-handed intervention, especially in environmentally or sociologically sensitive regions. A wise policy-maker, therefore, is one who takes the time to understand local circumstances. Investing in hotels, parks and businesses may be part of the overall plan, but these products will be valueless without matching investment in a process of sustained trust-building. There is no quick-fix. Given our current state of understanding, the positivist idea that scientists deal with objective facts is at best naïve and at worst culpably negligent. When your research projects have damaging consequences, local stakeholders will not say: “it seems your theory has been disproved, but we have read Karl Popper and understand how science works”, they will play merry hell with you for a meddlesome fool - rightly so, in my opinion, because the grey literature of Mode 2 research is stuffed with case studies explaining why you should expect this. The heuristics and conceptual structures I have described in this book - ideas about the spatio-temporal constraints on valorisation, about ontologies as mental constructs and the irreducible uncertainty of all observation; even the observation of a swan or a bucket of frogs as such - are not motivated by an obsession with philosophical niceties. They are more coherent, work better in practice and are more consistent with the results of empirical research than the covert metaphysics of the positivist agenda.


Hard science methods work well when we can impose an appropriate ontology on our experiences and formulate an AWPP. In practice, however, socio-natural scientists deal with α-complex domains so ill-structured that this is usually impossible. Once again I must make myself clear.

I am not saying hard science is flawed because categories and standards of falsification are socially constructed. Anyone who has tried to apply it must know this. I am saying that it assumes ontological problems can be addressed without actually changing the problem-domain itself. This is a perfectly reasonable simplifying assumption in classical physics, chemistry, geology and much of biology, but it doesn’t work in cultural ecodynamics.

Socio-natural systems are so ill-structured that simply imposing a formal ontology on them is a form of intervention that can change the problem. This is particularly so in contexts where new models and new patterns of behaviour are desperately needed. Those needs impose stresses on stakeholders which force beleaguered knowledge communities to cohere. The strong cultural bonds created by this cohesion fix people’s appreciative settings, making them unreceptive to new ideas. They cannot innovate until they feel safe enough to experiment with new ideas and are unlikely to feel safe until they have innovated.

6. Integration and Advocacy

If an applied scientist drives the agenda by claiming that one knowledge system is demonstrably stronger and more objective than another, stakeholders may become even more defensive and the problem of facilitating change is exacerbated. This bind not only prevents innovation and problem resolution, it increases the likelihood that well-intentioned intervention will make things worse. This is the wicked feedback mechanism that has defeated so many socio-natural scientists, policy makers and consultants. So how are we to handle it? We should certainly be wary of imposing a simple, overarching ontology on communities in conflict. Rather, we must accept their inability to agree as part of the social milieu and encourage them to define issues of common interest on which they can co-operate as individuals without compromising the core beliefs of their respective communities. Whenever minor conflicts are resolved, stresses are relieved and they begin to build trust across the divide. Eventually, they may even feel safe enough to revise core beliefs, reducing the level of conflict further.

Knowledge communities are characterised by a shared conceptual model. They recognise named things and categories of things and the states and the processes that describe the behaviour of those things. These models are informed by the values and aspirations of community members and often have characteristic spatio-temporal scales. However, they are invariably launched into the public domain as reality judgments; i.e. as statements of fact.

These ontological assertions are defensive positions and individual representatives of those communities often use them as such. However, one can sometimes coax them into a less truculent stance by exploring the values that lie behind them. Often these values are culturally embedded so one has to approach them by a process of reverse-engineering. A good strategy is to use the model to identify the stakeholders in a problem-domain. The results are always interesting and often surprising, even to representative members of the community that owns the model. However, they usually feel good about the tacit values and beliefs they discover and this is helpful.

Shifting attention from conflicts about ontology to discussion of values facilitates integration because knowledge communities in open, public debate seldom deny the stakeholding of their opponents. They may bicker endlessly about questions of morality, equity, legality or who started the fight, but the stakeholding exists de facto as an artefact of open conflict. When they begin to compare values, they discover, sometimes with a little innovation-jolt, that the arguments about ontology can be re-expressed as a debate about the legitimacy of a stakeholder group. Draining a swamp cannot become a source of conflict until someone acknowledges the frogs as legitimate stakeholders and someone else disagrees.

Lest anyone imagine the idea of non-human stakeholders a joke, recall that a century or so ago the idea that women had a legitimate stakeholding in governance or higher education was equally risible. Every generation has its unacknowledged stakeholders. The emancipation of slaves, the abolition of public executions, bear-baiting and cockfighting, legislation to protect health and safety in the workplace or protect natural resources, controls on the advertising and distribution of harmful recreational drugs like alcohol and tobacco - all hinged on conceptual adjustments that changed our collective understanding of who or what must be considered a legitimate stakeholder. Each of these reforms was opposed on the grounds that the new ontology flew in the face of common sense and denied the natural order of things. What is common sense to one person, to another may seem more like institutional bigotry.

Consider, for example, the post-modern anti-science backlash. If one considers the debate at the community level it is an absurd pantomime in which the spokesmen of both communities hurl abuse at each other from the battlements of their cardboard castles. In section 1 of this book I showed that relativism - the belief that there are no universal truths - was logically incoherent. I have shown in this section that extreme forms of reductionism - the belief that there is a single, over-arching set from which all knowledge can be derived as subsets - are similarly incoherent. Neither relativism nor extreme reductionism has much to recommend it as a rational philosophy and a modernist can cheerfully call down a plague on both their houses. However, arguments that appear, at a community level, to be about ontology and truth present very differently in case studies. A mixed group of reductionists and relativists who agree to focus on a problem of common interest almost invariably differentiates to become the advocates of different stakeholders.


The relativists argue for a widening of the stakeholder group, for social inclusion, democratisation and empowerment. Not bad values, it seems to me. The reductionists, on the other hand, have a difficult job to do meeting the needs of the stakeholders they already know about without revising problem specifications every five minutes. Their wish - to serve the needs of their clients and get down to the serious business of solving problems - also seems honourable and worthwhile. However, the delegates of one community want to move boundaries to facilitate social inclusion while the other cannot do its job at all if the boundaries keep moving. Little surprise, then, that they disagree from time to time. The points at issue between these communities can not be resolved by arbitrating about which community has the better philosophy, but must be judged on a case-by-case basis among individuals and small groups. This is why, despite all the manifest weaknesses of their philosophical models, both communities persist and retain their distinctiveness. One cannot engineer unanimity in a world where conflicts of interest and belief are unavoidable. Someone has to advocate consensus while someone else monitors boundary conditions and defends the interests of unacknowledged stakeholders. That is how integrative research works. It is wholly proper that the advocates should not also be arbiters in these disputes. There must be a regulator to guide the process (subject to proper ethical scrutiny and a system of checks and balances). However, there is, in my view, something rotten in any system of governance that reifies the beliefs of those arbiters while licensing them to work in universes where the concept of objective reality cannot be valorised. Policy makers who cannot abandon nineteenth century certainties are a public menace.


Section 9: Appreciating Cultural Ecodynamics

Your children are not your children.
They are the sons and daughters of Life’s longing for itself.
They came through you but not from you
And though they are with you yet they belong not to you.
You may give them your love but not your thoughts,
For they have their own thoughts.
You may house their bodies but not their souls,
For their souls dwell in the house of tomorrow,
which you cannot visit, not even in your dreams.
Kahlil Gibran

The sanest like the maddest of us cling like spiders to a self-spun web, obscurely moored in vacancy and fiercely shaken by the winds of change. Yet this frail web, through which many of us see only the void, is the one enduring artefact, the one authentic signature of humankind, and its weaving is our prime responsibility.
Geoffrey Vickers

A good teacher should understand and impress on his students the view that no problem whatever is completely exhausted. There always remains something to do; with sufficient study and penetration we could improve our solution, and, in any case, we can always improve our understanding of the problem.
George Polya

There are no ideas in science so well proven that they cannot be challenged. There are none so certain that they need not be re-examined occasionally.
R E Blackwelder

Orientation

My interest in cultural ecodynamics is grounded in biology and archaeology - I am (or was) a bioarchaeologist - I hardly ever get a chance to work in my own field these days. Archaeologists study human-environment interaction, sometimes over extremely large areas and deep timeframes. This is a fascinating domain of research, but much of the knowledge obtained this way is not germane to policy-relevant science. To integrate my own knowledge with that of colleagues interested in the management of contemporary life-support systems, I must paint with a broad brush on a large canvas and fill in some of the gaps with information from other sources, including my own experience. The resulting synthesis, viewed from an archaeological perspective, is deeply unsatisfying - it lacks depth and ignores fine-grained spatio-temporal pattern. Nevertheless, it is a defensible coarse-grained model of the time geography of Europe and the Middle East over the last 10,000 years. It can be made empirically testable by setting boundary conditions on its categories and trying to valorise them in other regions. There are cradles of civilisation in the Far East, the Americas and Southern Africa and ample evidence of domestication without incipient civilisation in many other places.

My choice of material for this section has been based on explicit value judgments. I take it as axiomatic that the purposes of policy-relevant science are: to resolve conflict and inequity, to obviate poverty and social exclusion, to find pathways to sustainability, to facilitate innovation, to create an educational system that turns students into effective and willing stakeholders and to develop a dimensionally coherent model of cultural ecodynamics that informs policy on many spatio-temporal scales. Since those are my values (and, I believe, the values of the funding agencies that employ me) I have made reality and operational judgments that are consistent with the empirical evidence and can be used to promote them.


Two of the authors quoted at the beginning of this section have been encountered before. The Vickers quote is from The Anatomy of Judgment. The third author is George Polya, a mathematician who made a special study of heuristics and wrote what many consider the definitive textbook:

Polya, G (1945) How to Solve It. Princeton: Princeton University Press.

RE Blackwelder was one of the biologists I read as a schoolboy. Jean Henri Fabre, Konrad Lorenz, Alfred Sherwood Romer, Edwin Stephen Goodrich and the irascible Alexander Francis Magri McMahon were others. My understanding of systematic method is broader now than it was then, but I remain grateful for what they taught me. This quotation is from:

Blackwelder, RE (1966) ‘Phyletic and Phenetic versus Omnispective Classification’ in Phenetic and Phylogenetic Classification, pp. 17-28. London: The Systematics Association (Publication No. 6).

Many of the ideas about evolution come from my own undergraduate studies. Normally, one doesn’t write undergraduate notes into a book, but these have slipped from the undergraduate curriculum over the last thirty years. My approach is rather old-fashioned. I learned it from a botanist whose proud boast was that, through him, only four academic generations separated his students from Thomas Henry Huxley. The Darwin-Huxley synthesis has nothing to say about whether a chicken is an egg’s way of making more eggs; it simply requires that there be a population under selective stress. If there’s no stress, there is no evolution. This is a useful idea because it leads immediately to policy norms of direct relevance to the quest for sustainability in cultural ecodynamics. We need to avoid stress. T.H. Huxley’s mature position on evolution (actually much richer than the Darwin-Huxley synthesis) is found in his article Evolution in the ninth edition of Encyclopaedia Britannica.


Incorporating evolution into taxonomy led to the New Systematics. Julian Huxley was T.H.’s grandson:

Huxley, J (1940) ‘Towards the New Systematics’ in Huxley, J (ed.) The New Systematics. London: The Systematics Association, pp. 1-46.

The concept of a prudent predator is found in:

Slobodkin, LB (1961) Growth and Regulation of Animal Populations. New York: Holt, Rinehart and Winston.

Now to two books that represent the extreme points in the pendulum-swing between realism and scepticism of the last fifty years. Liberated from their own rhetoric, however, both have made a seminal contribution to socio-natural science. Quantitative approaches to biological systematics were trumpeted by:

Sokal, RR and PHA Sneath (1963) Principles of Numerical Taxonomy. San Francisco: Freeman.

The mathematical methods themselves are very useful. However, their justification for using them (that they are stable, objective and repeatable) was plain silly - a symptom of the drooling Physics Envy that dragged post-war science back into the mire of positivism. This book presaged the death of many university courses in systematic biology. An important post-modern treatment of co-evolution is found in:

Norgaard, R (1994) Development Betrayed: the end of progress and a coevolutionary revisioning of the future. London: Routledge.

Norgaard confounds Darwinian evolution with Spencerian adaptation and mechanistic types of co-dynamic change. His attacks on ‘modernity’ are classical straw-man arguments, every bit as silly as Sokal and Sneath’s ‘objectivity’. But his book is very stimulating and thoughtful and has also opened up many avenues for research.

The archaeological evidence that relates to my model of cultural evolution can be gleaned from many old textbooks, case studies and papers. Actually, you have to dig rather
deep into the case study literature to get at the methods themselves. The following will do to get you started:

Binford, LR (1983) In Pursuit of the Past. London: Thames and Hudson.
Childe, VG (1934) New Light on the Most Ancient East: the Oriental Prelude to European Prehistory. London: Kegan Paul.
Flannery, KV (1973) 'The Origins of Agriculture.' Ann. Rev. Anth. II.
Higgs, ES and M Jarman (1969) 'The Origins of Agriculture: a Reconsideration.' Antiquity, 43. pp 31-41.

Finally, an interesting perspective on interdisciplinarity and the role of universities is the EURAB report titled Interdisciplinarity in Research. The article starts off with a quotation to the effect that knowledge is extracted from an integrated world but disintegrated by academic disciplines. Wicked, wicked Mode 1 researchers! That is the exact antithesis of my book, which holds that the world is logically unconnected and problems of integration arise because policy-makers (at all levels in society) force knowledge communities into conflict by differentially supporting those whose opinions resonate with their own. Indeed, that quotation is the strongest indication I have had so far that the Mode 2 revolution is over. Apparently Mode 1 research is rubbish, because it forces the truth into a disciplinary strait-jacket while Honest Joe, the Mode 2 man, tells it like it really is. Maybe it's time I retired! There is more to this report than the quotation at the beginning. Some of the discussion of legal and institutional policies that happen to discourage interdisciplinarity is particularly relevant. Read it yourself and decide what you think. It is on-line at: http://www.ademe.fr/pcrd/Documents/eurab_interdisciplinarity_research.pdf

1. Defining Evolution One might reasonably imagine a direct, almost lineal succession of ideas connecting the work of Darwin to that of the neo-Darwinists, ultimately leading to research on the genetic code and the Human Genome Project. Yet the story was not one of continual advance, but of crisis and defensive regrouping. With the disclaimer that these pages are based partly on reading and partly on the recollections of my teachers, I will re-tell the story. In the early nineteenth century theories about the development of life and of human society were the preserve of theologians and humanists. The realists among them argued that the categories to which those theories referred were independent of human knowledge and belief. The sceptics, while accepting this was not so, were often deeply interested in probing the reality beyond sense experience. Their approach was metaphysical. Opposition to Newtonian ideas had focussed on an ontological problem. Everyone accepted that a cannonball in motion was still a cannonball, but some were unsure that the fluxes Newton invoked were ontologically defensible. Evolutionary theory, however, struck at the heart of realism by implying that not only the accidents, but the essence of the species could be transformed. If species are mutable, perhaps we live in an anarchic universe where a cannonball is only a cannonball by negotiation and common consent. This incensed the theological realists and polarised the debate into a full-blown power struggle. Sir Herbert Spencer and Karl Marx were groping towards scientific theories about human society, while Darwin, Wallace and Darwin’s spokesman, Thomas Henry Huxley were doing a similar job for biology. In the ensuing power struggle much of the disagreement focussed on boundary judgments designed to distinguish science from non-science. These scientists rejected metaphysical speculation and concentrated instead on ontological arguments. By the late nineteenth century this process had produced the distinctive anti-metaphysical philosophy known as positivism which
held that science dealt with observables and metaphysics with imponderables and biblical criticism, but here we have to deal with this tendency in nascent form. What Darwin, Huxley and Wallace were saying, in effect, is that the species exist (we can observe them) but are not real because, when we extend our universe of discourse either by delving into geological time or travelling on voyages of exploration, their boundaries fade. The argument was not just about plants and animals; it struck to the heart of dogmatic realism by implying that the distinction of humans from brute creation, though ontologically defensible in the present, had no solid metaphysical basis - it was certainly not decreed by God at the dawn of time. Thomas Henry Huxley favoured Darwin’s model because it suggested that the essence of a species could change without divine intervention. The focus of all Huxley’s efforts in those evolution debates was to demonstrate that evolution existed (you could observe change in the fossil record) and did not require some mystical agency to drive it - just hunger, war and death. The Darwin-Huxley synthesis can be reduced to six words:

variation exists, selection happens, populations change.

We can valorise variation and deaths, both in the present and in the fossil record, but cannot valorise natural selection in the fossil record or irreversible change in the present. The model is dimensionally incoherent. It also raises questions about where the variation comes from. Natural selection, after all, destroys variation. Some process must be replenishing the stock. Perhaps it is a catastrophe that spontaneously creates new variation - nudging the population beyond essential boundaries before allowing selection to fine-tune the new species. Perhaps it is gradual, but there must be such a process because variation never seems to run out. Huxley was aware of this problem, but glossed over it because to do otherwise would threaten his anti-metaphysical thesis. If there was a process that created variation spontaneously, it could possibly underwrite
adventitious change in the absence of natural selection. The metaphysicists could then argue that biological change was driven by divine intervention and social change by new values, revealed wisdom or a new covenant with God. This was not what the evolutionists wanted to hear. To see how scientists evaded this let us begin with the social sciences. People can create (epistemic) diversity that drives change adventitiously without natural selection. To place sociology on a non-metaphysical footing, social theorists had to find a way of denying α-complexity without denying the existence of adventitious change. Spencer solved this problem by invoking a sociological law (the ‘survival of the fittest’) that locked all social systems into a ratchet of ever-increasing social complexity. Marx’s theory of social change was also more or less deterministic by the end of his life, but postulated a cycle of growth, revolution and decay. As we saw in Section 8, Essay 3, the deterministic theories due to political Marxists and Spencer licensed a denial of socio-political complexity that has cost millions of lives over the last sixty years. Both theories originated in the nineteenth century, survived two World Wars and, by my own childhood, had come to symbolise the power blocs of the Cold War. Sociological determinism is rationally incoherent by Jonah’s law. Processes that only exist through human agency can always be modified by human agents. Determinism is also dangerous, especially if no boundary conditions are set, because the laws become moral imperatives and can be imposed on society by a process of social engineering that may even lead to genocide. Social determinism is empirically, rationally and ethically wrong and yet was the mainstay of social science from about 1840 to 1950. Indeed, even today policy makers find the itch to engage in social engineering hard to ignore. In contrast, the Darwin-Huxley synthesis sits on a knife-edge between theology and the humanities. A millimetre in one direction and you are probing the reality behind sense experience. A millimetre the other way and the species only exist because people believe in them. To stay on that line
one must ignore social systems and stick to biology where the ontological method described in Section 2, Essay 5 can be deployed. One must also ignore adventitious change because this might seem like the hand of God. Biological evolution became a process of natural selection capable, under certain circumstances, of transmuting species. Darwin’s original theory was deeply rooted in natural history. Although the terms had not been coined, he was an ecologist with an interest in co-evolution. He mentioned the peculiar co-adaptation of wild flower and bee species in chapters three, four and seven of Origin, for example and devoted much of his introduction to artificial selection among domesticated species. The relationship between humans and domesticates is evidently co-evolutionary. Furthermore, Darwin paid great attention to sexual (mate) selection as an evolutionary agency. In an environment where co-evolution is possible, changes in one population can knock-on by changing the environment within which others live. Thomas Henry Huxley understood this perfectly as his mature position (outlined in the ninth edition of Britannica) illustrates. He was, among other things, a geologist, and rejected Spencerian determinism on the grounds that geological evidence did not support it. Spencer, though influential, was isolated on this point. The synthesis Huxley defended in the evolution debates was not deterministic. By the time those debates had been resolved, however, co-evolution and sexual selection had been set aside and were not re-evaluated until the 1960s. They had to go because they compromised Huxley’s debating position. Let me explain why by example. A few years ago someone captured a gravid hedgehog (or perhaps a litter of youngsters) and released them on my native island which, till then, had been a hedgehog-free zone. It seems they carried a recessive blonde gene and now a substantial number of fair-prickled hedgehogs inhabit that place. People like them. They eat garden pests and have no natural enemies. Their fair colouring makes them easy to see so they are more likely to be noticed (and nurtured) by friendly gardeners.


This is an adventitious change effected by stochastic and behavioural processes. Dark hedgehogs do not die of being dark (there is no role here for natural selection); it's just that light hedgehogs do slightly better because people are differentially kind to them. Imagine you are a hedgehog looking for a mate. If your mate is blonde (and you carry the blonde gene) your offspring will benefit from the selective kindness of gardeners. This differentially rewards blondes that seek out blondes and may put hedgehogs inclined to be choosy at a small competitive advantage. This is sexual selection, a process of change driven adventitiously by being differentially friendly to a recognisable class of neighbour. Huxley and Darwin knew very little about the adventitious processes that create the diversity which may (or may not) be winnowed. To emphasise this ignorance in the context of the evolution debates would have compromised Huxley's position. Adventitious processes were played down and the Darwin-Huxley position (a compromise that neither of them would have endorsed in private) became the base-line argument on which the evolution debate was founded.

In the 1920s and 30s ecology was established as a discipline in its own right. Soon system theory emerged as a response to complex organizational problems in environmental science, and Operational Research was pioneered in management science. Developments in quantum physics had obliged natural scientists to build contingency and logically irreducible uncertainty into their models. However, the pendulum of evolutionary biology was swinging out of phase with the rest of science in the first half of the twentieth century. Early research on inheritance focussed on eugenics, a hodge-podge of Darwinian and Spencerian ideas named in the nineteenth century by Francis Galton (a cousin of Darwin). Eugenicists captured popular imagination between the wars by arguing that the lot of humans could be enhanced if congenitally unfit individuals were discouraged from breeding. Thus, from the earliest
days eugenics was a dangerous mixture of pure science and social engineering. One of the most significant pure science questions it addressed was what one should make of the work of early evolutionists like Lamarck who believed in the inheritance of acquired characters. Was it possible, for example, that the children of the village blacksmith would inherit the blacksmith's strength? Research into these questions led to the rediscovery of particulate inheritance (gene theory). Lysenkoism (a Stalinist travesty of Lamarckian theory) and positivism became influential in Russia, geneticists were purged, and the great Russian geneticist, Nikolai Vavilov, was imprisoned and starved to death in the 1940s. Elsewhere, a Nazi variant of Spencerian theory, once more sustained by the positivist model of science as a value-free quest for empirically verifiable truth, enabled Himmler to proclaim that prehistory was the doctrine of the eminence of the German people at the dawn of time. Eugenics was enlisted (along with archaeology and anthropology) to support this view. Thus, by the end of WWII, eugenics, Lamarckian theory, history and archaeology had all been tainted by association with genocide. In English-speaking countries biologists (like archaeologists and anthropologists) distanced themselves from these crimes against humanity and many became uneasy about positivism and the quest for objectivity. Ethical scrutiny was gradually introduced and the new science of genetics emerged from the ashes of eugenics. The simplest way to save genetics from association with genocide was to reinvent it in a form that precluded social engineering. In a rerun of the anti-metaphysics backlash of the nineteenth century, evolutionary biologists set their faces against social theory in general and Lamarckian inheritance in particular.


2. The Challenge of Mendelian Genetics The rediscovery of particulate models of inheritance precipitated a crisis. Mendelian theory seemed to contain no mechanism capable of generating new variety spontaneously. A population in which height was controlled genetically could be made taller by natural selection, but once all the genes that determine height had been switched from 'short' to 'tall' it would have no further scope for evolution. Wolves could be turned into Great Danes by natural selection but could not be driven beyond species boundaries without some sort of catastrophic adventitious change. Thomas Henry Huxley, of course, was dead, but his successors were anti-metaphysicists too and did not like the idea of rapid, catastrophic change driven by spontaneous mutation. They rose to the new challenge by forsaking ecology to study genetics. The outcome of this work was a remarkable re-definition of evolution.

evolution is any process (adventitious or selective) that leads to a change in gene frequencies.

Thirty years ago my genetics lecturers patiently explained this to me. The Neo-Darwinists, it seemed, had proved that evolutionary processes necessarily involved genetic change. Molecular and population genetics, therefore, provided a complete synthesis of evolutionary theory. Only a very few Edwardians could be heard to grumble that this was humbug. What the Neo-Darwinists had actually shown was that genetic change is possibly evolutionary. It is actually evolutionary if, and only if, changes in gene frequencies are driven by selection pressure. Often this is not the case. Then we are dealing with adventitious change. It is hard to overstate the elegance of the neo-Darwinian model. It contains a mechanism for the spontaneous creation and conservation of heritable change and a selective mechanism that can winnow that change. It is capable of predicting directional change (subject to suitable boundary conditions) and of underwriting unpredictable change under a change of rules.
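To make the distinction concrete, it is worth recalling the standard single-locus model found in any population genetics textbook; the algebra below is that textbook material, not a formalism peculiar to this book. Consider one gene with two variants (alleles) A and a, at frequencies p and q = 1 - p. With random mating and no selection, genotype frequencies settle into the Hardy-Weinberg proportions

    AA : Aa : aa = p^2 : 2pq : q^2,   with p^2 + 2pq + q^2 = 1,

and p itself does not change from one generation to the next: the gene pool is at equilibrium. If the three genotypes survive and reproduce at different rates - fitnesses w(AA), w(Aa) and w(aa) - the frequency of A in the next generation becomes

    p' = (p^2 w(AA) + pq w(Aa)) / W,   where W = p^2 w(AA) + 2pq w(Aa) + q^2 w(aa).

Whenever the fitnesses differ, p' differs from p and the change is directional; that is selection at work. A change in p produced by drift, migration or the arbitrary kindness of gardeners also satisfies the re-defined criterion - it changes gene frequencies - but involves no selective stress, and on the criterion just stated it is adventitious rather than evolutionary.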


Early population genetics treated selection pressures as exogenous, more or less fixed and gradual in their effects. This simplifying assumption implied that gene frequencies must tend towards well-defined fixed points called 'Hardy-Weinberg equilibria' in each generation. It is equivalent to the 'close to equilibrium' caveat imposed in classical thermodynamics and allowed geneticists to declare problems of population genetics almost well-posed. Although evolutionary systems were governed by stochastic factors at the level of the individual, gene pools would obey statistical laws that predict the outcome of an evolutionary process subject to certain boundary conditions. The neo-Darwinists also bounded evolutionary systems to exclude cultural factors, learned behaviour, catastrophic environmental change and what Darwin had called the "mutual relations of all organic beings" (co-evolution). All of these factors were driven beyond the pale as it were. These new boundary judgments became culturally embedded because biologists were genuinely traumatised by the death of Vavilov and by the use of eugenics to justify the holocaust. Evidence that supported catastrophism or a complex conception of socio-natural dynamics was systematically suppressed until the 1960s and early 70s and then rehabilitated in a number of social and natural disciplines as part of a wonderful renaissance. Some of the luminaries of this renaissance were system theorists or cyberneticists of the 1930s and 40s. Their ranks were swelled by war veterans, many of whom were keenly aware of the link between positivism and genocide, ripe for change and eager to embrace complexity. Remember, this was the time of the post-war spending boom when system theory was going to change the world by providing a single, over-arching paradigm for studying complexity.

The changes I describe in biology can best be understood in a wider historical context of which my own discipline, archaeology, was a microcosm. Many English-speaking archaeologists got in on the act, though others remained aloof. Consequently, archaeologists can now be
located along the older, Franco-Prussian axis (exploring the tension between positivism and relativism) and the newer, Anglo-American axis that represents the tension between archaeology as the history of material culture and archaeology as social anthropology. The Anglo-American axis eventually produced its own systems revolution: new- or processual Archaeology and its own little post-modern backlash, post-processualism. Similar developments in anthropology, geography, sociology and so on created an enormous diversity of views including unreformed positivism, attempts to embrace complexity using scientific methods and a powerful anti-science backlash. This was the context in which post-war biology must be set.

One of the minor effects of post-war developments was the establishment of a New Systematics, an approach that required taxonomists to harmonise classifications and evolutionary inferences. This attempt to link static and dynamic approaches - the so-called phylogenetic approach - was not altogether successful. It works rather well with the vertebrates, particularly the mammals, because we can set boundary conditions, in the form of predictions about chronology and the fossil record, that can be tested empirically. However, it hardly works at all in botany where the fossil record is deeper and its chronological resolution too narrow to permit this. In practice taxonomists formed their classes using traditional methods and imposed a dynamic story on them post-hoc. In the 1960s computer-based approaches became possible, not only in systematics, but throughout biology, the humanities and social sciences. Many of the new generation of computational specialists grew up in the Cold War and did not acknowledge the link between positivism and the excesses of political extremism. The systems revolution slipped back into its positivistic comfort zone. Taxonomists began to claim that their work was more scientific (stable, objective and repeatable) because it had been quantified. This was the phenetic revolution. The phenetic revolution, like so many other computational
approaches to socio-natural complexity, was a damp squib. My own (albeit limited) experience of these methods has been that some groups are easy to classify (with or without a computer) and some evolutionary trajectories are easy to infer. Moreover, a difficult group is no less difficult if you have a computer on your desk, though you can automate some data processing. Much of the work done to reconstruct evolutionary paths from morphological and biochemical data, including computer-intensive DNA research, is based on plausible guesswork with no serious attempt made to set boundary conditions on the result. Systematic biology has lurched into an uncritical, forensic mode. This has ethical implications. Genes are implicated in protein synthesis but recent evidence suggests the number of unique proteins in the human body is substantially greater than the number of active genes in the human genome. If this is so biochemistry is under-determined by the genetic code. It is only a matter of time before methods developed to identify the genetic basis of schizophrenia or catch criminals are used to erode the civil liberties of a recognisable group. Miscarriages of justice are very probable. The problem is not that our knowledge is imperfect; it is that we have fallen into the scholastic trap. By failing to set adequate boundary conditions on evolutionary theories we have given priority to a form of genetic reductionism future generations may find as incomprehensible as Nazi eugenics or Lysenkoism.

In the 60s and 70s ecology and evolution began to reconnect. Papers about co-evolutionary theory began appearing in reputable journals. This is astonishing. There is a gap in the biological literature between the Origin (full of references to co-evolution) and its miraculous 'discovery' in about 1964. At the same time, books about geological catastrophism, particularly the theories of Continental Drift, were suddenly accepted for publication. The evidence for catastrophic rates of evolution was finally acknowledged and the theory of 'punctuated equilibria' began to be re-evaluated. These were not scientific discoveries; all the
relevant data had been made available and publicly ridiculed by the 'experts' for decades. This was glasnost. However, Lysenkoism casts a longer shadow and the integration of biology and sociology has proven much harder. Culture is an acquired attribute that can be passed on through generations and sometimes has an effect on biological fitness, but many cultural traits have little impact on the gene pool. The inheritance of acquired characters - Lamarck's theory - was formally rejected by neo-Darwinists in the mid twentieth century. Consequently, neo-Darwinian theory denies the very existence of cultural evolution qua evolution. There is only one problem. Cultural evolution exists de facto so this part of the neo-Darwinian synthesis has been falsified by empirical evidence. Let us suppose that evolution is, as the Darwin-Huxley synthesis suggested, a process of natural selection capable, under certain circumstances, of transmuting species. Then adventitious processes that create diversity spontaneously are not evolutionary by definition. In the course of the twentieth century two models of adventitious change were developed:

• The genetic model explains the genesis and conservation of heritable variation and precludes the inheritance of acquired characteristics.

• The cultural model explains the genesis and conservation of conceptual variation and includes the inheritance of acquired characteristics.

The genetic model of adventitious change connects to evolution to produce an α-closed core that is the basis of the neo-Darwinian model. In cultural ecodynamics, however, this core is embedded in, and perturbed by, an α-open arena where adventitious cultural processes occur that create conceptual diversity. Some cultural configurations may be winnowed by natural selection, but many are not. The problem domains expressible in terms of the systems ontology can be represented by a set (see diagram below). A subset of these problem domains can be tackled without
appeal to processes. These are the static or ontological problems that can be expressed purely in terms of systems and attributes.

The remainder of this set represents dynamic problems that can only be addressed if systems ontology is extended to include systems, attributes and processes. Many of these dynamic problem domains are mechanistic (there are no epiphanies and processes do not change), but a small subset is not. These are historical problems where systems are α-open and we need to invoke epiphanies that transform the essence of some class. You do not have to be human or even sentient to experience an epiphany, you just have to be capable of recognising categories of thing and able to respond to different categories in different ways. A choosy hedgehog that suddenly starts to fancy blondes has had an epiphany. Epiphanies create behavioural variety and underpin new types of process that give historical processes their characteristic direction and 'path-dependence'. Within this domain of historical problems we find yet another subset. Here constraints are so severe that systems which cannot or will not respect them become inviable. These are evolutionary systems with manifest selection pressures. Historical problem domains in which selective stress cannot be valorised are not evolutionary by definition.

Very many social, cultural and biological systems are non-evolutionary. The evidence of co-evolutionary stress among plants, for example, is very clear. They tend to grow rapidly, to over-top and shade others. Strangling and climbing plants often compete with and destroy the trees that support them. This is Darwin's 'tangled bank' where you can almost see the competition for light and food - clearly an evolutionary system. However, the grazing mammals that feed on the plants often avoid direct competition with each other. Each species has its own characteristic body form and dentition and feeds on different plants or different parts of the same plant. Often the plants have defensive mechanisms that protect them from grazers (thorns and toxins, for example) and browsers move around, selecting choicer parts of plants but seldom destroying a whole foodplant.


Even the macro-predators that feed on browsers and grazers have developed behaviours that minimise the selective impact of predation on prey. There are aggregation effects, for example, by which predators avoid small pockets of prey and focus on large groups. Many are so-called prudent predators, differentially taking older or weaker individuals or culling straggling youngsters that might otherwise de-stabilise food supplies by over-grazing. It is as though these species now occupy niches in which prey and predator can co-exist indefinitely. The healthiest, strongest individuals are left to complete their life-cycle and only the frail and old are targeted. To a neo-Darwinist the processes that bring species into a prudent configuration must be evolutionary. However, it is far from clear what sort of selective process would have brought this about or, even, what impact this would have on the gene pool. One possibility is that non-prudent populations were less able to withstand the effects of climatic and other environmental perturbations and were driven to extinction by some sort of meta-selection. Another is that it is safer for predators to kill weaklings than to take on prime specimens and for herbivores to avoid large doses of toxic food. A third
possibility is that we are dealing with elective change and learned behaviours - epiphanies leading to cultural evolution among non-human mammals. Whatever the cause, it is clear that the effect was a local amelioration of selection pressure - a shift from arms-race to rapprochement that relaxed selection pressures and made the predator-prey interaction more resilient. Thus my solution to the question of how best to define an evolutionary system is of Alexandrian simplicity. An historical process is evolutionary if, and only if, selective stress can be valorised. Coercion and predation are in. Enlightened self-interest, prudence, elective change and the greater good of the greatest number are out. My principal justifications for this are that:

• It is a more useful basis for policy-relevant science than any other I have encountered.

• Selective stress can be valorised in many universes of discourse.

We can see the effects of evolutionary stress on a hospital ward, in a village street and in national statistics and most of us can also see that it is something to be avoided. Enlightened self-interest and the greater good, however, are dimensionally incoherent in many policy domains. We cannot see them in the eyes of a starving child or a bomb-torn apartment. We cannot even see them in national statistics.
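The nested problem domains described above can be made explicit in a few lines of code. The sketch below is purely illustrative - the flag names, the wording of the verdicts and the example cases are inventions for exposition, not part of the systems ontology itself - but it records the boundary judgments exactly: a domain is static if it needs no processes, mechanistic if its processes never change, historical once epiphanies must be invoked, and evolutionary only in that corner of the historical domain where selective stress can be valorised.

def classify(needs_processes, has_epiphanies, stress_valorisable):
    # Toy classifier for problem domains in the systems ontology.
    # Returns the narrowest class a domain falls into.
    if not needs_processes:
        return "static (ontological): systems and attributes suffice"
    if not has_epiphanies:
        return "dynamic (mechanistic): processes exist but never change"
    if not stress_valorisable:
        return "historical: epiphanies occur, but selective stress cannot be valorised"
    return "evolutionary: historical, with manifest selection pressure"

# Hypothetical illustrations only.
print(classify(False, False, False))  # describing and recording a collection
print(classify(True, False, False))   # a stone falling under gravity
print(classify(True, True, False))    # choosy hedgehogs that fancy blondes
print(classify(True, True, True))     # Darwin's tangled bank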

3. Early Subsistence Strategies When archaeologists made their bids for post-war funding, early subsistence strategies formed a natural research theme. Ancient urban societies produced spectacular sites founded on an agricultural subsistence base, that often yielded beautiful artefacts. Animal and plant domestication was a prerequisite of urban life and urban life a prerequisite of ancient (and modern) splendour. The broad thrust of many research polemics was that by understanding early subsistence strategies we could understand the origins of civilisation (life in cities). The idea was that ancient causes must have had effects on the archaeological record. If we collected enough data we might be able to valorise the effects and infer the antecedent causes - rather as we decipher ancient scripts. This was a reasonable theory, but archaeologists generally underestimated the difficulties, some of which were technical, others were theoretical. Large sites produce millions of objects and the databases created to describe them may take years to prepare. Archaeometry, the process of describing and recording archaeological data, is an advanced ontological science demanding a high degree of skill and training. It takes years to prepare some databases and when they are complete we have only discharged half the task. Then we have to decode the data and, for that, we need what the American archaeologist Lewis Binford calls ‘Middle Range Theory’ but is more prosaically captured by an older word - heuristics. Archaeological heuristics are interpretive rules that transform data into information about the past by imposing an ontology of types on it. A zoologist, for example, might believe that domestication invariably results in a reduction of stature in the domesticated animal. This provides a forensic test for ancient domestication that has actually been used from time to time. If the beasts get smaller, we infer domestication. To make the process scientific we must test this key attribute effectively setting boundary conditions on the heuristic and rejecting it if those conditions are violated.


This procedure, described in Section 2 Essay 5, is too often neglected and much of the literature on early subsistence strategies and, indeed, of forensic approaches to the humanities, consists of the unsubstantiated opinions of boffins based on their own pet diagnostics. Unless we ensure that the key is smaller than the essence and that diagnosis is reliable, we might as well be reading chicken entrails. Archaeologists have always contended with interpretive problems, but they exploded in complexity during the 1960s and 70s as we began to extract extremely fine-grained data from archaeological sites and, coincidentally, to justify grant applications with hard-science polemic about the objectivity of empirical method. Heuristics can only be generalisations - they characterise a whole class of similar circumstances. However, the formation of this particular pit, or of Phase 3 on that particular site was conditioned by local accidents of history and geography that cannot be captured in a general, empirically testable rule. It is easy to valorise fine-grained patterns in the archaeological record, but inferring antecedent cause requires so much special pleading, no-one actually knows what most of these patterns signify. For example, archaeologists commonly take the human control of animal or plant reproduction as the key attribute of domestication. This ties in nicely with known farming methods and can be tested by looking at the contemporary evidence. If we find human control of reproduction that lacks the essential attributes of domestication (as conventionally understood) or domestication that does not influence relative reproductive success, this heuristic can be refuted. We have set a boundary condition and, when we apply conventional ontological method, the heuristic fails almost immediately. A person trampling wild flowers in a forest might reasonably be said to have ‘domesticated’ them because the action of clearing a path determines which of them will, or will not reproduce. Conversely, a herd of reindeer or colony of bees preserved from natural enemies, located near to food sources and exploited is not ‘domesticated’ unless someone
intervenes selectively in reproduction. This definition of domestication cannot even be valorised in the present, much less in the archaeological record. It is ontologically indefensible. So let us require that a population be domesticated when humans constrain its spatial distribution. This new definition covers a range of human-animal and human-plant relationships from the propagation of nut-trees in the wildwood to intensive poultry-rearing on factory farms. Constraining distribution is a familiar aspect of agriculture. If the sheep get into the cornfield or the dogs get into the sheepfold, we are all in trouble. Domestication (defined this way) marks an intensification of human-environment interaction that increases productivity by constraining the distribution of populations in space-time. The only dependable archaeological indicators of domestication, therefore, are:

1. Technology of husbandry (yoke, harness, ploughs, axes, walls, tackle, fences, storage facilities).

2. The distribution of a species exploited by humans beyond its natural range (the spread of cereals and sheep into Europe, for example).

3. Representations or written records suggestive of domestication.

In the absence of such evidence, domestication is merely possible. In the presence of such evidence, however, it is probable. The more points are satisfied, the more probable it becomes. Thus it is possible people in central and southern Europe domesticated dogs, deer and perhaps even horses in the old stone age (the palaeolithic period). It is probable that they domesticated some ungulates - hoofed mammals like sheep, goat, cow and pig - together with the dog and some cereals in the new stone age (neolithic period). Let us now see where this heuristic leads.
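Before following it, here is a minimal sketch of the scoring logic behind the heuristic. It is illustrative only - the indicator names, the one-point-per-indicator weighting and the wording of the verdicts are conventions invented for the example, with no archaeological authority - but it shows what 'the more points are satisfied, the more probable it becomes' looks like once the boundary judgments are written down:

INDICATORS = ("husbandry_technology", "beyond_natural_range", "representations_or_records")

def domestication_verdict(evidence):
    # evidence: a set naming whichever of the three indicators are attested.
    score = sum(1 for indicator in INDICATORS if indicator in evidence)
    if score == 0:
        return "possible (no dependable indicator present)"
    return "probable ({0} of {1} indicators satisfied)".format(score, len(INDICATORS))

# A hypothetical assemblage: axes and storage pits, plus sheep far beyond
# their natural range, but no representations or written records.
print(domestication_verdict({"husbandry_technology", "beyond_natural_range"}))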


4. Ratchets and Co-Evolution The domestication of European cereals and some ungulates (particularly bovids like sheep and goats) probably began in the Fertile Crescent of the Middle East. Here these species are found associated with houses that were re-used over a substantial period and gradually displace other species and eventually spread into Europe with their human commensals. We see possible domestication in the Fertile Crescent by about 11,000 years ago and probable domestication in southern Europe about 5000 years later. Once in Europe it spread rapidly along the more fertile soils. It is possible Europe was initially full of hunters and gatherers. However, it is equally possible that local populations of plants and/or animals had been domesticated and resident populations simply adopted the new species. We simply do not know how much endemic domestication there was in Europe. What is certain, however, is that neolithicisation (the spread of ungulates, cereals, pots and ground stone tools) corresponded to a phase of forest clearance. This clearance, visible in the pollen record of ancient lakes, changed the distribution of trees and shrubs and coincided with the introduction of axes. It is evidence of probable domestication and may have had an impact on any hunters and gatherers in that landscape. The effect of animal and plant domestication was to create pockets of disturbed land where crops grew. These would be attractive to herbivores, so it would have been necessary to control ungulate access to the plants. Some would have died (many would have been eaten by humans) but the population as a whole would survive because their spatial distribution and that of the cereals had been reconciled. Even the wolves that preyed on them were domesticated while wild carnivore populations were kept at bay. None of this need necessarily have resulted in urbanisation. Agriculture was probably adopted in a number of large river valleys and flood plains where a suitable mix of ungulates and plants could be found. Once the humans were running
the show, however, domesticates were only vulnerable to over-predation from one species, the humans themselves. The success of the whole venture came to depend on our ability to negotiate distribution patterns that would sustain all these populations indefinitely. Of course, agriculture demands sedentism so humans were no longer able to play hide and seek with their food as hunters and gatherers do. They had to find a way of coexisting with it, perhaps by splitting communities into pastoral and horticultural sub-sets or by establishing stable trading relationships between hunters, pastoralists and horticulturalists. In many areas there would have been plenty of suitable soil and a mosaic of woodlands, wetlands, farmlands and pasture would develop. This would have had an impact on non-farming communities, particularly those that depend on hunting and gathering.

Hunter-gatherers and pastoralists tend to be mobile. Very rapid birth rates reduce mobility because infants must be carried or helped over long distances. Lewis Binford, among many others, has suggested that in small social units there is a broad congruence of interests between the individual and the group and that high levels of mobility contribute to the well-being of all by bringing new resources and information on-stream. We should not romanticise this situation. Tensions and conflicts over resources occur among hunter-gatherers too. However, humans are innately co-operative, social animals and any individual that denies resources to an immediate neighbour may be disadvantaged later because each is dependent on the other. These factors impose constraints on intra-group competition which obliges individuals to look after their immediate neighbours, at least as long as they contribute to the well-being of all. With larger groups, intra-group competition and conflict can occur but can always be resolved by splitting the group. We would therefore expect the groups to self-organise into small, coherent units with most of the inter-personal conflict expressed between rather than within those units.


However, by controlling the distribution of animals and plants, early farmers and herders would have had to fence their neighbours out. Population densities would increase beyond the levels sustainable by hunting and gathering. The adoption of agriculture in Europe need not have been driven by an evolutionary mechanism. It could merely have been a cultural norm adopted adventitiously by local communities. Nonetheless, it would have acted on those regions as a socio-natural ratchet that eventually made older, more extensive adaptations difficult to sustain. Those who chose not to exploit domesticates would have experienced increased selective stress. As the landscape filled, one would expect aggravated levels of conflict between agriculturalists and their neighbours. This would lead to tension, not only between farmers, pastoralists and hunter-gatherers, but also between adjacent farming communities. In small village-based agricultural societies, many subsistence products are privately owned and birth rates are often higher than in less sedentary communities. Under internal tension these are likely to split and colonise new regions - cutting the landscape up and aggravating inter-community conflict; perhaps even causing cycles of famine and disease. In these circumstances warrior chieftains or a priestly class might arise to keep order in villages and manage inter-community tension.

5. Domesticating Humanity In some parts of the ancient world we find large urban units in which small elites control most of the collective resources while a substantial section of the population lives in poverty or even starves. The earliest urban societies are found in a handful of 'cradles of civilisation'. The European story, for example, begins in Mesopotamia and the Levant in the new stone age (neolithic period) where the first appreciable settlements appeared. POLOs (pockets of local order) formed in these 'cradles' to exploit patchily distributed resources (water sources, for example). Here the congruence of interests that holds in smaller units would be destroyed and yet people might not be free to walk away from the resulting arguments. Birth rates that would destabilise a hunter-gatherer or village community become possible and intra-group squabbling probable. The unit need not be large. Typical hunter-gatherer units and small agricultural villages contain a few dozen people. Boost those to a thousand or two and settle them on a critical resource and you may see demographic take-off. Urbanisation, then, created yet another socio-natural ratchet, producing circumstances where intra-group conflict must be expected. As these larger communities became permanent, demographic pressure from immigration and indigenous growth rendered them ecologically unstable and they were obliged to intensify patterns of exploitation. In the East conurbations emerged during the later neolithic and we see great advances in potting technology, the emergence of prestige goods and goods manufactured from exotic materials. There is early evidence for institutionalised religion. However, the undisputed civilizations of the Nile Valley and Mesopotamia belong to the bronze age. Although each city must have a water supply and access to food, these secondary cities, with their powerful elites, became POLOs in their own right. They are often found in regions where one might expect extensive mosaics to form and it is natural to ask why.


Perhaps secondary urbanisation is something to do with metalwork and craft skills? This was certainly a common theory among archaeologists fifty to one hundred years ago because, like Sir Herbert Spencer, they were naturally inclined to assume that a civilised life was superior to rural poverty and that people would elect to live in cities once technology had advanced to the stage where economic development was possible. Perhaps the key to identifying secondary urbanism is the development of the pyrotechnologies (advanced potting, glass- and metalwork). Once again, it is easy to set boundary conditions on this model. Bronze working spread through Europe rapidly. It may have been discovered independently in several locations. Most European bronze age settlements, however, are fairly small, though there are exceptions; Knossos on Crete is one of the most substantial. Homeric Troy and Mycenae, in comparison, are poky fortified villages. There is little evidence in these settlements of intra-group competition. Local elites seem to have been warrior chiefs whose attention was directed to inter-group conflict. The appearance in western Europe of individual burials well furnished with baubles and geegaws suggests chieftains at about 2,500 BC, but the settlement evidence is meagre, even by the standards of southern Europe. These are not convincing civilisations. So bronze working and pottery cannot be the key. Indeed, urbanism is rare in southern Europe before the early iron age. Iron-working, when it reached Europe, spread much faster than the cities, which only reached northern and western Europe about 2000 years ago. Conurbation continued right up to the twentieth century. The first Arctic cities are about a century old. To get hung up on the three ages of stone, bronze and iron is to confound technology with culture. The neolithic of Mesopotamia, with fortified villages like Jericho, is culturally like the bronze age of eastern Europe but technologically simpler. The neolithic of eastern Europe is culturally like the earliest bronze age of western Europe. The bronze age of the Middle East with its rich cities is like the early iron age of southern Europe, the late iron age of northern Europe, the
mediaeval period in southern Scandinavia and the early twentieth century in the low Arctic: all are civilizations (stratified urban societies). Technological and cultural norms changed at different rates and in different ways. Conurbations - settlements that have reached a critical mass and flipped over into demographic runaway - do not spread as rapidly as the technologies they exploit. Agriculture, metalwork and inter-group competition raced through Europe, while conurbations, poverty and epidemic disease spread like a slow cancer from the south and east to the north and west. As we examine the spread of these cities we find evidence that people living in them had developed a new and, to my mind, shocking form of domestication. They developed alliances across social groups to defend resources against encroaching outsiders and minimise internal conflict. Rapid population growth, probably fuelled by immigration, increased the pressure on these resources. Presumably, many responses were tried but one seems to have worked better than the others. They domesticated people too. We see this very clearly in the slave-owning communities of the ancient Middle East; the practice was established in southern Europe with the rise of the Classical states and persisted in Europe into the late modern period. The domestication of people produced a new socio-natural ratchet which, under certain circumstances, led to an imperial structure. By corralling people together in large units, these early states were able to sustain critical resources while maintaining sufficient personnel to fight off outsiders. Once in the cities, the demographic brakes came off and intra-group competition accelerated. The population would have grown and the process became irreversible. If large numbers of humans had walked out of those conurbations into the agricultural hinterland, the effect would have been catastrophic. As soon as the population was too large to go back (or had forgotten the old skills) it would have been locked into the new, demographically aggressive, urban way of life.


The people in the hinterland would have experienced strong evolutionary pressure to obviate an influx of hungry refugees from the city, and trade between the centre and periphery would be favoured. But what do you trade for subsistence goods when you live in a city made of mud bricks and stones? The answer seems to be that the city forms an army to protect the fields and raises taxes to pay for it. It trains slaves to bake mud and make pretty pots, to melt stone and make pretty metals and then persuades its rural neighbours that these goods and services are intrinsically desirable. The amazing human capacity to negotiate non-adaptive cultural norms really came into its own as they persuaded each other that this was ‘normal’. The possession of these bits of cooked mineral became so important people would give food away to own them. Internal competition within the cities fed social stratification and systems of laws would be developed that kept people safely corralled. As the population grew, the size of the exploited hinterland must have grown too. Small trading posts would establish on the periphery to channel food to the centre and craft goods to the hinterland. The urban elites had to control subsistence resources and this would require them to manage the production of the high-status goods that brought those resources in from the hinterland. Markets would be established and craft specialists recruited, either from the urban population or from that of the hinterland, to meet this demand. The cities filled up with craftsmen working for the elites who fed them with some of the food they got in exchange for craft goods. From time to time the burgeoning population would outgrow the ability of the hinterland to sustain it. People would escape and the rural population would find it difficult to survive on the land. The ensuing ecological calamity would force them to regroup. Trading posts would be attractive places to go because they are closer to the periphery and subsistence goods are channelled through them. Craftsmen from the cities and immigrants from the hinterland could set up business in these trading posts which would then only need a supply of minerals to establish small cities in their own right. Local elites would control subsistence and trade
goods, immigration would provide more personnel, and new cities, with new hinterlands would form. Writing and a scribal class would be needed to keep records and organise taxation. The system would grow into an Empire, polarising trade and drawing resources from the periphery to the centre. Occasionally, the centre might collapse and be recolonised by outsiders. The difference between the cradles of civilization (like Mesopotamia) and those places where agriculture emerged without civilization (like the Loessic soils of the Danube valley, for example) is the rate of immigration to a given locale. If the population grows slowly and evenly, as on a flood-plain where agriculture is possible almost anywhere, the result will be a more or less stable mosaic of villages, woods and wetlands with the humans domesticating and exploiting cattle and cereals and competing between groups for resources. However, if large POLOs form and local population densities accelerate continually through immigration and internal growth, humans not only domesticate and exploit cattle and cereals; they domesticate and exploit each other. With the people, as with the cattle, domestication initially brought aggravated levels of natural selection, but the population as a whole survived and grew. Sheep, goats, cows, pigs and human slaves have been distributed far beyond their original range. Over the years we have even learned to exploit our cattle without killing them. People have, at various times, taken blood and milk from cows, wool from sheep and labour from horses, cows and serfs, for example. When expressed in these terms, the issue seems to be one of rights, both human and animal, but socio-natural systems change and things are not now as they once were. We cannot go back to a golden age when every neighbour was indispensable to our own happiness and security. Our forebears negotiated those worlds out of existence. We now live in a society where a well-fed human can walk past a hungry neighbour and see only a troublesome stranger. Even the doomsday scenario (a catastrophic population crash) would not turn the clock back because cultural norms, values and beliefs have changed.


These changes have had knock-on effects on our natural and cultural life-support systems that cannot be ignored without causing significant suffering. Sheep have heavy fleeces they cannot shed for themselves. Cattle produce enough milk to drown a calf and bellow with impatience if the dairyman comes late to work. Healthy, well tended horses come frisking from the stable. Craftsmen think themselves free and delight in their skills. There is no longer enough room for us all to be hunters and gatherers. Subsistence agriculture is unremitting grind, working 60 or 70 hours a week for bread, cheese and salt meat. Even if we could put the clock back, who would wish to? Indeed, it is easy to become too judgmental about early civilization. It is quite possible that the earliest urbanisation was precipitated by a climatic catastrophe, the Younger Dryas, in which changing ocean currents forced humans to withdraw from territories to the north and east and created cold-stage conditions with very high sea levels. The presence of cold-tolerant edible grasses and ungulates was a lifeline but the people who exploited them had powers of distribution that should have driven everything to extinction. That did not happen. People re-negotiated their perceptual models to co-exist with the ungulates, cereals and each other for an additional ten millennia. Certainly, most of our contemporary ecological problems arose as unforeseen consequences of these strategies - even third world poverty today. Far from proving the benign, socialising influence Sir Herbert Spencer imagined, civilisation allowed warfare, imperial oppression, poverty and slavery to become institutionalised. However, the ingenuity and cultural flexibility of our forbears must be a source of encouragement. What one fool can do, another surely can. The ratchets of socio-natural change will not allow us to go back, so we must go forward. The value, reality and operational judgments we form today open some doors and close others. We pass through those doors into the future - a new world that we must live in and bequeath to our children.

Intelligent, critical choices made in the past did much to liberate present societies from the oppression, starvation and chronic ill health that characterise primitive states. I use that word without apology. A society that cannot emancipate its own citizens is less advanced (nearer the state of an ancient civilisation) than one that can. However, 'advanced' civilisations are seldom so by dint of their superior ideology, but by their ability to export misery from centre to periphery. Internal emancipation has always aggravated the suffering of unacknowledged stakeholders whose misfortune it was to live beyond the pale. The freedom of slaves in the urban centre is paid for by slavery, starvation and death in the rural periphery, by the enslavement of barbarian peoples and by uneven trading relations and immigration policies that disadvantage outsiders. Primitive civilisations domesticate insiders; advanced civilisations domesticate outsiders.

Western democracies are already thinking of the world's economy on a global scale. This is a courageous and probably dangerous vision. A global economy can have no outsiders by definition so the challenge for champions of globalisation is to emancipate the core without the benefit of a periphery to absorb the misery. We can, if we choose, define citizenship on a global scale and reject the use of security and immigration laws to prevent free movement of goods, services and people, but the problems of dimensional incoherence will be an order of magnitude more complex than anything we have yet seen. The re-conceptualisation required to create an advanced global economy - not one that conserves a groaning consensus by killing millions, but one that can emancipate unacknowledged stakeholders - demands a distinction of reality from belief. Our boundary judgments must be kept open to ethical and empirical scrutiny on a host of spatial and temporal scales from the extremely local to the truly global. Integrative problems of this order cannot be resolved by appeals to authority and nineteenth century ideas about objectivity. If it were that easy, we'd have done it long ago.


6. Socio-Natural Policy and Artificial Selection In its extreme manifestations dogmatic realism actually reduces evolutionary fitness by killing people before they can reproduce. Wars, revolutions, holocausts, anthropogenic famine and inquisitions are co-evolutionary processes in which the fitness of one community is determined in part by the beliefs and behaviours of another. This is not natural selection - pressure arising through competition for limiting resources - it is an anthropogenic (artificial) selection pressure. The effect of selection pressure (natural or artificial) on most biological organisms is to reduce survivorship and/or reproductive success differentially between adaptations. However, advanced social animals like humans, wolves and elephants may respond pre-emptively to selection pressure by avoiding deleterious behaviours and even coercing neighbours into conformity. Where other species simply act and live (or die) with the consequences, we can reason and act a priori to avoid risky situations.

In general the processes and epiphanies we invoke to explain change can also be used to explain stasis. The gravitational field that draws two massive bodies together, for example, explains the acceleration of a stone falling to the ground and the stillness of a stone on the surface with equal facility. Gravity is a mechanistic process that sometimes prevents and sometimes drives change. In much the same way the selection pressure that sometimes acts as a potent driver of evolutionary change can also manifest as stabilising selection - a constraint that actually prevents change by punishing deviations from type. Human populations stressed by continual wars and slave-raids, for example, are exploring a more tightly constrained option-space than those that are not stressed in this way and these constraints may actually prevent some types of change from occurring. Selection pressure is only one of the mechanisms we observe in cultural ecodynamics. When a malnourished underclass abandons food production for drugs, for example,

247

we can reasonably infer a selective mechanism, but when a prosperous gangster hires armed thugs to prevent them abandoning drug production he is not responding to selection pressure. He has a range of options to consider that would not reduce his selective fitness, but this is the one he favours. Sometimes these adventitious behaviours are adaptive - governed by a conscious act of choice. However, on many occasions such actions are motivated by normative behaviours or habits. The economic intensification that has characterised the last few centuries, for example, may have arisen as a response to selection pressure or through conscious choices based on enlightened self-interest, but the expectation of economic intensification has now become culturally embedded. Ownership of a gas-guzzling car does not enhance anyone's biological fitness; we can breed, live long, healthy lives and raise our children without one. This culturally embedded norm has led to steady intensification in the exploitation of natural resources that now demands access to oil resources and markets. In order to maintain that access, technically advanced nations must invest in standing armies to protect their interests and develop weapons of mass destruction to impose their values on others and defend them against rival ideologies.

Cultural lock-in in one population often imposes severe selective stress on another. This stress may be inter-specific or intra-specific. Whaling, for example, aggravates the selective stress experienced by whales, and the ratchet of economic intensification in the cities of many third world countries aggravates the stress levels experienced by the rural poor. Genuine intra-specific co-evolution, in which two human populations impose selective stress reciprocally on each other, often manifests as open conflict (wars, revolutions and the like). However, asymmetrical processes in which a culturally embedded behaviour in one population imposes selective stress on another are also common, especially in advanced civilisations. That selective stress may be propagated across large intervals of space or time. The
perceived need for cheap nursing staff in northern Europe, for example, is currently driving health services to recruit skilled nurses from Sub-Saharan Africa. Young people in these regions are dying for want of basic health care as a result. The cause is a culturally embedded need on one continent and the effect is selective stress experienced on another. This process may continue to blight the lives of unborn generations long after the initial recruitment drive is over.

One of the commonest effects of anthropogenic selection pressure on humans is to impose the burden of selective stress differentially on particular cultural and ethnic groups, when there is no obvious explanation for reduced fitness other than membership of the group. A starving farmer may be biologically fit for life in a wide range of environments but driven into ecologically marginal circumstances by co-evolutionary linkages with urban neighbours and militiamen.

Indeed, stabilising selection is a common attribute of the Phoenix Cycle in the later realist phase. As consensus ceases to be part of the solution and becomes part of the problem, some groups begin to resist or defy culturally embedded norms, and the custodians of consensus will act to discourage what they see as absurd, delinquent or immoral behaviour. The dialectic tension between these groups has a stultifying effect, reducing the adaptive potential of society as a whole. If this process is not interrupted, a co-evolutionary stand-off can result in which an authoritarian elite imposes its reality and value judgments on an unwilling but powerless underclass. Eventually a catastrophic event, a revolution, war or epidemic perhaps, tips the whole system into the abyss and, in the chaos that follows, a new type of system may emerge.

Thus we have two broad types of co-dynamic linkage to consider in cultural systems. The first is the selective linkage in which some possible adaptations are either rendered untenable or lead to low rates of survival and reproduction. The second is the adventitious linkage in which individual choices and habits, often consolidated by cultural lock-in, determine the course of history. In general, adventitious processes can be changed without reducing biological fitness. It seems reasonable, therefore, to focus policy intervention on adventitious processes and to design policy instruments in such a way that artificial selection is avoided wherever possible.

Thus the difference between a sustainable and an unsustainable configuration is not manifest in the rate of structural change, but in the selective stress experienced by one or more of the linked groups. In a low-stress configuration, individuals and institutions can vary their behaviour in response to an external perturbation. In a high-stress configuration, however, populations are tightly constrained and may be forced to adapt to external perturbation by escalating competition. Communities can become locked into co-evolutionary arms-races that maintain individual fitness in the short term but may have disastrous consequences in the long term.

Remember, there is no necessary correlation between the rate of structural or behavioural change manifest in linked populations and the selective stress they experience. Stability can occur under high stress as, for example, when people were forced to depend on domestic service and subsistence farming in the nineteenth century because infant mortality rates were high and wealth was concentrated in the hands of a small sector of the population. Similarly, irreversible change can be driven adventitiously under low stress. The use of credit cards confers no proximate selective advantage, for example, but seems likely to continue.

In the long run intense co-evolutionary relationships are likely to be destabilised and less stressful configurations will win out, but the transition from a more to a less stressful configuration is likely to be traumatic. Indeed, it is possible that the new configuration will be one in which our species or community does not figure. The challenge for research, therefore, is to respond pro-actively - to relax locked-in cultural norms before the ratchet of self-organisation is set and we are punished for them.

It is relatively easy to operationalise the co-evolutionary model as I have formulated it. Archaeologists, for example, can gather information about behavioural stasis and change and sometimes identify periods of high or low co-evolutionary stress using demographic, palaeopathological and ethnographic evidence. There is clear evidence of inter-specific co-evolutionary stress between early farmers, domesticated species and their pathogens. Intra-specific stress was probably manifest between farmers and hunter-gatherers during the advance of farmers into Europe. It was also manifest during the emergence of the bronze age empires of the Middle East. Urbanism, in particular, led to rapid rates of reproduction, high infant mortality levels, the use of coercive force and social stratification - beliefs and attendant behaviours that distributed selection pressure unevenly between sub-sets of the same community. However, the evidence of co-evolutionary stress is less clear in the mature neolithic, bronze and iron ages of those parts of Europe beyond the influence of the burgeoning urban systems. Here there are plenty of behavioural and technological innovations, but they seem to have been driven by a simple evolutionary dynamic (competition for limiting resources) or adventitiously by the pursuit of local interests, rather than by co-evolutionary stress.

The policy implications of this model are rather clear and very interesting. It is possible to have directional change, technological development, adaptive change and innovation without experiencing extreme co-evolutionary stress. Indeed, it is usually very difficult to innovate when stress levels are high. A wise policy maker might reasonably consider developing norms and legal structures that maintain and, where possible, enhance adaptive potential, so socio-natural resilience is not compromised to sustain perceived needs and culturally embedded norms.

The most direct way of ending co-evolutionary arms races is to prioritise decisions that conserve the rights of others - including people not yet born, people who live in remote regions of the world and the other species with which we share the planet - to make choices too. This is not to demand equal rights for bed-bugs or an even distribution of resources through space-time, but freedom from extreme physical hardship, from premature death, from the destruction of habitat and, in the case of sentient animals, from the miseries of chronic suffering and social isolation. This policy may be the key to breaking the Phoenix Cycle. The co-evolutionary process reduces adaptive potential, shortening lives, destroying epistemic diversity and reducing socio-cultural resilience. Evolutionary change is always dangerous and should be avoided by discouraging cultural lock-in whenever evolutionary fitness is compromised. I am not so naïve as to imagine that all the conflicts of interest between rich and poor, centre and periphery, could be resolved this way, but the identification of selective stress as an indicator of system health seems a much more equitable basis for negotiation than cost-benefit analysis, because it can be valorised in many universes of discourse. It can even be extended to inter-specific interaction, subject to qualifications that take account of ecological factors. We are an omnivorous, predatory species, subject to attack from pathogens, parasites and a very few macro-predators. Nonetheless, most non-human species have no direct impact on our health and selective fitness, and it is difficult to see why our interaction with them should not be selectively neutral for them if we ourselves are not imperilled. This reasoning can be applied to second- and third-order co-evolutionary relationships without loss of generality. If servicing third world debt results in activities that destroy the habitat of neutral species, those whose evolutionary fitness is highest should carry the cost of preventing that destruction or write the debt off altogether.

Equity among human communities (conventionally expressed using a financial standard) could easily be re-expressed in terms of selection pressures. Every human community could, in principle, be given protection from anthropogenic selection pressure under international law. This right would over-ride economic interests up to the point where selection pressures on linked human populations are balanced. The only justification for action that starves children is that failure to act would cause even higher death rates in a directly linked population. I should emphasise that balancing selection pressures could only ever be part of the process of moving towards a more equitable world. One can easily construct scenarios in which selection pressures were perfectly balanced, but which many people would find ethically intolerable - the broken Nemesis-logic of a political assassin, the commercial institution that trades in misery and pays a tithe of its profits to charity or the emerging industrial power that offsets the misery of child labour against the malnutrition of a child that does not work. We are not cattle to be corralled and traded thus. Yet selection pressure has a number of advantages over conventional financial measures.

Naturally, policy makers and industrialists are likely to resist calls for reform, but you can drive the process and raise awareness by asking difficult questions. Ask how much money they would accept by way of compensation for the premature death of a loved child. Have they made provision for paying that level of compensation to the victims of their own policies? If not, why not? Don't let them hide behind constitutional and civil law or mount filibustering actions while injured stakeholders die; insist on unambiguous answers and dimensional coherence. Tell them that the value of an eye is the process of seeing and that of a child is a long, full lifetime. Power and money cannot buy them, nor can vengeance justify their destruction. They have intrinsic value that cannot be reduced to a number or traded for personal gain.

A dollar in Calcutta is not the same as a dollar in California, but a premature death is always a death - both a valorisable demographic event and a human tragedy. The uncertainty in indicators of co-evolutionary stress can often be made operationally negligible - we know when the health of children is impaired or when an ecosystem and the species it contains are being destroyed. It is a natural standard for legislation and policy. In general, knowingly causing death and environmental mayhem is a criminal act when the perpetrator is a delinquent individual, but governments and commercial concerns are able to evade their responsibility by reducing human suffering to a monetary standard. This is a form of obfuscation - valuing the lives of the poor in terms of money and gizmos for the rich, and offsetting Global Commons like the seas, atmosphere or rainforests against local economic growth. It is both scientifically and ethically indefensible.

7. Education or Acculturation

Education encourages us to value what we are and can become. Acculturation teaches us to despise what we are and value what we can never be. I have experienced both. I was born into an Anglo-Norman community of returned war refugees and immediately taken into care while my mother underwent surgery for tuberculosis. When my younger sister and I eventually joined our older brothers it was to grow up among the barbed wire and bunkers of occupation and the fortifications of centuries. I learned my catechism pairwise with my brothers; my mother taught me natural history and syllogistic reasoning while our priest explained that modernism was a sin deserving excommunication. Women with snuff-stained lips and drippy noses kept house while men in hats and waistcoats laboured under the summer sun. A few misfits, unhinged by the loneliness of belonging nowhere, mumbled an ancient language that war, immigration and the English contempt for 'patois' had all but destroyed. So tight was the strangle-hold on our culture, we were not even taught French, though the Cotentin Peninsula was about ten miles away. Now when I go home, new immigrants correct my pronunciation to Haute Français. The fields of my childhood have misspelled names and are full of houses I cannot afford to live in. Older immigrants, many of them good friends, talk fondly about 'local characters' of those days, apparently unaware that these were not sideshow freaks but normal people obliged to live in abnormal circumstances. The world of my childhood is gone because our community was too small and too disrupted by war to resist acculturation.

The local school, founded in 1790 on the proceeds of smuggling, privateering and war, was then in a parlous state. The headmaster shut himself in his study and drank while his colleagues tried to keep order in what had become a prison for the children of labouring men. No child was offered the chance to take public examinations after the age of twelve. Those who failed 11+ exams had to serve their time.

This was cruel usage for outdoor children and many became intransigent or openly rebellious as they passed puberty. A sadistic martinet was eventually drafted in to replace the drunk; some 16+ examinations were then offered and the level of physical and mental hardship experienced by the less able increased accordingly. As a myopic, fat, isolated child I was twice shipped out for education elsewhere. At eleven I was sent to live on a neighbouring island to attend Grammar School (a painful fiasco despite the kindness of my hosts). At sixteen the local authority sent me to attend a Boarding School and this time things went better. Thus, with no study skills, the offer of a place at university, a means-tested grant and financial support from a childhood mentor, I had to decide what to do with my life. My peers at home were properly unimpressed by my lack of solidarity, but elders gave me good advice: "I'll tell you this, my cock, never wear a watch and a ring - too much jewellery!" or "Don't become one of those perpetual students. Education is alright if you gets an indoor job, but you might never quit school". Armed with this necessary knowledge I bleached my peasant English paler than a stranded fish and set out on my first adult adventure.

After a lifetime on small islands, I was ill-prepared for the claustrophobia of a small town. The 'mainland' was smaller and dirtier than I had thought possible. The night sky was full of orange light, there were no stars and no roads that ran down to the sea and stopped. You couldn't be in the open and not hear the sound of motor cars, and it stank. It was like living too close to the Impôt - except everywhere was too close to the Impôt. I experienced great kindness and made wonderful friends in North Wales and later in England, but had to endure more than a decade of sustained culture shock as I came to terms with my new situation. My undergraduate teachers, recognising a troubled spirit, planted ideas in my mind and left them to grow. I was incapable of specialisation but eventually scraped through with a degree in Botany and Zoology. The ideas those teachers planted grew well over
the years following my graduation. I kept studying and eventually went back to university, this time better equipped to cope. Today I am a professional researcher, still unable to specialise, deeply interested in method and blessed (or cursed) with an outlander's view of every society I encounter. My early exclusion and subsequent inclusion taught me that the earth is not inherited by the meek, the mighty or the clever, but by survivors - that is the only qualification. The ability to survive is a quality, not a virtue, and most of us possess it - it is certainly nothing to brag about. Each of us comes from a long line of survivors. Indeed, most of those who perished did so not because they were biologically unfit but through senseless accidents of history. Yet wherever the hard sciences meet the soft sciences, a strong language meets a weak one, a centre meets a periphery, foreign gentry meet local peasantry or a global economy meets its regional counterpart, people re-invent their own habits as virtues and their own presuppositions as self-evident, universal truths. Information flow between distinctive communities is an enriching experience but acculturation, the absorption or marginalisation of one culture by another, is utterly destructive.

Awareness of this problem is growing, but policies in respect of it are still effete and arrogant. Persuading working kids to stay on at school, trying to get them to abandon the football stadium for the museum or driving them into university like cattle will not solve the problem of social exclusion. Indeed, those policies aggravate exclusion by reducing the size and confidence of beleaguered communities. Disaffected young people are more likely to form an anarchic underclass if they see the values of their elders denigrated by teachers and policy makers. They are trapped in a cultural no-man's land with no positive role models to guide them on their journey to maturity. After a couple of generations extended families can become marginalised beyond retrieval.

It is easy to valorise social exclusion. Millions of people are held in penal institutions world-wide. Some of these are unquestionably evil - a menace to themselves and to those around them. But many, probably most, are square pegs that can’t be driven into round holes. When you add to these the legions of the sad, the mad, the destitute, the confused and the lonely - medical friends suggest this may be as much as ten percent of the population - you see the scale of the problem. Retrieving these lost souls is terribly difficult and their presence in society is deeply disruptive. It is imperative, therefore, that we prevent more young people joining their number. We are not going to do this by providing stronger incentives to conform to some arbitrary social standard, but must help them find more sociable ways of affirming their distinctiveness and independence. Recent developments in educational policy have emphasised the importance of objective standards and auditable learning outcomes and imposed a long-overdue rationalisation of national curricula, at least in European countries. Citizens who cannot reason, communicate and interact with each other are incapable of functioning socially. Most people should have a basic understanding of formal reasoning (including some mathematics and the use of maps). They must read, write and speak one language fluently enough to demonstrate critical reasoning skills and be able to communicate adequately in another. They must also be able to look after their own mental and physical health and interact with each other in a sociable, tolerant way. Most people are capable of achieving this in the first sixteen years of life, though even now surprisingly few actually pull it off. Language skills and mathematics are a genuine difficulty for many. The ability to learn a second language degrades rapidly as puberty approaches. Pupils who have not started learning by their seventh birthday, preferably from a teacher who has genuine fluency, are unlikely to become proficient. Dubbing foreign television programmes in local languages reduces incentives and opportunities for practice.

This is why the English, French and Spanish are so lousy at foreign languages, while the Scandinavians, Germans and Dutch are so good. Maths is a language too and so must be mastered early. Sadly, most primary teachers can handle arithmetic well enough, but few have abstract reasoning skills. They are like French teachers who have a little grammar and vocabulary, but almost nothing to say more interesting than that their aunt's pen is on their uncle's desk. However, the skills base of the school curriculum is generally well understood. Great progress has been made over the last twenty years. There is a remarkable consensus among educators that children, as a population, now have academic skills their parents and grandparents undoubtedly lacked. This will probably be consolidated in the future.

It is the knowledge base of the curriculum that most urgently needs attention. Students attend school almost continually for eleven or twelve years and have the opportunity to learn something of the literature of their principal language, develop a layman's understanding of science and a level of artistic, cultural and social awareness sufficient to enable them to socialise. Sadly, the classroom is often the setting for a protracted clash of cultures because teachers and pupils are drawn from antagonised communities. Students on vocational programmes must receive professionally accredited training. Inevitably, the teachers responsible for these will be drawn from a range of knowledge communities and the problems of knowledge integration must be addressed both through formal training and in practice. However, the transfer of vocational skills is an almost well-posed problem. We have been doing it for generations and can readily develop procedures to valorise success. These, in turn, simplify the teaching and learning process considerably. If all parties are strongly committed to the project (as they often are with vocational training) success is almost assured. In contrast, the transfer of cultural values and social awareness is a wicked problem-domain because the people engaged in the process often have incongruent beliefs and a
range of unacknowledged stakeholders complicates the process. Unsurprisingly, children learn cultural values and social awareness more easily from elders of their own knowledge communities than from academic missionaries because this reduces the likelihood of culture-shock. The two principal causes of low teacher morale are the high proportion of disruptive children who have no stakeholding in public education and the failure of those responsible for the curriculum to understand that ethnic diversity and cultural diversity are not interchangeable concepts. Ethnically homogeneous populations are often very diverse, so there is no one-size-fits-all solution to the problem of teaching cultural values and social awareness. Although the principle of receiving cultural education from well-educated elders of one's own community has been accepted for acknowledged fourth world communities (indigenous peoples marginalised by powerful immigrants), the definition of an indigenous community is very narrow. Saami people in Northern Sweden have their own schools but native Finnish speakers do not. Welsh speakers in Britain enjoy some benefits, but smaller communities in Cornwall and Anglo-Normandy, for example, are neglected. There are many regional dialects in contemporary Britain, the speakers of which could be taught to read, write and speak a lingua franca without denigrating their own tongue.

The case for providing an educational experience tailored to the culture and aspirations of its recipients is strong. This experience need not supplement mainstream humanistic, scientific and cultural studies, adding yet more to an already over-stuffed curriculum. We could dump much of the geographical and historical knowledge base to make room for a geography and history more relevant to the students. The skills base of the curriculum would be much as it is now. Students would be expected to develop the communication, reasoning and social skills that enable them to interact with their neighbours, but would no longer endure the specious myth of a shared national or supra-national heritage. After all, if this cultural unity is so palpable, why is it necessary to
impose it on us by force of law and why do so many of the consumers of educational services feel alienated by them?

Let us replace the centralised command economy for educational services with a properly regulated market so the consumers of those services can tailor the knowledge base to their own needs and aspirations. Let them specialise, as professional historians and geographers do, by applying core skills to the topics that interest them. If they value ancient Greek and Latin, or the language and culture of Lancastrian cotton-weavers, the behaviour of Neolithic peoples or vocational training, let them negotiate provision with their local schools. Then teachers would be spared the stress of proselytising in hostile communities and could negotiate enforceable learning contracts with parents who, in turn, would be able to choose schools that affirm their own values while supporting the principle of tolerance in diversity.

None of this will solve the problem of social exclusion as it presents today because too many children have parents and grandparents who were themselves excluded. So the exclusion of the parents is visited on the children, creating social problems that resonate on a very long time-cycle. All we can do is provide support and incentives on a case-by-case basis in the hope of a gradual amelioration. It is possible to crawl out of the abyss of social exclusion, but difficult and painful. Many of those who do so need continuing support. The reflexes of community life, if they are not learned in infancy, never become completely spontaneous. However, the benefits for society are considerable because the pain and disruption stops with the current generation.

There is a grim old teacher's joke that education is the business of casting imitation pearls before real swine. That isn't education, but acculturation, and its origins lie in the specious snobbery of the aristocratic lineage. In the schoolrooms of my childhood were some who still spoke of 'ill-bred children' and 'bad blood' when what they meant was that they disapproved of the way we spoke and behaved. Doubtless there is sometimes a genetic component to antisocial behaviour, but more often the problem is environmental, and teachers are an important part of that environment.

Tragically, one can seldom solve such problems by transplanting children into ‘good’ environments - the act of transplantation is itself an emotional insult that fosters guilt and exclusion. One must at least consider making the child’s current environment as congenial as possible. Schools are part of this process. Responsibility for managing and avoiding conflicts must ultimately lie with teachers. Children who do not consider themselves stakeholders in society can hardly be expected to solve problems that defeat professionals. Although teachers rightly assert that a child’s parents must be held responsible for the behaviour of their offspring, many of the parents of disruptive children have themselves suffered social exclusion at the hands of other teachers. They are understandably defensive and usually lack the social capital to resolve these conflicts.

Teachers and students need protection from abusive children, of course, just as pupils need a curriculum that chimes with their values and aspirations, but the primary responsibility must lie with teachers. Teachers who cannot cope with these problems should not be accredited to work in front-line schools. Those who can should receive extra rewards for their skills and dedication. They certainly earn them.

Recall the distinction of management (the flexible, introvert competence) from regulation (the normative, extrovert competence). Teachers and school boards clearly exercise a managerial role. Regional education authorities serve as the regulators that negotiate a managerial model and delegate responsibility for implementation to managers. Conflicts between politicians who see education as a tool for imposing shared beliefs and those who see it as a device for enabling and empowering students often paralyse the educational process by disturbing this managerial model. One group demands auditing procedures to reduce managerial flexibility. The result is a centralised 'command economy' for educational services in which regulators have no choice but to manage. Teacher morale slumps, social exclusion is aggravated and costs rise. The other group favours a lighter touch, but soon runs into problems of quality assurance. Education is centrally funded and weak regulation creates an environment in which incompetent managers prosper. The challenge, therefore, is to develop a model and set of boundary conditions that separate managerial and regulatory functions and delegate effectively - allowing schools to innovate by tailoring services to local demand while assuring the quality of core services.

8. Cranks and Boffins

In Section 8 Essay 3 I ridiculed political realists oppressing those who frustrated the operation of supposedly deterministic laws, saying it was like throwing people in prison for defying the law of gravity. Political extremists often attack intellectuals (teachers and professional academics, for example). For every victim there is someone waiting in the wings whose opinions chime better with those of the regime. We tend to think of these as delinquent acts - qualitative deviations from normal behaviour. Yet their principal effect is to reify new boundary conditions that favour one set of academics and disadvantage another. This is a process we can observe in every generation of every modern polity. Every generation of academics has its unacknowledged stakeholders. We see them rather clearly in the Stalinist era or Nazi Germany because there were purges with evident loss of life and liberty. Elsewhere the effects of reification are less spectacular, but still palpable. Neo-Darwinists redefined evolution to drive Lamarckian evolutionists into the wilderness. Nineteenth century geologists reified Lyell's uniformitarian principle and held onto it right into the twentieth century, suppressing the theory of Continental Drift for half a century and marginalising the catastrophists. This process continues today.

Despite local deficiencies of scholarship and methodology, post-modernists, soft system theorists, sociologists, planners and humanists have amassed overwhelming evidence that knowledge is socially constructed. They know that ontology and values are autocorrelated and have substantial evidence to support that claim. Yet any physical scientist, however innocent of case study experience, can polemicise about the obvious need for a 'physics of society'. That they hold these opinions is perfectly acceptable in a free society. That national and supra-national governments actively promote those opinions with differential funding is not. This favouritism cannot be attributed to commercial spin-off, empirical evidence, market choice or the will of the people
manifest through the democratic process. If it were so, growth in quantitative sociology and natural science programmes at university would be funded commercially on the back of those spin-offs. Follow-up studies would show that large infrastructural projects paid measurable dividends in sustained economic growth. Students would abandon humanities, political science, social studies and management science to enrol on natural science programmes. This does not happen. The hard science hard-sell is COWDuNG reified at enormous expense by ideologues in national and supra-national agencies.

Integrative research is demanding, even under ideal circumstances, but when polities try to 'buck the trend' by imposing nineteenth century models on twenty-first century science, the process is complicated by resentment, mutual incomprehension and clear conflicts of interest. Good research managers can sometimes accommodate these conflicts, but life would be much easier if politicians would stop compounding them. I am not suggesting we replace the command economy with a market free-for-all, but advocating a gradual shift to a mixed knowledge economy in which supply and demand determine the broad sweep of provision and central funding is used to sustain strategically important competence. There will be a niche for integrative researchers in such a market, but it is bound to be a minority interest. Knowledge communities that have persisted for centuries are not going to disappear because politicians say they should and throw money around. They might as well save their breath and direct funds towards strategic investments.

Integrative researchers have a clear role to play in policy-relevant studies. The difference between multi-disciplinary and integrative research lies in the negotiation of a conceptual model all can accept, at least for long enough to develop a co-operative approach to some issue. Every knowledge community serves its members by providing a forum within which knowledge can be created and
maintained. This knowledge may be dimensionally incoherent with that of another community.

An anthropologist, for example, knows human behaviour is contingent and unpredictable. Policy makers know that the course of history is both predictable and manageable. Neither is wrong; they are working on different spatio-temporal scales. The anthropologist works on an extremely local scale where you cannot appeal to statistical laws of large numbers, and tiny accidents of history and geography can be valorised. Thus, when anthropologists co-operate with policy makers they must negotiate a model in which both can work and this, as we have seen, imposes constraints on both. As the number of communities represented in a research project increases, the axiomatic basis of the project and the scope for co-operation are reduced. Often progress can only be made if people set aside traditional rivalries to focus on shared values. Those involved in integrative work must keep their appreciative settings as open as possible - assuming a sceptical position in respect of their own beliefs. This can be uncomfortable. We are largely unaware of our own cultures and inclined, as a species, to be dismayed when culturally embedded beliefs are challenged. We are obligate social animals. Our survival depends on our ability to co-operate. When core beliefs break down that ability is threatened. Thus people who work across the boundaries of knowledge communities often pay a high price for doing so - not only must they tolerate their own elevated stress levels, they often lose the trust and goodwill of peers. This is why Mode 2 researchers have such poor career prospects in Mode 1 institutions. For much the same reason integrative researchers can never be sure of a welcome on the other side of the divide. Trying to integrate knowledge communities whose belief systems deviate in some key respects - hard and soft scientists, for example - creates stress and social isolation because it takes place in an epistemic frontier zone where hostilities can be triggered by an innocent word.

You cannot resolve such tensions with rhetoric and appeals to common sense. Even hard science method will not help, because it hinges on the assumption that ontological problems have been solved - that our knowledge of categories is sufficiently close to reality for progress to be made simply by checking the goodness of fit between expectation and observation. Let me re-iterate one of the fundamental messages of this book. To speak of the social construction of reality I would have to relax the definition of reality as that which is independent of human knowledge and belief. This would be a logically defensible model (subject to explicit re-definition) but semantically disastrous - a rich source of potential misunderstanding between hard and soft scientists. It is clearer and less contentious to assert that our knowledge of reality (expressed in terms of named categories) is socially constructed and informed by our purposes and understanding of causal relationships. Real or not, those shared beliefs cannot be ignored because they determine human behaviour which, in turn, re-shapes the socio-natural milieu that sustains, refutes or modifies those beliefs. We change the world by changing the way we see the world.

Value-, operational- and reality judgments are autocorrelated in human systems - we cannot alter one without some knock-on effect on the others. Consequently, we cannot solve ontological problems arbitrarily by fixing the definitions of categories (as natural scientists and mathematicians often do) because these constrain the value- and operational judgments on which the research is based. Institutions, laws and policy initiatives are not planets or electrons which we study simply for the fun of finding out - to know them is to influence them. Cultural ecodynamics is an applicable, self-referential science in which a definition constructed innocently 'for the sake of argument' can become a self-fulfilling prophecy with disastrous consequences for some stakeholders.

Since we cannot present our beliefs about the categories of human experience as objectively real, nor can we arbitrate without fixing those categories and actually changing the socio-natural systems we study, disagreements in integrative science can seldom be settled simply by taking a few measurements and making some calculations. This, as I have already explained, is not a philosophical nicety but a fundamental result of empirical research. So how are we to impose some scientific discipline? The solution I have proposed in this book is a fusion of soft-science ideas about boundary critique, hard-science methods in respect of boundary conditions and ideas about dimensional coherence derived from my own experience as a scientific odd-job man. We test a model by setting boundary conditions on the manifestation of dimensionally coherent (i.e. observable) classes (of systems, attributes and/or values). A model predicated on those classes is only defensible as a basis for executive action while the boundary conditions are satisfied. If the boundary conditions are violated, the appreciative basis of the model is flawed and an effort of re-conceptualisation is required. However, if no explicit boundary conditions are set, or the boundary conditions are expressed in terms of objects so dimensionally incoherent they cannot be valorised in any universe of discourse, we have, to use Popper's wonderful phrase, immunised the model against refutation. To immunise a model is to abandon modernism - humankind's only safeguard against the scholastic trap. It is terribly dangerous.

I have emphasised the sense of social isolation that often accompanies integrative work with good reason. In my experience people only become completely comfortable in an 'inter-discipline' (biochemistry or economic geography, for example) once the appreciative phase has passed and value judgments are more or less fixed. Indeed, the degree of social isolation experienced by researchers is a direct measure of the integrative challenge they have accepted.
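
The testing discipline described above can be sketched in a few lines of code. The following Python fragment is only an illustration of the idea - the model, the boundary conditions and the observed values are all hypothetical, invented for this sketch and not drawn from any project discussed in this book. A model is wrapped with explicit, observable boundary conditions and is treated as a defensible basis for action only while those conditions are seen to hold; the moment one is violated, the appropriate response is re-conceptualisation rather than continued use.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class BoundaryCondition:
    name: str                                  # e.g. "population growth stays below 2% a year"
    holds: Callable[[Dict[str, float]], bool]  # predicate over current observations

@dataclass
class Model:
    name: str
    conditions: List[BoundaryCondition]
    def defensible(self, observations: Dict[str, float]) -> bool:
        # The model is only a defensible basis for action while every condition holds.
        return all(c.holds(observations) for c in self.conditions)

model = Model(
    name="toy water-demand model",  # hypothetical example, not a model described in this book
    conditions=[
        BoundaryCondition("population roughly stable", lambda o: o["pop_growth"] < 0.02),
        BoundaryCondition("no rationing in force", lambda o: o["rationing"] == 0.0),
    ],
)
observed = {"pop_growth": 0.05, "rationing": 0.0}  # illustrative values only
if model.defensible(observed):
    print("Boundary conditions hold: the model may inform action.")
else:
    print("Boundary conditions violated: re-conceptualisation is required.")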

However, the rewards of integrative research are considerable and there seem always to be a few in each generation willing to take it on. There is the excitement of helping to develop a new knowledge system; access, through colleagues, to a breadth of vision and of knowledge it would take several lifetimes to encompass; and the chance to study and, maybe even solve, problems that cannot be addressed with any pre-existing model. Integrative research demands that we use a tightly constrained conceptual model to address a broadly defined and ambitious problem. We can never build a model that adequately reflects the knowledge of all participants but can sometimes negotiate a model that represents a subset of their individual beliefs. These reduced models sometimes change our understanding of the world by resonating in a surprising way with experience. Relatively simple knowledge systems can cast a new light on our experience. Unified and multi-disciplinary models are much less likely to do this because they are wholly focussed on conventional ontologies. They deal with the things everyone knows when we sometimes need to see the things the world has neglected, forgotten or missed.

This style of modelling is not good for the career. Politicians and stakeholders often have vested interests in conventional models and don't always want some troublesome researcher to bring other factors to the fore. They want to find pathways to sustainability but do not want to irritate the electorate. So they constrain us to respect conventional beliefs. Most people value economic sustainability more than environmental sustainability. How many of us have given up our annual holidays or our cars to reduce the rate of global warming, for example? In contemporary politics, the word 'sustainable' ranks with 'devout' and 'civilised' as synonyms for virtuous. 'Unsustainable' simply means evil. Politicians pay lip-service to sustainability but have no more idea what it is than the rest of us. When the electorate tells them what, precisely, must be sustained, they will concur or be thrown out of office. There are many good reasons for funding bad science.

However, we should not overplay the conspiracy theory. If, like me, you believe the task of a scientist is to keep society open to the surprising corollaries of simple theories, you will become absorbed and delighted by abstraction. You will want others to share your enthusiasms and may even be a little irritable with those who resist them. In short, you will become a crank. If, on the other hand, you claim to predict the FTSE from now till doomsday, tell politicians how to save the world (without inconveniencing influential pressure groups), write technical stuff that nobody understands and graciously re-interpret it in glib parables, they will be genuinely grateful for your attention. Our paymasters don't give a toss about the technicalities. They want authoritative opinions that chime well with their own and empirical evidence to substantiate those opinions. The high status afforded to scientists and technologists in advanced economies stems from their role as the advocates and sponsors of public policy. Indeed, the role of the scientific, economic or forensic 'expert' in our society is very like that of the temple priest in antiquity; they advise, legitimise and, within limits, constrain the secular power. To assume this role effectively one must have gravitas, and this puts the modernist at a disadvantage. Why hire a crank when you could have a boffin?

The world needs boffins. If all scientists were cranks, knowledge would change so fast we could never make use of what we have learned. Such instability could easily lead to chaos and anarchy. Occasionally a crank wins popular acclaim and is acknowledged as a prophet, but cranky science is usually an obscure preoccupation. Cranks reformulate problems, boffins develop solutions and gather data to test them. Six generations of boffins came and went between Isaac Newton and Max Planck. Charles Darwin made work enough for at least four generations. Most generations of scientists consist of a mass of boffins with a leavening of cranks. The mix works astonishingly well.

Scientists, like theologians and politicians, are the products of their own culture and you find all the vices and virtues of each age coded in their writings. Science gave us germ
warfare, atom bombs and eugenics, it is true, but scientists are not solely responsible for the imperialism, slavery, poverty, racism, institutionalised misogyny and repressive legal systems which created the demand for these - they were known two thousand years ago as the grievous by-products of urban life. Neither science nor theology is inherently dangerous, but the priestly role that scientists and theologians have sometimes assumed most definitely is. When they make scepticism a cultural norm, scientists can re-create human knowledge in a form that better fits experience and better serves our needs. That's part of the reason we live longer, are healthier and better fed than we were in the mediaeval period, or even a century ago. However, when a society (religious or secular) allows sceptical tendencies to be marginalised, an unhealthy consensus is established. Dogmatic authority re-creates the world in a form that accommodates the prejudice of its ruling class. That is how theologians decided it would be kinder to torture heretics to death than to let them face eternal damnation. That is how Holy Wars are initiated and witch-hunts justified. It is also how secular experts 'proved' women were intellectually inferior to men, Caucasians more highly evolved than Negroes and that there would be less suffering in the long run if counter-revolutionaries (or Jews, or aboriginal populations, or people with disabilities, or…) were spared the ignominy of a gradual decline by preventing them from breeding or 'educating' them into passive dependence.

Not only is the problem of reified knowledge ancient, so too is the solution, but each institution committed to it is liable to destruction or assimilation. Innovation is not an event that jolts society onto the 'true' path, but an exhausting process of appreciation and adaptation that cannot be sustained. Monastic communities that challenged priestly opulence were either purged or regulated by bringing them into the political mainstream. When the monastic schools could not meet popular aspirations in the twelfth century, peripatetic scholars began to trade and had to be organised into universities and regulated. The modernists abandoned the
stagnant mainstream of the fourteenth century universities but were themselves purged and marginalised by Reformation and Counter-Reformation.

Eventually Europe grew sick of sectarian bickering and the later modern period saw scientists and technical experts tending the shrines of timeless truth. Yet the Phoenix Cycle remained. The secular Grand Viziers of the later modern period have as much to apologise for as the Inquisitors of the early modern period. Even the renaissance events of the 1930s and 1960s were sorted and tamed - the sheep are now herded into the Mode 1 fold, the goats driven into the Mode 2 pit. The career expectancy of contractors is less than ten years. If, after that time, they have not found a conventional academic role, they must persist on the margins or quit.

Scepticism and sustained rational debate are ethically necessary because they clear away some of the COWDuNG that accumulates in times of stability. As a citizen and stakeholder, your opinion is as valuable as any expert's, but such debates are too often clouded by the rhetoric of scientific objectivity and universal truth. The idea of science as the rational pursuit of universal truth is only defensible in a world where scientists stick to physics and chemistry. The new science of cultural ecodynamics can only be the rational pursuit of useful knowledge. The most valuable knowledge, in our current predicament, is not that which shores up the status quo by making the rich richer, or replaces one dogmatic elite with another. We need knowledge that sustains peaceful, disciplined diversity and liberates others, including members of other species and those as yet unborn, from chronic suffering and habitat destruction. The threats and opportunities we face re-form constantly as history unrolls and our most useful knowledge is often transient. The scholastic trap, alas, is with us always - an artefact of the human condition. Modernism is a strategy for evading that trap. To our detractors, modernists are insufficiently concerned with truth, yet it seems to me the guardians of truth are insufficiently concerned with reason and the interests of those more vulnerable than themselves.

Thematic Glossary

Education is what survives when what has been learned has been forgotten.
B. F. Skinner

Pedantry is the only antidote to antiquism and this book contains a strong dose. I have laboured the distinction of reality from existence, of modernism from antiquism and of probability from chance. I have distinguished metaphysics (the study of reality) from epistemology (the study of knowledge) and from ontology (the study of existence). This allowed me to argue that early modern science is ontological: concerned with categories of object that might reasonably be said to exist. Later modern science, notwithstanding positivist spin-doctoring, is metaphysical: using mathematics to probe the reality beyond the valorisable. Finally, cultural ecodynamics is epistemological: exploring the social construction of beliefs and values and the complex co-dynamic interaction of this knowledge with the natural world.

I have longed to use such words as 'likely' and 'really' and slip back into the comfortable habit of talking about unpredictability in the language of gambling and chance. For a while I even contemplated re-naming 'reality judgments' as 'existential judgments', but something inside me rebelled. Modernism is not the formal rules of late mediaeval argumentation, the taxonomy of early modern scientists or the algebra of the later modern period. It is not even the glossary that follows this 'orientation'. These pedantic forms are the persistent artefacts of attempts to inculcate modernism in living communities - they are pattern-cards for modernist neophytes, not modernism itself. After a few decades they lose their immediacy and become historical curios - hardly worth reading. Modernism is a living culture of rational open-mindedness manifest as intellectual habits acquired through play, repetition and social interaction. Once those habits are fixed, the catechetical verbiage can be shed like an outgrown skin.

Accident The values of attributes that may be possessed by members of a class - possible values. See essence.

Accidents of History unpredictable events that may lead to epiphanies.

Adaptive Change a response to perceived threats or opportunities that results in qualitatively new types of dynamic. See epiphany, normative behaviour, a priori.

Adaptive Potential the ability of a social unit to embrace adaptive change. Determined in part by the appreciative setting of key actors and in part by the social and evolutionary constraints under which they operate. See evolution, Darwinian.

Aha! Moment (see innovation)

Antiquism rational realism. Originally associated with late Mediaeval scholastics, but also manifest among nineteenth century physicists and philosophers of science. See rationalism, modernism.

Appreciation a concept formalised by Sir Geoffrey Vickers to describe the process of formulating policy by developing value-, reality- and operational judgments. Corresponds in general terms to what some social theorists call 'discourse'. See knowledge.

Appreciative Setting the receptivity of an individual or group to new reality-, value- and operational judgments. Determined by intellectual abilities, knowledge and receptivity.

a fortiori reasoning is reasoning from the strongest (i.e. most persuasive) argument. It requires us to form a subjective judgment about plausibility on the basis of empathy and shared belief. The development of probability theory in the 1660s placed a fortiori reasoning on a quantitative footing by equating the strength (i.e. probability) of a proposition with a gambler's likelihood.

a priori and a posteriori perspectives Induction (reasoning from observed effects to putative causes) is also called 'reasoning a posteriori'. Isaac Newton claimed to have
worked this way in his calculus text, De Quadratura Curvarum. Yet he did not stop with these induced rules, but set them down as axioms, articulated them with physical laws, some of which could not have been obtained by induction, and systematically deduced their corollaries. This method (working from putative cause to effect) is called 'reasoning a priori'. In this book I extend these ideas to human action. As we look back in time we see a range of processes and resolved contingencies that appear as an inexorable, deterministic progression leading to our present state. This gives the impression of normality and predictability and encourages 'action a posteriori' - on the basis of custom, experience and praxis. When we look into the future, however, such contingencies are unresolved and we gain the impression of many possible pathways diverging from our present state. The need to adapt and manage uncertainty becomes apparent and we can only decide a course of action a priori - deducing or inferring a response to perceived threats or opportunities from theoretical first principles - the 'what if?' approach.

Arena the spatial domain in which a system operates - a localised subset of a landscape.

Attribute an object that represents a category of observable property. Each attribute has two or more values. Thus value is to attribute as species is to genus. The attribute of blueness, for example, may have values that represent discernible shades and hues of blue. The set of values that describes a system is its state. The value 'ultramarine', for example, implies the existence of a class of ultramarine objects.

Axioms a set of reality-, value- or operational judgments accepted as prior beliefs.

Boundary of an object. Originally the region of space-time occupied by a system, the concept has been generalised to all reality judgments made in respect of systems, attributes, processes and epiphanies. The act of identifying or valorising an object takes place in a recognisable arena and
requires a process of observation over some time interval. The boundary of an object is the spatio-temporal envelope or universe of discourse within which it and, by implication, all its constituent sub-objects can be valorised simultaneously.

Boundary Conditions Originally coefficients of differential equations that had to be fixed to declare systems of differential equations solvable. In physics and biology these coefficients often referred to flows across system boundaries. Here the concept is extended to refer to the conditions that must be satisfied in order for a reality judgment to be defended. Since the existence of any object is contingent on a set of reality judgments (a theory), violation of boundary conditions refutes that theory. The object (as we understand it) does not exist.
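
To illustrate the original mathematical sense with an arbitrary textbook example (not one used elsewhere in this book): the differential equation y''(x) = -y(x) has the general solution y(x) = A sin(x) + B cos(x), which leaves the coefficients A and B undetermined; imposing the boundary conditions y(0) = 0 and y(pi/2) = 1 fixes B = 0 and A = 1, so that the problem becomes solvable, with the unique solution y(x) = sin(x). In the extended sense used in this book, boundary conditions play the same role: they are the explicit statements that must hold for a model built on them to remain defensible.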

278

Complex of a problem-domain in which problems cannot be well-posed, i.e. a problem whose solution either does not exist or is not unique. Complex problems are made tractable by an effort of appreciation that resolves them into a model system with a deterministic core embedded in an unpredictable environment. Hard science deals with β-complex (almost well-posed) problems which can be solved subject to boundary conditions. Soft science deals with α-complex (wicked) problems that are so ill-structured it is sometimes impossible to reduce them to a model without aggravating social tensions and changing the system itself.

Conflicts (over ontology) arise when two knowledge communities disagree over some policy issue. They can sometimes be resolved by shifting attention from products to processes but invariably require some adjustment of appreciative setting.

Connectedness (logical) two objects are connected if one is entailed by the other. If one object is valorised, the other is implied. Logical connectedness is rare.

Constraint the regulation of one object by another.

Constructionist one who specialises in the construction of models and the formulation of an almost well-posed problem.

COWDuNG an acronym due to C. H. Waddington to describe the prejudices and culturally embedded beliefs of a ruling class - the COnventional Wisdom of the DomiNant Group.

Creed see knowledge.

Cultural Ecodynamics the dynamic inter-connection of culture and nature.

Culture tacit or unexamined beliefs that define an individual's sense of identity and communal membership.

Culture Shock information that challenges culturally embedded knowledge.

Data symbols representing observed or inferred values.

Deconstructionist one who specialises in observation and the critical evaluation of models.

Deep Ecology (ecosophy) an environmental movement initiated by Arne Naess which asserts that value judgments are a significant determinant of attitudes to the environment - a belief that nature is of value in and of itself and that policy in respect of nature should not be based purely on economic spin-off or hard science.

Diagnostic values see key.

Dimensional Coherence see coherence.

Disciplines (academic) etically defined subject areas used as administrative categories by academic institutions and policy makers.

Dogmatism irrational realism. The belief that some knowledge, however incoherent, is nonetheless universally true. See rationalism, relativism.

Dynamic of a system whose state varies through time.

Ecosophy see deep ecology.

Emergence The nineteenth century English philosopher George Henry Lewes is credited with distinguishing emergence from resultant phenomena. A phenomenon is emergent if it is not logically entailed by a set of axioms.


Emic a distinction of personal or cultural significance. See etic.

Empathy a tendency to interpret the actions of another being or thing as driven by internal value judgments. See teleology.

Englobe the act of representing another knowledge community as a special case of one's own domain of expertise. See reductionism.

Enlightenment Myth a scientific creation myth that locates the origin of science in the seventeenth century and associates it with the development of formal algebra and the later modern period. This myth, which is more prevalent in Protestant countries, equates all metaphysics with biblical hermeneutics and dogmatic realism. It argues that science is rational, quantitative and value-neutral while metaphysics is irrational, qualitative and value-driven. See Renaissance Myth.

Epimenides' Paradox: this statement is a lie.

Epiphany an object that transforms a category system into one of a qualitatively different type by changing its essence. An epiphany can either create, modify or annihilate a whole class of system, leading to innovation.

Epistemology an approach to problems of knowledge.

Essence the values of attributes necessarily shared by all members of a named class - necessary or essential values. See accident.

Etic a pragmatic or functional distinction imposed by an outside observer. See emic.

Evolution a word meaning to unroll or develop. Sometimes used by hard scientists to describe any dynamic process. Here restricted to historical processes in which the behaviour of one species or class of organism is actually or potentially constrained by its material and biological environment. Five evolutionary theories are explored here:

Evolution, Darwinian a process of 'descent with modification' under natural selection - a form of stress that reduces reproductive success or viability. Cultural systems usually evolve by anticipating the consequences of deleterious adaptations and avoiding them (see a priori). Animals and plants explore these options and live or die with the consequences.

Evolution, Neo-Darwinian following the rediscovery of Mendelian genetics, biological evolution was re-defined as a process that caused a change in the gene pool. The effects of this were three-fold:
1. It excluded social and cultural processes from consideration because these proceed in a Lamarckian way without concomitant genetic change, driving a wedge between biology and social sciences.
2. It shifted attention from natural selection (processes that punish failure) to a more Spencerian conception of survival of the fittest, thereby building adventitious processes into the model.
3. It focussed attention on genetics and equilibrium-seeking models, weakening the role of contingency and unpredictability in evolutionary models and separating evolutionary theory from ecology.

Evolution, Lamarckian Lamarck believed evolution was effected through the inheritance of acquired characters. This model works well for cultural systems, but rather badly for biological systems and was abandoned by the Neo-Darwinists.

Evolution, Marxian a theory comparable, in some respects, to Herbert Spencer's in which social evolution proceeds by the pursuit of self-interest and the survival of the fittest, but which acknowledged that one of the effects of this was to create winners (motivated by self-interest) and losers (effectively responding to natural selection). This model allowed Marx to predict a revolutionary cycle comparable, in many respects, to the Phoenix Cycle explored here. Later Marxist theories were highly deterministic and, in this form, were refuted by empirical data. Revolutionary cycles can be broken and evaded.

Evolution, Spencerian Herbert Spencer was a social evolutionist whose work on the subject predated Darwin's, but who subsequently adopted the Darwinian mechanism of natural selection. Spencer believed progress was logically necessary. The evolution of modern, social humans, stronger and fitter in every respect than their forebears, was explained in terms of continual improvement of the human stock by natural selection. It was Spencer, not Darwin, who coined the term 'survival of the fittest'.

Existence requires that the members of a named class of object can be valorised dependably in some universe of discourse and that the key to this class be smaller than its essence. It is conceivable that an object is real and yet does not exist (see mystical). It is also possible that an object exists but is not real. Universal truth, for example, may be real, but does not exist in valorisable form. Credit ratings, on the other hand, exist, but are not real - they are artefacts of human belief.


Existential Proposition a proposition constrained in such a way that probability theory can be derived from a set theoretical basis. This requires that the probability of a non sequitur (like all swans are white AND all swans are black) must be zero.

Explanation, Explanatory Power When scientists began to investigate dynamic problems they incorporated objects into their models that would once have seemed abstract and mystical. Although these objects do not exist in the domain of sense experience, their properties can sometimes be inferred by observing changing values and used to predict valorisable regularities that cannot be predicted by other models. Closing a system ontologically in this way is called 'explanation'.

Genus a super-class consisting of one or more species. Each species is a member of only one genus. Every instance of a species possesses the essence of the genus of which it is a member. The essence of a species, therefore, is a superset of the essence of its genus.

Hard Science see science.

Hermeneutics The scholarly method of interpreting text. It depends heavily on teleological method and empathy. Biblical hermeneutics, in conjunction with dogmatic realism, may be used to justify oppression and, in this form, is often erroneously equated with metaphysics. See Enlightenment myth, emic.

Heuristic a general rule that allows us to tackle a problem - an interpretive theory. Heuristics are generally weaker than universal scientific laws in the sense that a single counterexample cannot disprove them. However, it is possible to set boundary conditions on heuristics and so make them empirically testable, often using ontological method.

Identification see systematics.

Information data flowing into an object from its arena that changes its state. In hard science and engineering applications of cybernetics it is generally assumed that information simply exists - the definition is etic. In soft science applications information is data interpreted by the recipient. The distinction of information from data, therefore, is determined by the appreciative setting of the recipient and emic.

Innovation a change of beliefs leading to a change of behaviour. A class of epiphany commonly manifest in cultural ecodynamics. A shift from normative to adaptive behaviour. See appreciative setting.

Integrative of any process that requires the co-operation of two or more communities.

Jizz a gestalt ability that enables an experienced observer to identify the members of a class without explicitly checking its key attributes.

Jonah's Law we can only predict the course of history in respect of processes no human action can change and can only change the course of history if our predictions are irreducibly uncertain. See Russell's Paradox.

Judgment an explicit belief. May be creedal (axiomatic) or theoretical. See also knowledge.

Judgment, Operational an opinion about causal relationships that may form the basis of executive action - by engineering certain products or processes we hope to change the course of history.

Judgment, Reality an opinion about the objects that exist, or might exist. Often formulated pragmatically. See etic.

Judgment, Value an opinion that gives direction to executive action - a sense of purpose or aspiration. May not have any ethical or moral overtones, but has an emic basis.

Key values the subset of essential values sufficient to identify an object as a member of a named class.

K.I.S.S. a latter-day form of Ockham's razor - Keep It Simple, Stupid. See reductionism.

Knowledge Shared beliefs that enable people to co-operate. Knowledge is linguistically communicable and so requires us to resolve experience into named objects and classes of object. Culture is embedded or tacit knowledge that defines the norms and expectations of a community. Creedal knowledge is explicit, but deeply held, belief - judgments a person or community is reluctant to abandon. Theoretical knowledge is a set of beliefs or axioms formulated provisionally that, nonetheless, form a basis for action.

Knowledge Community see community.

Knowledge Creation the discursive or appreciative process by which shared beliefs are changed. See innovation.

Knowledge System (K-System) see system, knowledge.

Landscape a physical subset of space and all the objects (animate and inanimate) within it. See time geography.

Likelihood the ratio of successes to trials used by gamblers to estimate risk and calculate bets. See probability, odds, a fortiori.

Management the process of ensuring that executive action is timely, legal and efficient. See regulation.

Metaphysics an approach to problems of reality.

Modelling a method of disciplined contemplation that resolves human experience into named objects (systems, attributes, processes and epiphanies). A model is not a map of objective reality formed solely to facilitate understanding, but a map of subjective belief developed to help people co-operate in acting upon the world. Consequently, models are always informed by an effort of appreciation and determined, in part, by prior knowledge.

Modernism rational scepticism. A willingness to assume a sceptical position in respect of one's own beliefs. See rationalism, antiquism.

Modern Period the period, beginning in the later fourteenth or early fifteenth century, that saw the rise of secular humanism and new forms of art in Europe. The early modern period began with the Renaissance. The later modern period corresponds to the introduction of dynamic approaches in the seventeenth century and led to a schism between the humanities and the natural sciences.


Mystical a class of object that does not exist, i.e. cannot be valorised in a universe of discourse accessible to human senses. Mystical objects may be real in the strict sense of the word. Since we cannot apprehend them we cannot prove their un-reality. The investigation of mystical objects belongs to the domain of metaphysics.

Naming see systematics.

Natural Selection see evolution.

Necessary see sufficient.

Non sequitur a proposition or argument that is logically inconsistent with itself. The proposition all swans are white AND all swans are black is a non sequitur.

Normative Change is driven by a process that has no direct impact on conceptual models or beliefs. The nature of the dynamic, therefore, is qualitatively uniform. See adaptive change, a priori.

Null State a state that describes an object that is not yet or no longer an element of a named class.

Object a perceptual structure representing a more or less persistent, named entity (thing). There are four types of object: system, attribute, process and epiphany. The distinction requires an effort of appreciation - it is socially constructed. See systems ontology.

Observable information derived from sense-data by a process of valorisation.

Ockham's Razor Pluralitas non est ponenda sine necessitate (plurality should not be posited unnecessarily). The idea apparently did not originate with Ockham, though he used the words in his own writing. It may even have been a scholarly aphorism.

Odds the ratio of successes to failures.

Ontology an approach to problems of existence.

Organism in biology, a living thing. The founder of General System Theory, Ludwig von Bertalanffy, tried to generalise the concept to social systems, arguing that an organism was a system capable of extracting information and energy from its environment, buffered against environmental fluctuations and capable of apparently teleological behaviours.

Paradigm Shift a form of scientific innovation described by Thomas Kuhn.

Pocket of Local Order (POLO) in time geography, a localised bundle of resources and opportunities in a landscape.

Policy a set of norms (the product of an appreciative process) designed to give direction, coherence and continuity to some executive action. See regulation.

Positivism a system of philosophy originating in nineteenth century social science which holds that metaphysical speculation is meaningless and that there is a universal, a priori scientific method that covers both the natural and social sciences. Positivism emphasises the role of empirical (i.e. positive) evidence as a value-free method of investigation. In its extreme forms it holds that non-science (metaphysics) is non-sense. Positivism is a product of the Enlightenment Myth and, as such, confounds metaphysics (the study of reality) with biblical hermeneutics and dogmatic realism (the belief that the categories of human knowledge are objectively real).

Possible see sufficient.

Probability a number between 0 and 1 that represents a person's assessment of the strength or believability of a proposition. 0 represents disbelief, 1 absolute confidence. See likelihood. (A short numerical sketch follows this group of entries.)

Problem, Almost Well-Posed a problem whose solution exists and is unique subject to boundary conditions - a complex problem.

Problem, Dynamic a problem involving systems whose states change. They are solved by treating process as a necessary cause and state as its effect. The ability to tackle dynamic problems was the defining attribute of later modern science.
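A minimal numerical sketch of the likelihood, odds and probability entries above; the figures are invented purely for illustration, and the conversion between odds and the 0-1 scale is a standard one, not a formula taken from the text.

```python
# Illustrative figures only: 30 successes in 50 trials.
successes, trials = 30, 50
failures = trials - successes

likelihood = successes / trials      # ratio of successes to trials (0.6)
odds = successes / failures          # ratio of successes to failures (1.5)
probability = odds / (1 + odds)      # the same assessment on the 0-1 scale (0.6)

# A non sequitur ("all swans are white AND all swans are black") must
# receive probability zero if probability theory is to be derived from a
# set-theoretical basis (see Existential Proposition).
p_white, p_black_given_white = 0.9, 0.0
assert p_white * p_black_given_white == 0.0
```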


Problem, Historical a problem that can only be addressed by treating process as a possible effect of a designated epiphany because human beliefs determine behaviours that, in turn, feed back to reinforce or invalidate belief.

Problem, Ill-Posed a problem whose solution either does not exist or is not unique.

Problem, Ontological a problem solved by deciding what categories of object may be said to exist in the domain of sense experience - the principal focus of Greek early modern science.

Problem, Well-Posed a problem whose solution exists and is unique.

Problem, Wicked a problem so ill-structured it cannot be posed definitively and must be resolved by an effort of appreciation. Wicked problems deal with situations in which no coherent model can be agreed among stakeholders. They often refer to issues of equity where solutions are not objectively 'true' or 'false', but subjectively 'better' or 'worse'; there are no stopping rules, no objective criteria for satisfactory solution. Every case study is a unique, one-shot operation and every implemented solution has consequences that cannot be undone. The term was coined by Rittel and Webber.

Process an object capable of changing the accidents of a system. May be represented as a sequence of diachronous states - a body of experience with appreciable time-depth - or as a function, a single-valued transformation of a space onto itself. Processes are sometimes represented cybernetically as information flows that change the state of the recipient system. They may also be represented mathematically as a function or mapping of a state space onto itself. Every state space has a null state so that a process can, under some circumstances, create or annihilate a system. Processes have no effect on conceptual taxonomies. See epiphany.

Product a valorisable object, usually generated by human action.
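A minimal sketch of the Process and Null State entries above, assuming a toy state space; the states and the transition rule are invented for illustration and are not drawn from the book.

```python
# Illustrative only: a process as a single-valued mapping of a state space to itself.
# The state space includes a null state, so the process can annihilate a system.
NULL = "null"                          # not yet, or no longer, a member of the class
STATE_SPACE = {"seed", "sapling", "tree", NULL}

def grow(state: str) -> str:
    """One time-step of a hypothetical growth process."""
    transitions = {"seed": "sapling", "sapling": "tree", "tree": NULL, NULL: NULL}
    return transitions[state]

state = "seed"
while state != NULL:
    print(state)                       # seed, sapling, tree
    state = grow(state)                # the final step maps the system to the null state
```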


Project (research) executive action with a start-date, an end-date and one or more deliverables or products.

Project (time geographical) executive action subject to space-time constraints - a time geographical process.

Rationalism the belief that any logically incoherent set of axioms contains at least one that is false.

Real to be real is to be independent of human knowledge and belief. A credit rating, for example, may exist - it can be valorised in some universe of discourse - but is not real.

Realism the belief that it is possible to assert some universal truths as such with absolute certainty.

Realism, Naïve the belief that the axioms and empirical observations of natural science are value free and objective. In the late eighteenth and nineteenth century many scientists equated metaphysics with biblical hermeneutics and hermeneutics with realism. As they struggled to assert their identity and distinctiveness, a great deal of semantic slippage occurred. Naïve realism was one of its products. See Enlightenment myth, positivism.

Reductionist one of those troublesome words that have many definitions in the wider literature. It is applied to those who accept the reductionist thesis (that all science can be reduced to Newton's laws of motion), which is a case of englobing. However, it is also used to describe one who uses a simplified model to render a problem almost well-posed (the K.I.S.S. approach). The term is often used critically. To be called a reductionist in a humanities department is to be denounced. See Ockham's Razor.

Regulation the constraints imposed on a process by its external environment. In cybernetics, regulation is synonymous with external control - an information flow is received that changes the state of a system and the recipient is passive. In appreciative theory, however, regulation is often negotiated on the margins of a system and complicated by emic factors. There are two types of regulation. Regulation by norm sets conventional standards (policies) for system behaviour. Regulation by threshold avoidance sets limits on the acceptable range of responses. See management.

Relativism irrational scepticism. The belief that all knowledge systems are of equal merit. See rationalism, dogmatism.

Renaissance Myth a scientific creation myth that locates the origin of science in the later fourteenth and early fifteenth centuries. This myth, which is more prevalent in Catholic countries, associates science with the re-birth of humanism and so must gloss over the mathematical revolutions of later seventeenth century science, which can hardly be explained in terms of a Classical renaissance. See Enlightenment Myth.

Russell's Paradox a proof that the set of all knowable sets is not closed and, by implication, that knowledge dynamics are α-complex.

Scepticism the belief that it is impossible to know universal truths as such. Some hard scientists perversely misinterpret scepticism as the belief that the world of which our senses speak is not real. It is possible to engage in metaphysical research (addressing questions of ultimate reality) without abandoning scepticism provided one accepts the insights gained thereby as imperfect and provisional.

Scholasticism an approach to biblical hermeneutics based on the rational analysis of texts and formal debate. Originated in the late eleventh or twelfth century.

Scholastic Trap the acceptance of new knowledge often obliges us to adopt a sceptical position in respect of the old. Scepticism is, therefore, a prerequisite of innovation. However, the exploitation of that knowledge requires consensus and the process of building that consensus predisposes a community to a realistic position. The appreciative setting of a community often changes as it moves from knowledge creation to exploitation.

Science the rational pursuit of knowledge.


Science, Hard (natural science) a technical or engineering approach to knowledge creation focussed on well-posed and almost well-posed problems. See complexity.

Science, Soft (artificial science) the study of human action, beliefs and institutions. An approach to knowledge creation typically focussed on almost well-posed and wicked problems. See complexity.

Small World Bias We can often posit dimensionally coherent propositions and estimate conditional probabilities in a small universe of discourse. This is how specialist knowledge bases are created. However, when we use specialist knowledge to formulate policy options we must acknowledge the possibility that the axioms on which that knowledge stands cannot be defended because the categories on which they are predicated cannot be valorised.

Species a named class of objects all of which share certain essential characteristics.

Static of a system whose state is time-invariant.

State the values of the attributes that characterise an object at a given time.

State Space the set of all possible states a system can display. Every state space needs a null state that describes a system that is no longer or not yet a member of the designated class.

Sufficient, Necessary, Possible the manifest values of a set of attributes are sufficient to deduce membership of some class if possession of those attributes implies membership. They are necessary if membership of the class implies possession of those attributes. They are possible if membership of the class does not preclude their possession. Clearly, the sufficient set is a subset of the necessary set which, in turn, is a subset of the possible set.

Symmetry a logical expression of equivalence that enables an observation to be translated into the solution of a problem. For example, if I believe that X + 3Y = 0 and observe that Y = 2, I can use the symmetry: X = 0 - 3Y to solve the problem of valorising X.
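A minimal worked version of the symmetry example above; the numbers are simply those used in the entry, and the function name is an invented convenience.

```python
# The symmetry X + 3Y = 0 can be rewritten as X = 0 - 3Y,
# so an observation of Y translates directly into a value for X.
def solve_for_x(y: float) -> float:
    return 0 - 3 * y

observed_y = 2
x = solve_for_x(observed_y)
print(x)   # -6: observing Y valorises X via the symmetry
```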


System a persistent object or collection of interacting objects. To be persistent the system must have some attributes whose values do not change (parameters). Some of these static attributes are diagnostic - they enable us to identify the system as a member of a named class (see systematics). Others are essential - possessed by all members of the class. Many systems also have dynamic attributes (variables). The values of dynamic attributes are transformed by processes. Epiphanies modify essential values - transforming or annihilating a category of system. A system is analogous to a mathematician's set. The word comes from Greek roots meaning to 'stand together'. See universal, modelling.

System, Knowledge (K-System) a system that represents the community working on some problem. Used for designing and managing integrative projects.

System, Open see complexity.

System, Viable a concept due to the cyberneticist Stafford Beer. A viable system is one whose processes are sustained despite environmental fluctuations. See organism.

Systematics the process of classifying, naming and identifying the species and genera of human experience - an approach to modelling. Classification involves identifying a group of objects which have the values of many static attributes in common. These shared values define the essence of the class. Naming associates a unique name with that class. Identification requires us to select a small number of diagnostic or key values that allow us to valorise a member of that class. The minimum requirement of a non-trivial classification is that the set of essential values be larger than the set of key values in every class. The ideal towards which we are moving is a strong classification in which the key is very much smaller than the essence, so that the act of identification unlocks a large body of supplementary information. See also boundary conditions.
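A minimal sketch of the Systematics entry above, assuming a toy classification; the class, its essential values and its key are invented solely to illustrate the requirement that the key be a proper subset of the essence.

```python
# Illustrative only: a toy class whose key is much smaller than its essence.
essence = {                      # values every member of the (hypothetical) class shares
    "plumage": "black-and-white",
    "flight": "flightless",
    "habitat": "marine",
    "diet": "fish",
}
key = {"plumage": "black-and-white", "flight": "flightless"}   # enough to identify a member

# A non-trivial classification requires the key to be a proper subset of the essence.
assert set(key) < set(essence)

def identify(observed: dict) -> bool:
    """Identification: check only the key values of an observed object."""
    return all(observed.get(attr) == value for attr, value in key.items())

# Identifying a member via its key unlocks the rest of the essence as supplementary information.
observed = {"plumage": "black-and-white", "flight": "flightless", "location": "shore"}
if identify(observed):
    supplementary = {a: v for a, v in essence.items() if a not in key}
    print(supplementary)         # {'habitat': 'marine', 'diet': 'fish'}
```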

Systems Ontology the key idea of this book is that all system objects are mental structures imposed on experience and sense data. Mediaeval systems research was restricted to two types of object, which I call systems and attributes. This allowed them to tackle basic ontological problems. The revolution of the seventeenth century was the addition of a new structure - the process - an object capable of transforming the accidents of a system. From the process we derive the mathematical concept of a dynamic system or machine - a single-valued transformation of a set onto itself. The third revolution is the addition of a new type of object, the epiphany - an object that can transform the essence of a system, thereby creating a qualitatively new type of system.

Teleology the tendency to ascribe purpose to other things.

Theory see knowledge.

Time Geography an approach to space-time behaviour developed by the Swedish geographer Torsten Hägerstrand. Time geographical processes (called projects) are subject to constraints arising in part from the limitations of the human condition (we are social, corporeal beings who cannot be in two places at once or exactly co-located in space-time, for example) and in part from attributes of the arenas in which we operate.

Unacknowledged Stakeholders When we impose an ontology on a socio-natural problem we often make boundary judgments that coincidentally determine who is and is not a legitimate stakeholder. In the late nineteenth century, for example, the stakeholding of women in the political process was unacknowledged. Debates about the claims of unacknowledged stakeholders often degenerate into polemical statements about common sense and the correct definition of terms because reality judgments and value judgments are intimately linked.

Universal in Mediaeval philosophy a category or class of object analogous in some respects to a mathematician's set or a systematic biologist's species. Universals are either explicitly named categories (the category of cats, for example) or implied by the use of attributes. The word derives from Latin roots: uni (one) versum (turned toward). See system, systematics.

Universal Proposition A universe of discourse exists in which probability methods are useful. This is a fundamental premise of probability theory. See a fortiori.

Universe of Discourse the spatio-temporal envelope or boundary over which an object (system, attribute, process or epiphany) can be valorised to an operationally acceptable level of uncertainty. It consists of the arena in which an object exists and a spatio-temporal envelope of remembered arenas.

Valorise of an object, to give value to or make manifest - to identify. Every recognisable object must have a boundary - a spatio-temporal universe within which it and all its constituent sub-objects (systems, attributes, processes and epiphanies) are simultaneously and uniquely valorisable.

Values the manifestation of an attribute in a particular object. Value is to attribute as species is to genus. The attribute of blueness, for example, may have values that represent discernible shades and hues of blue. The set of values that describes a system is its state.

Viable System see system, viable.
