Evolutionary Robots: Our Hands In Their Brains?

James V Stone

Biological Sciences/Cognitive and Computing Sciences, University of Sussex, England. [email protected]

Abstract The study of learning and evolutionary adaptation has yet to provide a theory which is sufficiently detailed to enable the construction of even the most primitive synthetic animal. I argue that this is so for three reasons. First, unlike other scientific fields, such as theoretical physics, there is no universally accepted paradigmatic approach to the study of the brain. Second, there are certain fundamental (highly complex) `functional primitives' (e.g. types of learning) immanent in nervous systems, which are a necessary prerequisite for perceptual processes, and which are not currently possessed by any animat. Third, even though genetic algorithms are powerful optimisation techniques, the conventional use of genetic algorithms is flawed because it attempts to model only a restricted set of properties found in natural evolutionary systems.

1 The Evolution of Selective Blindness

"We face, then, two great stochastic systems that are partly in interaction and partly isolated from each other. One system is within the individual and is called learning; the other is immanent in heredity and in populations and is called evolution. One is a matter of a single lifetime; the other is a matter of multiple generations of many individuals." [1].

The doors of perception through which we view the world were first opened as experiments in desperation. Out there, beyond the vista of our senses, is a raging cacophony of interacting events, too fast and too slow, at frequencies too high and too low, to be detectable by us. This is the stuff that our evolutionary forebears, playing in the primordial casino, rejected as useless. Had they not done so we could be capable of seeing a wide range of colours, beyond the limited spectrum available to our eyes: infra-red to see in the dark, ultra-violet to see through fog. We could have polarity-sensitive eyes (and be partially blinded by wearing Polaroid sunglasses).

Back in the casino, some animal found that sensing vibrations on its skin could be used to avoid attack, and some other developed ears to take advantage of these vibrations. One of its offspring was born tuned into the 30-50 kHz bandwidth; what a feat of engineering for all that sticky, organic fragility to meld itself into such precision. But that animal was literally out of tune with its environment, where useful vibrations arrived in the 5-20 kHz range. It was eaten whilst staring fixedly at a beetle whose mating call was a 40 kHz frequency-modulated quiver. Evolution has selected only a small proportion of the physical world for analysis. But it is not sufficient that we, as perceivers, are insensitive to most types of physical events around us. Even those events that are detectable by our sense organs are processed in terms of their physical context. Thus we do not see photons, or hear frequency cycles. We see spatially extended entities, and hear temporally extended sounds. The physical world has been twice filtered (once by the types of sense organs we possess, and once by their means of transmuting events) before each of its constituent parts is realised even as the most primitive of perceptions. And there are more filters. The green we see at midday is physically different from the `same' green we see at sunset. That is, even though the reflectance properties of a `green' surface remain unchanged, the spectral balance of light varies throughout the day. Consequently, the spectrum of light reflected from any surface also varies throughout the day. Thus our perceptual apparatus not only has access to a small proportion of the physical events around us, but it also transforms these events before we are aware of them. This third filtering operation ensures that certain events which are physically dissimilar are perceived as being the same (and vice versa). The nature of the transformations on the physical world is a function of an individual's developmental history.
This constitutes a fourth filter type, the developmental filters. Every new-born infant can discriminate between phonemes not in its `native' language[5]. The experience of hearing a single language, which contains
a sub-set of all phonemic categories, ensures that each adult speaker can discriminate only the phonemes which exist in his/her own language. There appear to be corresponding filters in the visual system. If monkeys are raised in a normal environment then their visual systems become sensitive to lines at all orientations. However, monkeys raised in an environment of lines at one orientation only have visual cortical neurons which respond preferentially to lines at that orientation[3]. Thus, what we perceive as the physical world is a multiply filtered and transformed version of that world. The combination of these four filter types ensures that most of the physical world is literally off-limits to us. Both light and sound are packaged for us by the physical characteristics of our sense organs, so that what we take to be the perceptual world is actually the result of an interaction between the physical world and the structure of our sense organs. Our perceptions are the result of an historically interactive process. The world we perceive is given to us via organs suited to different purposes, and tuned for different reasons to various ranges and amplitudes, via developmental biases which re-package the physical world into the necessary perceptual half-truths required to confront what would otherwise be the overwhelming richness of the physical world. Even the historical interactions between the physical world and our inherited filters do not define our perceptions. We are not idle cameras pointed at a shifting scene. We are constantly engaged in the process of actively seeking out precisely those qualities that our sense organs are suited to analysing. So where is this new-born infant, the tabula rasa, waiting to be written upon? The slate is not blank. It is already filled with ancient writings that specify what types of entities may be added to it. Furthermore, each infant is not a slate that is only waiting to be written upon.
The slate contains instructions concerning what types of things to seek out: any changes, brightness, colours, and sounds; simple then complex patterns; voices and faces. Thus the infant is born with a set of well-defined behavioural and perceptual predispositions. No event in its evolutionary history is too small to influence the child's ability to investigate and absorb the physical world around it. The pre-tuned senses of the child are guided through the chaos of light and sound by a repertoire of behaviours which are the result of millions of years of unrelenting selective pressure. Whilst most of the decisions about which physical variables to attend to have been taken in evolutionary pre-history, a certain amount of flexibility remains in each individual. An infant thrusts its senses into the mire of light and sound like a fisherman's net into a turbulent sea. And like a net, the infant's senses are designed to ignore certain types of events, and to drag others up for closer inspection. By degrees, the unfettered
kaleidoscopic chaos is beaten into a form of order, but a particular type of order, which specifies that temporally and spatially coherent sets of events must be attended to, whilst unchanging events can be ignored. Like frost on water, the senses descend upon an indifferent orgy of events, locking them into place; each frosted frond captures an iota of entropy, and then becomes a seed for further branching into the diminishing details of disorder. And so the physical world is made to yield, first, in coarse blobs of light and sound, and finally, in tight bundles of rainbowed tapestry.

2 Brain Wanted: No Metaphors Need Apply

It is a great benefit to be able to absorb new knowledge by a process of analogy with old knowledge. This metaphorical mode of thinking allows us to abstract the concept of distance from the separation between two tangible objects, motion from the change in that distance, acceleration from a change in that change. It is only after such abstractions had been accomplished that Newton was able to provide a compelling answer to the question: "What governs the motions of the heavenly bodies?". It is a great handicap to be able to absorb new knowledge by a process of analogy with old knowledge, if the new knowledge is (and it almost always is) incomplete. For instance, it has often been stated that the brain can be understood in terms of the modern computer. Whilst this is true in some respects, the most interesting aspects of brains (e.g. adaptation, habituation, and learning) cannot be so understood. The trick is to find an analogy in the old knowledge that adequately represents the behaviour of those components of the new knowledge which are deemed to be important. This is the hard part of any scientific endeavour (see [17]). Insightful thinkers, such as Newton, are able to make use of their analogies (e.g. between a falling apple and an orbiting planet) by recognising some deep property that both the old and new knowledge share, and then driving a wedge between the old and the new by rigorously exploring which other properties they do not share. The old knowledge we have about the rigid, non-adaptive, atomistic, and stylized processes of the modern computer is unlikely to tell us anything useful about the new knowledge we have about the dynamic, adaptive, and distributed processes characteristic of brains. So how are we to think about brains? What old knowledge will allow us to think usefully about how brains might work? It has always been the case that the currently most complex technology has been used as a metaphor for how brains might work.
The dominant mode of thought throughout the industrial revolution dictated that any theory of human behaviour had to have a mechanical basis. The clock-makers of the 18th century became so skilled in their trade that they created dancing, talking automata which "possessed an uncanny reality and were therefore called androids..." [7], page 55. With the advent of electricity and, in particular, the telephone, brains were commonly described in terms of the workings of an electro-mechanical telephone exchange, with all its switches (neurons) relaying messages on their way from one place (sense organ) to another (muscle fibre). After the telephone, the computer became the favourite metaphor for the brain. And last on the stage of metaphors is artificial life (Alife), with its promises of robust, adaptive, `situated', dynamical systems. Why should any researcher interested in how brains might work adopt the methods of artificial life? What indications are there that this `new' field is more than just another plausible, but inadequate, metaphor? No other physical system has had as many, nor as diverse, metaphors applied to it. It is as if the workings of the brain are chameleon-like, apparently able to take on the characteristics of the surrounding metaphor. This, alone, should tell us something about the brain: that it is, in its entirety, unlike any other physical system, and also that it shares particular sub-sets of properties with many other physical systems. This latter property is precisely what enables us to conceptualise the brain so diversely as a telephone exchange, a computer, and a connectionist system. There is little of interest that can be said of the brain that is not true in some respects, and that is not false in others. Unlike other scientific fields, such as theoretical physics, there is no single method of investigation that has proved to be an effective strategy for obtaining reliable answers. This is probably why there are as many types of brain-scientists (e.g. perceptual psychologists, neuroscientists, neuropsychologists, psychopharmacologists, zoologists, connectionists) as there are metaphors for how the brain works.
And perhaps this is a necessary diversity. The increasingly inter-disciplinary nature of brain research suggests that each of these metaphors contributes more when viewed in the context of the others than it does in isolation.

3 Evolutionary Robots: Our Hands in their Brains?

"Other maps are such shapes, with their islands and capes; But we've got our brave captain to thank" (So the crew would protest) "that he's bought us the best - A perfect and absolute blank!" [4], page 144.

One of the attractive features of the use of genetic algorithms (GAs) for evolving robots is that it appears as if the GA embodies the principle of least commitment with regard to the structure of the robot. There seems little doubt that this approach will prove superior to the modular-design approach, best exemplified in the NASA space program. The modular-design approach specifies
that a problem should be broken down into its constituent component problems, and that each problem should be solved, whereupon the entire system should work. In practice, there are limits to the size/complexity of problems solvable by this method. The frequency and nature of failures witnessed as part of NASA's space program may be a warning of the eventual triumph of system complexity over human design ingenuity. "Rather than attempting to hand-design a system to perform a particular range of tasks well, the evolutionary approach allows their gradual emergence. There is no need for any assumptions about means to achieve a particular kind of behaviour ..." [8], page 62. However, unless the robot can evolve de novo there is necessarily some degree of human intervention in designing the initial state of the robot. The designer of the robot incorporates particular capabilities into the robot. These might be infra-red `vision', hearing between 0 and 20 kHz, etc. The robot evolves from a qualitatively different starting point from that of organisms. Whereas the latter could (and did) choose to be sensitive over different ranges of many physical variables, the robot can only choose which sub-parts of a small set of physical variables to utilise. In a sense the evolving robot is in a similar position to any not-too-primitive organism: "For change to occur, a double requirement is imposed on the new thing. It must fit the organism's internal demands for coherence, and it must fit the external requirements of the environment." [1], page 158. Thus, even organisms cannot suddenly decide (in an evolutionary sense) they would like to see in the infra-red range; back in the organism's evolutionary history it was a good idea to close that perceptual door, and doing so was part of an adaptation to its environment. So it might be for robots - if they ever had that same choice, which they did not.
"GAs should be used as a method for searching the space of possible adaptations of an existing robot, not as a search through the complete space of robots" [8], page 62. Whilst both evolutionary robots and organisms evolve from a current state that implicitly precludes access to many physical variables which might otherwise be of use to them, the organism can fully justify its current state by virtue of its inherited viability. (The work of Cariani[2] is relevant to this section, but unfortunately came to my attention just before this paper went to press.) In contrast, the animat is the creation of a person with particular pretheoretical ideas about which set of doors of perception should be left ajar, and which should be firmly locked. No amount of selective pressure will allow the robot to generate the types of adaptations that an organism can generate, because the robot comes into existence in a highly differentiated form. "At first, systems ... are governed by dynamic interaction of their components; later on, fixed arrangements and conditions of constraint are established which render the system and its parts more efficient, but also gradually diminish and eventually abolish its equipotentiality." [19], page 6. There is some reason to believe that the capabilities of the evolving organism are of use to it in its current environment. In contrast, there is no a priori reason to suppose that the capabilities of an animat will allow it to fully exploit the information around it in order to execute its task.

4 The Evolution of Evolutionary Mechanisms

The genetic algorithm used to drive the artificial evolutionary process makes use of certain fixed artificial genome interpreters. That is, for a single instance of an artificial genome there exists a multiplicity of possible phenotypic (or functional) consequences, depending upon how the phenotype is generated from the artificial genetic material. (Note that an artificial genome interpreter determines how the artificial genetic material is used to generate the phenotype, whereas an artificial genetic operator determines how the artificial genetic material is transformed between generations.) The ability of any optimisation technique to find extrema on a given function depends as much upon the nature of the moves made over that function as it does upon the nature of the function itself. The genetic operators and the genome interpreter jointly constitute a move generator. If we view the artificial evolutionary process as a search over a fitness landscape in which neighbouring points are related by altering the value (allele) at a single gene locus, then each genetic operator induces a type of movement on that landscape. Note that an operator such as cross-over induces large jumps across the fitness landscape because it causes many alleles to be simultaneously altered. More precisely, each set of genetic operators, in combination with each artificial gene interpreter, defines a configuration space[15, 11] in which adjacent points are related by a single move. Thus the ability of a GA to find maxima on a given fitness landscape is determined as much by the nature of its artificial genome interpreter as by its genetic operators. Work by Lister[12] using the stochastic simulated annealing technique suggests that the optimising ability of
a given core technique can be substantially increased by utilising appropriate move generators.

Unlike artificial evolutionary systems (AESs), which make a distinction of type between the genome interpreter and the artificial genes on which it operates, natural evolutionary systems (NESs) make no such distinction. The inability of an AES to evolve new forms of genetic operators, and new means of controlling how each gene is expressed in an individual, necessarily limits its ability to generate the diversity of functional forms encountered in a NES. In contrast to an AES, in a NES the relation between a sub-set of genes in a given genome and its phenotypic realisation is not fixed. Compelling evidence that the mechanics of evolution are themselves subject to the pressures of natural selection is the appearance of sexual reproduction. Additionally, long before the mechanisms of genetics were known, D'Arcy Thompson's book[18] (first published in 1917) was instrumental in suggesting the existence of a genetic mechanism capable of imposing global continuous geometric transformations on the shape of organisms. His main contribution was to demonstrate that "..two different but more or less obviously related forms can be so analysed and interpreted that each may be shown to be a transformed version of the other" [18], page 272.

Figure 1: Carapaces of crabs: (a) Geryon; (b) Corystes; (c) Scyramathia; (d) Paralomis; (e) Lupa; (f) Chorinus. After [18].

That Thompson was essentially correct is supported by more recent studies which demonstrate that "Every one of the twenty-eight bones of the human skull has been inherited in an unbroken succession from the air-breathing fishes of the pre-Devonian seas" [6]. A genetic mechanism which alters the overall size or shape of an organism is clearly capable of modulating and coordinating the expression of multiple sets of genes in the genome. In terms of the fitness landscape of a given organism this corresponds to a large (`horizontal') movement, which nevertheless is likely to generate only a relatively small change in fitness. Thus the configuration space defined jointly by the original fitness landscape and the `topology preserving' move generator has fewer local extrema than the original fitness landscape. This, in turn, makes it more amenable to any search for high-magnitude extrema on its surface. The important point, for our purposes, is that these gene-controlling mechanisms (`meta-genes') evolved, just as the original genetic code evolved. Any meta-genetic process that generated a relatively smooth configuration space, either by controlling the expression of genes or by switching genes on and off, would inevitably ensure its own survival and incorporation into the evolutionary process. In order to enable the evolution of qualitatively new forms, new and increasingly complex meta-genetic controlling mechanisms must be capable of evolving. Thus it is not sufficient to model a small but accessible fixed set of adaptive mechanisms involving only the evolution of genetic material, and not the evolution of the mechanism that interprets that material in order to generate a phenotype.
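The distinction between genetic operators and genome interpreters can be made concrete in code. The sketch below is a minimal, hypothetical illustration (none of the names or parameter values come from this paper): the same genome and the same operators yield different effective landscapes depending on the interpreter, and a single `meta-gene' that scales every trait produces the kind of large-but-coordinated move described above.

```python
import random

GENOME_LEN = 8

def mutate(genome, rate=0.1):
    """Genetic operator: perturb each allele with a small probability."""
    return [g + random.gauss(0, 0.1) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    """Genetic operator: one-point crossover, a large jump on the landscape."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def interpret_direct(genome):
    """Interpreter 1: each gene is read off as one phenotypic trait."""
    return genome

def interpret_scaled(genome):
    """Interpreter 2: gene 0 acts as a 'meta-gene' scaling every other trait,
    mimicking a global size/shape transformation (cf. D'Arcy Thompson)."""
    scale = genome[0]
    return [scale * g for g in genome[1:]]

def fitness(phenotype, target):
    """Toy fitness: negative squared distance to a target phenotype."""
    return -sum((p - t) ** 2 for p, t in zip(phenotype, target))

def evolve(interpret, target, pop_size=30, generations=100):
    """A bare-bones GA: truncation selection, crossover, then mutation."""
    pop = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: fitness(interpret(g), target), reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(crossover(random.choice(parents),
                                          random.choice(parents)))
                         for _ in range(pop_size - len(parents))]
    return max(fitness(interpret(g), target) for g in pop)
```

With `interpret_scaled`, mutating gene 0 alone moves every phenotypic trait at once, a large `horizontal' move on the landscape; with `interpret_direct` no single-gene move can do this, even though the genome and the operators are identical.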

5 Learning To Evolve

The single available instance of a successful evolutionary system (i.e. evolution on Earth) suggests that, even if the opportunity exists for the evolution of new genetic mechanisms, their appearance takes many millennia. It is thought that bacteria evolved some 3500M (million) years ago. The first sexual eukaryotic single cell (a cell with a nucleus, and discrete organelles such as mitochondria) appeared some 1800M years ago. Thus about half of the time that life has existed on Earth was required to generate the first sexual reproduction. Once this evolutionary threshold had been breached the evolutionary process was accelerated substantially, and some 1000M years ago sponges, the first multicellular organisms with differentiated cell types, appeared. The accelerated evolution associated with sexual reproduction generated the first simple nervous systems, in the form of jellyfishes, around 650M years ago, 350M years after the sponges appeared. However, in terms of the total accelerated evolution time, more than one half of the time between the first unicellular sexual reproduction and the present was required to evolve the first simple nervous system.

Once nervous systems became sufficiently plastic it is likely that the Baldwin effect further accelerated the evolutionary process. (The effect of learning on behaviour ensures that, rather than a given genome representing a fixed point in the space of possible phenotypes, it represents any point within a region around that fixed point.) It was then a `mere' 250M years before the first vertebrates evolved. The celebrated conquest of the land by vertebrates occurred just 300M years after the first nervous system evolved. This type of analysis, in terms of the relative time between landmark changes in evolutionary history, suggests that, in its broadest context, evolution is almost static for relatively long periods, and is interrupted by `sudden' changes that essentially alter the fundamental nature of the pre-existing evolutionary process. Moreover, such long periods of stasis suggest that, in spite of the large number of degrees of freedom accessible, over long periods of time, to an evolving physical system, the types of changes induced during periods of change are non-trivial. In summary, even in an evolving system with many degrees of freedom and with more time than any computer, evolution generates qualitative changes only rarely. As stated above, about one half of the time between the first unicellular sexual reproduction and the present was required to evolve the first simple nervous system. The other half of the time bridged the gap between this first nervous system and the human brain. The complexities of the adaptive learning processes of even the simplest nervous systems are not well understood, and if a robot were endowed with even the visual abilities of a bumblebee (bees first appeared about 300M years ago) it would be rich indeed. Given this, admittedly circumstantial, evidence, just how much should we expect of an evolutionary robot?
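The Baldwin effect can be caricatured numerically. In this hypothetical sketch (the landscape and learning radius are chosen purely for illustration), lifetime learning lets each genotype realise the best phenotype within a small neighbourhood of its innate one, so selection effectively sees a smoothed landscape:

```python
from math import sin

def innate_fitness(x):
    """A rugged one-dimensional fitness landscape (purely illustrative)."""
    return -(x - 2.0) ** 2 + 0.5 * sin(20 * x)

def learned_fitness(x, radius=0.2, samples=20):
    """Fitness credited to genotype x when learning can reach any phenotype
    within `radius` of the innate one: the genome now stands for a region of
    phenotype space, not a single point."""
    step = 2 * radius / samples
    candidates = [x - radius + i * step for i in range(samples + 1)] + [x]
    return max(innate_fitness(c) for c in candidates)
```

Because `learned_fitness` takes a maximum over a neighbourhood, it is everywhere at least as large as `innate_fitness`, and ripples narrower than the learning radius are flattened out, which is one way of seeing how plastic nervous systems could accelerate the evolutionary process.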

6 Evolving To Learn

There are certain common computational primitives (e.g. learning, habituation) which are possessed by most organisms, and which may have evolved in such a way as to allow each to take advantage of a generic physical environment. "... there appear to be no fundamental differences in structure, chemistry or function between the neurons and synapses in man and those of a squid, a snail or a leech." [9], page 29. These primitives enable simple organisms to learn perceptual and motor skills using a type of learning mechanism that is universal throughout the animal kingdom. This is not intended as a denial that species-specific types of learning occur, but rather a claim that all learning is based upon certain fundamental principles of neuronal interaction. Once such a fundamental learning capability is possessed, the evolution of pre-dispositions for learning particular perceptual/motor skills within an animal's environment is a relatively simple matter. Circumstantial evidence of the generality of learning mechanisms in the brain is given by results which suggest that cells in the auditory cortex can be induced to generate visual receptive fields by re-routing thalamic visual outputs from visual to auditory cortex[14].

Just as the evolution of sexual reproduction, and of the first multi-cellular organism, were protracted affairs, so the evolution of learning mechanisms took a long time. That is, relative to the quantitative alteration of a pre-existing characteristic, a qualitative alteration, such as learning, is likely to have taken a long time. Once such an evolutionary threshold had been breached, even the most primitive learning ability endowed such advantages that its continued (probably accelerated) evolution was assured. The point is that only the `simple' learning abilities (habituation/sensitisation) of the most primitive organisms (e.g. the sea-slug, Aplysia) are understood to any extent, and even these are known to be implemented by complex processes[10]. Given the complexity of simple learning in neuronal systems, and the relative poverty of learning abilities in artificial learning systems (e.g. artificial neural networks), is it reasonable to expect an evolving animat to be able to learn general perceptual/motor skills?
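To make the contrast concrete, habituation and sensitisation, the `simple' learning abilities mentioned above, can be captured at the behavioural level by an almost trivially small model. This is a hypothetical sketch of the phenomena only, not of the molecular machinery described by Kandel and Hawkins[10]; all parameter values are arbitrary:

```python
class Synapse:
    """Toy single-synapse caricature of habituation and sensitisation."""

    def __init__(self, weight=1.0):
        self.weight = weight

    def habituate(self, decay=0.7):
        # Repeated benign stimulation progressively weakens the response.
        self.weight *= decay

    def sensitise(self, boost=1.5, ceiling=3.0):
        # A noxious stimulus strengthens the response, up to a ceiling.
        self.weight = min(self.weight * boost, ceiling)

    def response(self, stimulus=1.0):
        return self.weight * stimulus
```

Even this caricature shows why the caution above is warranted: the real Aplysia circuit implements these two response curves with distinct presynaptic and modulatory mechanisms, none of which is captured here.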

7 Learning As Adaptive Behaviour

"Even if we knew down to the last molecular detail what goes on inside a living organism, we should still be up against the fact that a living system is an organized whole which by virtue of the distinctive nature of its organization shows unique forms of behaviour which must be studied and understood at their own level, for the significance of all living things depends on this." [16], page 148. (Italics added.)

Learning in animals is traditionally classified into one of the following categories: 1) habituation/sensitisation, 2) classical (Pavlovian) conditioning, and 3) associative or reinforcement learning. Note that these classifications assume that learning constitutes an adequate description of the goal of the system; that the functional significance of the observed behaviour is immanent in the process of learning. This view of learning as a goal in itself is unlikely to yield an account which can be used to construct models in which learning is part of a more general set of adaptive behaviours. A theory of learning that does not include an account of the underlying computational problem being addressed by the learning system is unlikely to yield useful insights into how to construct models which can solve computational problems (such as learning to walk across a room). It can be argued that such a theory is not so much a theory of learning as a theory of adaptive behaviour. That is, a theory which seeks to explain not only how learning occurs, but why it is desirable that it should occur at all. I can find no grounds for disagreeing with this argument, and would go so far as to say that a theory of learning is interesting only insofar as it is part of a theory of adaptive behaviour.
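Of the three categories above, classical conditioning has a standard formalisation, the Rescorla-Wagner rule, which also illustrates the point being made: the rule describes how an association changes, but says nothing about what computational problem the association solves. (The function below is a conventional textbook sketch, not something taken from this paper.)

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength V after a sequence of conditioning trials.
    Each trial is True when the conditioned stimulus is paired with the
    unconditioned stimulus, False on extinction trials. `alpha` is the
    learning rate and `lam` the maximum supportable association."""
    v = 0.0
    for reinforced in trials:
        target = lam if reinforced else 0.0
        v += alpha * (target - v)   # learn in proportion to prediction error
    return v
```

Acquisition drives the strength towards `lam` and extinction drives it back towards zero, reproducing the familiar negatively accelerated learning curves, yet nothing in the rule says why forming such associations is adaptive.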

The point being made here is sufficiently similar to that made by Marr in a slightly different context to justify applying his cogent criticisms to learning theories in general: "They may be able to say that such and such an 'association' seems to exist, but they cannot say of what the association consists, nor that it has to be so because to solve problem X... you need a memory organised in such and such a way; and that if one has it, certain apparent 'associations' occur as a side-effect." [13], page 47. Given the complexity of mechanisms underlying the simplest forms of learning, it seems unlikely that similar mechanisms would evolve within an AES. Just as one would not expect an animat to evolve a brain of any sort if it did not have any artificial neurons, so one would not expect an animat to learn to `see' if it did not have certain fundamental learning abilities. Thus, certain functional pre-requisites are required in order to be able to evolve other, more complex, functional characteristics. Whilst it is a simple matter to provide an animat with artificial neurons, and thereby facilitate the evolution of a functional `brain', it is not so easy to provide it with generic learning capabilities, because these are simply not sufficiently well understood to allow this. It is for this reason that I believe that our understanding of certain fundamental characteristics of living systems (via computational modelling of the development of perceptual processes) must achieve a certain level by studying these characteristics in relative isolation, before they can be fully understood as an integral part of that system.

Acknowledgements: Thanks to Raymond Lister and Inman Harvey for useful discussions, and to Stephen and Phoebe Isard for comments. The author is supported by a JCI grant awarded to J Stone, D Willshaw and T Collett. This work was initiated whilst the author was at the University of Wales, Aberystwyth, Wales.

References

[1] G Bateson. Mind and Nature. Flamingo, 1979.
[2] P Cariani. Implications from structural evolution: semantic adaptation. Proc. Int. Joint Conference on Neural Networks, Washington DC, pages 47-51, 1990.
[3] M Carlson, D Hubel, and T Wiesel. Effects of monocular exposure to oriented lines on monkey striate cortex. Brain Research, 390:71-78, 1986.
[4] L Carroll. Rhyme? And Reason? Macmillan and Co., London, 1897.
[5] P Eimas, ER Siqueland, PW Jusczyk, and J Vigorito. Speech perception in infants. Science, 171:303-306, 1971.

[6] WK Gregory. Our Face from Fish to Man. Hafner, New York, 1967.
[7] Mary Hillier. Automata and Mechanical Toys. Bloomsbury Books, London, 1988.
[8] P Husbands, I Harvey, and D Cliff. An evolutionary approach to situated AI. In Prospects for AI, Proc. AISB93, Birmingham, England, Sloman A, Hogg D, Humphreys G, Ramsay A (Eds), pages 61-70, 1993.
[9] E Kandel. Small systems of neurons. Scientific American, pages 29-38, September 1979.

[10] E Kandel and R Hawkins. The biological basis of learning and individuality. Scientific American, September 1992.
[11] S Kirkpatrick. Configuration space analysis of travelling salesman problems. J. Physique, 46:1277-1292, 1985.
[12] R Lister. Annealing networks and fractal landscapes. In IEEE International Conference on Neural Networks, San Francisco, pages 257-26, 1993.
[13] D Marr. AI: A personal view. Artificial Intelligence, 9:37-48, 1977.
[14] AW Roe, SL Pallas, JO Hahm, and M Sur. A map of visual space induced in primary auditory cortex. Science, 250(4982):818-820, 1990.
[15] SA Solla, GB Sorkin, and SR White. Configuration space analysis for optimization problems. In Disordered Systems and Biological Organization, Bienenstock E (Ed.), Springer-Verlag, pages 283-293, 1986.

[16] G Sommerhoff. The abstract characteristics of living systems. In Systems Thinking, Emery F (Ed), pages 147-202, 1969.
[17] JV Stone. Computer vision: What is the object? In Prospects for AI, Proc. Artificial Intelligence and Simulation of Behaviour, Birmingham, England, IOS Press, Amsterdam, pages 199-208, April 1993.
[18] D'Arcy Thompson. On Growth and Form. Cambridge University Press (first published 1917), 1961.
[19] L von Bertalanffy. General system theory. In Yearbook for the Advancement of General System Theory, pages 1-10, 1956.