Avalanches in Out of Equilibrium Systems: Statistical Analysis of Experiments and Simulations

A thesis dissertation by:

Jordi Baró i Urbea for the degree of Doctor in Physics

Departament d’Estructura i Constituents de la Matèria. Universitat de Barcelona

Avalanches in Out of Equilibrium Systems: Statistical Analysis of Experiments and Simulations

A thesis presented by

Jordi Baró i Urbea to qualify for the degree of Doctor in Physics

Thesis advisor and tutor:

Eduard Vives i Santa-Eulàlia

This thesis was registered within the Ph.D. program in Physics, in the Condensed Matter Physics research line, at the Departament d’Estructura i Constituents de la Matèria of the Universitat de Barcelona. It has been supervised by Dr. Eduard Vives i Santa-Eulàlia, associate professor at the same department.

Jordi Baró i Urbea

Eduard Vives i Santa-Eulàlia


This thesis has been possible thanks to the pre-doctoral fellowship ‘Formación de Personal Investigador’ from the ‘Ministerio de Economía y Competitividad’ (FPI-MINECO), as part of the research project MAT2010-15114.

Acknowledgements / Agraïments

Once the arduous task of summarizing these four years of research is completed, it is time to remember all the people who helped me during those years. This work would have been impossible without the help of my advisor Eduard. His guidance, advice and methodology have shaped the researcher that I am today. No minor role was played by Antoni Planes: a mentor in all matters of physics, science and life. Thanks to all the partners of the research group and co-authors. Special thanks to Alvaro Corral, Jose Maria Martin Olalla, Ekhard Salje, Xavier Illa, Pedro Castillo and Daniel Soto for so many hours of discussions. To all the colleagues met at so many seminars, conferences and workshops: I have learned a lot from you in all sorts of interdisciplinary fields. To the Benasque people, who do not miss any event within the borders.

To those who helped me feel at home in Urbana-Champaign. Thanks to Karin Dahmen for those amazing three months in her research group and for her kindness and hospitality. To Monica, Adolf and sons, and Montse, Verne and daughters: thanks for Thanksgiving 2013! To Fernando, Iker and the soccer-fan philosophers. To the people from the Institute of Condensed Matter: the administrative staff and my office partners Mike Le-Blanch and Fei Sun. To Mike Weissman and the rest who shared so many stomach- and brain-filling meals. Also to the hiking group of that freezing Saturday. Thanks to the co-authors and collaborators from other departments: Waltroud Kriven, Pathikumar Sellappan and Amy Wagonner.

To my office and lab partners during the last four years, for their support and sporadic advice: Pau Bitlloch, Pol Llovers, Laura Casanelles, David(s) (Frigola, Gonzalez, Oriola and Palau), Ricard Alert, Carles Blanch, Xavier Clotet, Guillermo Rodriguez, Eli Tibau, Claudia Trejo, Enric Stern and Ruben Millan. I extend my gratitude to the rest of the department and administrative staff.
Also to the members of the different ’pachanga’ teams formed during these years, for so many stress-relieving football matches.

On a personal level, I want to thank my lifelong friends from the Projecte for their support: Arnau, Nuria and Carla, plus those from the Costa and the bar. I owe you time to see each other more often! And so many others from Physics with whom I have shared so much during the degree, the master’s and these four years. To the companions of so many lunches: Malu, Clara, David, Adri, Aleix, Toni, Carles, Marc, Manu and company.

Above all, thanks to my family. To my mother Marisa, my father Xavier and my siblings Xavi, Ariadna and Janinna, to Ljuba and Edgard, and to my nephews and nieces Elisenda, Andreu and Charlie. Thanks to Isaura for giving me strength in good times and bad. With you by my side everything is much easier.

Barcelona, March 25, 2015


Summary

Instead of showing a smooth, linear evolution, many physical systems react to external stimuli through avalanche dynamics. When a disorder-dominated system is externally driven out of equilibrium, its internal variables evolve locally and inhomogeneously through adiabatically fast collective processes that we call avalanches. Avalanche dynamics are characterized by the concatenation of sudden jumps of transformation and intervals of quiescence. Avalanches are associated with the transformation of spatial domains both at small scales, where high-precision instrumentation is needed for their measurement, and at large scales, in catastrophically violent phenomena such as earthquakes or solar flares. The phenomenology of avalanche dynamics is not restricted to condensed matter: it also appears in interdisciplinary areas such as transaction prices in financial markets, the cascades of signals produced in a neural network, or biological evolution. One of the peculiarities of avalanche dynamics is the scale invariance present in many cases, associated with a phenomenon of criticality. The physics observed at a so-called critical point is the same at all scales of observation. Some of these phenomena may share certain empirical laws (known as stylized facts in some interdisciplinary fields) and can be grouped into Universality Classes, which may allow complex systems to be reduced to simpler mathematical models. Criticality in avalanche dynamics is known as ‘crackling noise’, since it is characterized by a noise containing long events of variable intensity. Scale invariance makes the avalanche distributions follow power laws, determined by the so-called critical exponents.
It is difficult, a priori, to predict the behaviour of avalanches and to characterize their statistical parameters from a power law, since its moments carry little information. For some decades, maximum-likelihood techniques have been the standard for determining critical exponents. Even so, one must take into account that, whether for physical or instrumental reasons, scale invariance can only be established within a certain range of magnitudes. In order to determine the boundaries of scale invariance, both in experimental systems and in numerical models, we have developed a methodology that evaluates critical exponents by exploring the space of possible magnitude ranges. The so-called Maximum Likelihood Exponent Maps (MLEM) have been tested on synthetic data before being applied to experimental results. Another fundamental aspect for understanding and estimating the hazards related to avalanche processes is the determination of occurrence probabilities within time intervals. For this purpose we have adopted a set of analytical techniques that allow avalanches to be studied as events in a so-called point process: an ordered sequence of instantaneous events. The intensity, or probability that events occur at a given instant, can be determined by phenomena external to the process (exogenous effects) or by the history of the process itself (endogenous effects). Historically, one of the most studied properties has been the distribution of waiting times between avalanches. Such studies have revealed a new class of scale invariance in the case of earthquakes: the so-called Unified Scaling Law (USL), which we have corroborated in other experimental systems. On the other hand, the so-called Bi test allows us to determine whether a point process contains endogenous effects; if we identify certain scale-invariant patterns, we can fit a particular type of point process (the Epidemic Type Aftershock Sequence model, ETAS) through a method we have named the Mean Aftershock Sequence (MAS).
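The notion of an intensity that modulates when events occur can be illustrated with a short sketch (not code from the thesis): a non-homogeneous Poisson process sampled by random thinning, one of the standard constructions for point processes. The sinusoidal rate function and all parameter values are made-up examples.

```python
import math
import random

def thinned_poisson(rate, rate_max, t_end, rng):
    """Sample a non-homogeneous Poisson process by random thinning
    (Lewis-Shedler): draw candidate events from a homogeneous Poisson
    process of rate rate_max, and keep a candidate at time t with
    probability rate(t) / rate_max."""
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)  # candidate inter-event time
        if t >= t_end:
            return events
        if rng.random() < rate(t) / rate_max:
            events.append(t)

rng = random.Random(1)
# hypothetical intensity: a sinusoidally modulated rate bounded by rate_max = 2
events = thinned_poisson(lambda t: 1.0 + math.sin(t), 2.0, 1000.0, rng)
waiting_times = [b - a for a, b in zip(events, events[1:])]
```

The same thinning construction generalizes to any bounded intensity, including history-dependent (endogenous) ones, which is why it is a convenient baseline when testing for correlations between events.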
With the help of the techniques for determining critical exponents and their scale-invariant ranges (the MLEM), and of the point-process properties governing the evolution of the avalanche dynamics, we have analysed the properties of the crackling noise generated in different experimental systems and numerical simulations. The mechanical failure of heterogeneous materials under uniaxial compression is still an open problem, due to the complexity of the phenomena involved. In general, the failure process occurs through a sequence of fractures and sudden displacements that can be detected by means of acoustic emission techniques. The mechanisms generating these avalanche dynamics are very similar to those originating earthquakes, but act at a scale many orders of magnitude smaller. For this reason we call these acoustic signals ‘earthquakes within the laboratory’, or labquakes. We have performed compression experiments on samples of several porous materials, recording the avalanches as acoustic signals in a temporal sequence that we have analysed as a point process. The results presented in this thesis correspond to: samples of a commercial porous glass called Vycor® (SiO2); natural samples of goethite ore (FeO(OH)) obtained from an outcrop in New Caledonia; and pure samples of artificially sintered alumina (Al2O3). All the examined materials exhibit scale-invariant statistics within a certain range of magnitudes and often present endogenous production of avalanches. In the case of Vycor, the similarity with the statistical laws of earthquakes is remarkable, and the parameters of the ETAS model could be fitted for its description. The so-called martensitic transformations are due to a structural or magneto-structural phase transition that endows certain materials with a set of properties highly valued in industry. It is known that martensitic transformations proceed through an avalanche process of the crackling-noise type. Transformation avalanches can be detected either through high-sensitivity calorimetry or through acoustic emission techniques. When the transformation involves a change in magnetic properties, the avalanches can also be detected through a system of coils recording the magnetic (Barkhausen) noise. The results of the point-process analysis have revealed scale invariance and, again, the presence of endogenous production, although it has not been possible to associate it with any specific model. Finally, the methods have been used to determine the properties of the Random Field Ising Model (RFIM), considered the prototypical model of out-of-equilibrium avalanche dynamics. Thanks to the MLEM we have been able to verify the adequacy of an avalanche classification scheme that allows the critical point to be studied through Finite Size Scaling (FSS) techniques. We have not observed that the model is able to generate endogenous production of avalanches, casting doubt on whether crackling noise is a sufficient condition for this phenomenology.
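The zero-temperature metastable dynamics of the RFIM mentioned above can be sketched in a few lines. This is a purely illustrative toy on a 1-D ring, not the efficient large-scale 3-D code of the thesis: each spin carries a quenched Gaussian random field, the external field is raised adiabatically until the least stable down spin flips, and the flip propagates to neighbours at fixed field, forming one avalanche.

```python
import random

def rfim_avalanches(n, disorder, seed=0, J=1.0):
    """Adiabatic half hysteresis loop of the zero-temperature RFIM on a
    1-D ring (illustrative only). Returns the sizes of the avalanches
    recorded while sweeping the external field upward."""
    rng = random.Random(seed)
    h = [rng.gauss(0.0, disorder) for _ in range(n)]  # quenched random fields
    s = [-1] * n
    down = set(range(n))

    def local(i):  # internal field on spin i: neighbours plus random field
        return J * (s[(i - 1) % n] + s[(i + 1) % n]) + h[i]

    sizes = []
    while down:
        # raise the external field just enough to flip the least stable spin
        trigger = max(down, key=local)
        H = -local(trigger)
        stack, size = [trigger], 0
        while stack:  # relax at fixed field: this cascade is the avalanche
            i = stack.pop()
            if s[i] == -1 and local(i) + H >= 0:
                s[i] = 1
                down.discard(i)
                size += 1
                stack.extend([(i - 1) % n, (i + 1) % n])
        sizes.append(size)
    return sizes

sizes = rfim_avalanches(500, disorder=2.0)
```

Lowering the disorder parameter makes the flips of neighbouring spins more cooperative and the avalanches larger, which is the mechanism behind the snapping/popping/crackling classification discussed in Chapter 1.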

Structure of the Thesis This doctoral thesis summarizes the author’s contributions to research on avalanche dynamics. It was carried out within the Ph.D. program in Condensed Matter Physics at the Universitat de Barcelona between 2011 and 2015. The chapters of this thesis do not follow the temporal order of development or publication, but rather offer a unified view of the numerical and experimental results obtained in a series of parallel research works, justifying both the analytical procedures and the interest of the results. Besides presenting results, this manuscript aims at a pedagogical approach, and most of the theoretical material is self-contained and accessible to an undergraduate student. Under this premise, the first three chapters introduce the reader to the topic of avalanche dynamics (Chapter 1), point processes (Chapter 2) and the specific case of seismology as an example of avalanche dynamics interpreted as a point process (Chapter 3). The reader specialized in the subject may go directly to Parts I and II of the manuscript. In addition, Chapter 6 includes a brief introduction to acoustic emission and fracture processes, Chapter 9 introduces martensitic transformations, and Chapter 10 presents the Random Field Ising Model.

Part I of the manuscript, formed by Chapters 4 and 5, describes in depth the statistical tools used for the analysis of the numerical and experimental data. The author has developed several improvements to existing techniques, such as the Maximum Likelihood Exponent Maps (Chapter 4), the Bi test adapted to delayed Poisson processes, and the normalized Mean Aftershock Sequence technique (Chapter 5). Part II of the manuscript presents the results obtained by applying the techniques introduced in Part I to different processes with avalanche dynamics. Most of the work focuses on acoustic emission experiments, including the labquakes (described in Chapter 6) produced in different porous materials and analysed in Chapters 7 and 8. Besides the statistical analysis, the author carried out part of the experiments. The experimental contribution was smaller in the case of the structural and magneto-structural transitions presented in Chapter 9, for which the author focused on the filtering and analysis of the data and the subsequent discussion and presentation of the results. One of the goals of this work has been the development of an efficient code and the massive simulation of the Random Field Ising Model in large systems. Chapter 10 presents the analysis of its criticality by means of the MLEM and the study of correlations through an adapted MAS technique and the Bi test. The conclusions of the work are summarized in Chapter 11, followed by a list of the author’s publications, including those presented in this thesis and others. A general bibliography is included at the end of the document, containing the references cited in the text, ordered alphabetically by the first author’s surname.


Abstract

Instead of showing a linear and smooth evolution, many physical systems react to external stimuli through avalanche dynamics. When an out-of-equilibrium system governed by disorder is externally driven, the evolution of its internal variables is local and non-homogeneous: it proceeds through adiabatically fast collective processes known as avalanches. Avalanche dynamics are associated with the transformation of spatial domains at different scales: from microscopic domains, detectable only with high-precision devices, to large catastrophic events such as earthquakes or solar flares. Avalanche dynamics are also an interdisciplinary topic, involved in the return prices of stock markets, the signalling in neural networks, or biological evolution. Many avalanche dynamics are characterised by scale invariance, a trademark of criticality. The physics at a so-called critical point is the same at all observational scales. Some avalanche dynamics share empirical laws (known as stylized facts in other disciplines) and can define Universality Classes, reducing the complexity of systems to simpler mathematical models. Scale invariance causes avalanche distributions to be power laws, determined by the so-called critical exponents. Since the statistical moments of power laws carry little information, it is difficult to predict the behaviour of avalanche dynamics. A few decades ago, maximum-likelihood techniques became the standard for determining critical exponents. Whether for physical reasons or instrumental resolution, scale invariance can only be found within a certain magnitude range. The Maximum Likelihood Exponent Maps (MLEM) are a methodology developed to determine both the critical exponent and the boundaries of the scale-invariant regime. A fundamental aspect in assessing the hazards related to avalanche dynamics is determining the event probabilities within a temporal interval.
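As a sketch of the maximum-likelihood step underlying the MLEM (a simplified stand-in, not the thesis implementation), the exponent of a continuous power law restricted to an interval [x_min, x_max] can be estimated by maximizing the log-likelihood over candidate exponents; in the full method this fit would then be repeated over the whole space of (x_min, x_max) pairs to map out the scale-invariant regime.

```python
import math
import random

def sample_power_law(eps, xmin, xmax, n, rng):
    """Inverse-CDF sampling of p(x) proportional to x**(-eps)
    on [xmin, xmax], for eps != 1."""
    a, b = xmin ** (1.0 - eps), xmax ** (1.0 - eps)
    return [(a + rng.random() * (b - a)) ** (1.0 / (1.0 - eps)) for _ in range(n)]

def log_likelihood(eps, data, xmin, xmax):
    """Log-likelihood of a power law with exponent eps bounded to [xmin, xmax]."""
    norm = (1.0 - eps) / (xmax ** (1.0 - eps) - xmin ** (1.0 - eps))
    return len(data) * math.log(norm) - eps * sum(math.log(x) for x in data)

def fit_exponent(data, xmin, xmax):
    """Grid search over candidate exponents; the likelihood derivative is
    monotonous in eps, so a simple scan is enough for a sketch."""
    grid = [1.0 + 0.001 * k for k in range(1, 2500)]  # skip eps = 1 exactly
    return max(grid, key=lambda e: log_likelihood(e, data, xmin, xmax))

rng = random.Random(7)
data = sample_power_law(1.7, 1.0, 1000.0, 20000, rng)
eps_hat = fit_exponent(data, 1.0, 1000.0)
```

With a known generating exponent of 1.7, the recovered estimate concentrates around that value, which is the kind of synthetic-data check performed before applying the maps to experiments.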
To this purpose we have adopted several analytical techniques to study avalanches as events in a point process: an ordered series of instantaneous events. The intensity of the point process can be determined by external phenomena (exogenous effects) or by the history of the process itself (endogenous effects). Historically, the distribution of waiting times is one of the most studied properties of avalanche dynamics. Such studies have shown the existence of a new scale invariance in the case of earthquakes, the so-called Unified Scaling Law (USL). We have found similar scaling behaviours in other experimental systems. On the other hand, we used the so-called Bi test to identify endogenous effects in a given point process. Such effects can sometimes be fitted to the so-called Epidemic Type Aftershock Sequence (ETAS) model if certain patterns are identified by the method of the Mean Aftershock Sequence (MAS). These techniques have been used to analyse different systems exhibiting avalanche dynamics. The mechanical failure of heterogeneous materials under uniaxial compression is caused by a sequence of fractures and displacements that can be detected by acoustic emission techniques. These mechanisms are very similar to the ones originating earthquakes, but act at much smaller scales. We performed compression experiments on samples of porous materials while recording the acoustic emission. This work discusses the results obtained with a mesoporous silica glass (SiO2), samples of natural goethite (FeO(OH)) ore, and artificially sintered alumina (Al2O3). Scale invariance and endogenous effects have been identified in these samples. Furthermore, the silica glass has shown strong analogies with the statistical laws of earthquakes. Martensitic transformations are a kind of structural phase transition found in several materials sometimes referred to as Shape Memory Alloys. Such transformations take place through an avalanche process close to criticality that can be detected by high-sensitivity calorimetry, acoustic emission, or magnetic (Barkhausen) noise in the case of magneto-structural transitions. Our statistical analysis has also revealed scale invariance and the presence of endogenous effects. Finally, the methodology has been applied to the numerical results of metastable loops in the Random Field Ising Model (RFIM).
The MLEM have been used to verify a classification method for spanning avalanches, useful to perform finite-size scaling. The model does not manifest endogenous production of avalanches. Hence, criticality in avalanche dynamics may not be a sufficient condition for this phenomenology.
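The self-exciting dynamics of the ETAS type discussed above can be sketched with a toy cluster (branching) construction; the parameter values below are made up for illustration and are not the calibrated fits of the thesis. Background (exogenous) events arrive as a homogeneous Poisson process; every event of magnitude m triggers on average K·exp(alpha·m) direct aftershocks with Omori-law delays proportional to (t + c)**(-p), and magnitudes follow an exponential Gutenberg-Richter law.

```python
import math
import random

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for small means)."""
    limit, k, prod = math.exp(-lam), 0, rng.random()
    while prod > limit:
        k += 1
        prod *= rng.random()
    return k

def simulate_etas(mu, K, alpha, c, p, beta, t_end, seed=0):
    """Toy ETAS catalogue via the cluster representation: generate the
    background events, then recursively spawn aftershocks of aftershocks."""
    rng = random.Random(seed)
    queue, t = [], 0.0
    while True:  # exogenous background events, rate mu
        t += rng.expovariate(mu)
        if t >= t_end:
            break
        queue.append(t)
    events = []
    while queue:
        t0 = queue.pop()
        m = rng.expovariate(beta)  # Gutenberg-Richter magnitude
        events.append((t0, m))
        for _ in range(poisson(K * math.exp(alpha * m), rng)):
            u = rng.random()  # inverse-CDF Omori-law delay, valid for p > 1
            dt = c * ((1.0 - u) ** (-1.0 / (p - 1.0)) - 1.0)
            if t0 + dt < t_end:
                queue.append(t0 + dt)
    return sorted(events)

# subcritical choice: branching ratio K * beta / (beta - alpha) = 0.5 < 1
catalogue = simulate_etas(mu=0.2, K=0.3, alpha=0.8, c=0.01, p=1.2,
                          beta=2.0, t_end=500.0)
```

Keeping the branching ratio below one guarantees that the cascade of endogenous events stays finite, the same stationarity condition that constrains the fits of the model to data.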

Structure of this Manuscript The following work constitutes a Ph.D. thesis dissertation summarizing the contributions of the author to research on avalanche dynamics. It was carried out within the Ph.D. program in Condensed Matter Physics of the Universitat de Barcelona between 2011 and 2015. The chapters of the manuscript do not follow the temporal sequence of publications nor a linear train of thought, but offer a unified review of the numerical and experimental results found in a series of parallel research works. Apart from presenting results, this work attempts a pedagogic scope, and most of the theoretical background is self-contained and accessible to undergraduate students. To this purpose, the first three chapters introduce the reader to the topic of avalanche dynamics (Chapter 1), point processes (Chapter 2) and the specific case of seismology as an example of avalanche dynamics interpreted as a point process (Chapter 3). Although the statistical analyses presented in the figures of Chapter 3 were developed by the author, the topic of earthquakes lies outside our theoretical background and serves only as a reference frame, both for the development of the analytical tools described in detail in Part I and for the analogies with the experimental results. A specialist reader can skip the introductory chapters. Furthermore, Chapter 6 includes a brief introduction to acoustic emission tests, Chapter 9 to shape memory alloys and structural phase transitions, and Chapter 10 to the Random Field Ising Model.

Chapters 4 and 5 constitute Part I of the original work. They represent an in-depth description of the statistical tools used for the analysis. The author has introduced modifications to existing procedures, namely a) the Maximum Likelihood Exponent Maps as an improvement over standard maximum-likelihood techniques (Chapter 4), b) the Bi test adapted to delayed Poisson processes, and c) the normalized Mean Aftershock Sequence technique (Chapter 5). Part II of the original work presents the results obtained from the techniques introduced in Part I, applied to several empirical and numerical avalanche processes displaying crackling noise. Most of the work developed during the Ph.D. program focuses on acoustic emission experiments, including the so-called labquakes described in Chapter 6, performed on different porous materials and analysed in Chapters 7 and 8. Apart from the statistical analysis, a number of the presented labquake experiments were also performed by the author. The experimental contribution was minor in the experiments on structural and magneto-structural phase transitions presented in Chapter 9. In this case, the author is responsible for the data filtering, the analysis, and the subsequent discussion and presentation of the results. The Random Field Ising Model is considered a prototype for avalanche dynamics. One of the main goals of this work was the development of an efficient code and massive simulations to study the properties of the model for large systems with high statistics close to the critical point. Chapter 10 presents an analysis of criticality by means of the Maximum Likelihood Exponent Maps and a study of correlations by means of an adapted Mean Aftershock Sequence technique and the Bi test. Concluding remarks are summarized in Chapter 11, followed by a list of publications by the author, including those presented in this dissertation and others. A general bibliography is added at the end of the document, with the references cited in the text ordered alphabetically by first author’s surname.


Contents

1 Introduction To Avalanche Dynamics
    1.1 The Complex System Paradigm
    1.2 Metastability and Avalanche Dynamics
    1.3 Criticality Induced by Disorder
        1.3.1 Low Degree of Disorder: Snapping Noise
        1.3.2 High Degree of Disorder: Popping Noise
        1.3.3 Critical Degree of Disorder: Crackling Noise
    1.4 Determination of Critical Parameters
        1.4.1 The Renormalization Group
        1.4.2 Collapsing of Scaling Functions
        1.4.3 Finite Size Scaling
        1.4.4 Fitting of Power-law Distributions
    1.5 Origin of Criticality
        1.5.1 Sweeping Through a Critical Field
        1.5.2 Self Organized Criticality
        1.5.3 Slow Training Towards Criticality
    1.6 Sources of Crackling Noise Analysed in this Work

2 Introduction to Temporal Point Processes
    2.1 Stochastic Point Processes
        2.1.1 Intensity of a Point Process
        2.1.2 Waiting Times
        2.1.3 Counting Process
        2.1.4 Classification of Temporal Point Processes
    2.2 Poisson Process
        2.2.1 Homogeneous Poisson process
        2.2.2 Non-Homogeneous or Local Poisson process
        2.2.3 Power-Law Poisson Process
    2.3 Renewal Process
        2.3.1 Dead-Time or Delayed Poisson Process
    2.4 Conditional Intensity and Self-Exciting Processes
    2.5 Sampling p.p. from Poisson
        2.5.1 Bernoulli Approach
        2.5.2 Random Thinning
        2.5.3 Time Rescaling

3 Introduction to Seismology as a Point Process
    3.1 Magnitudes and Energy
    3.2 Statistical Laws of Earthquakes
        3.2.1 Gutenberg-Richter Law
        3.2.2 Omori’s Law
        3.2.3 Productivity Law of Aftershocks
        3.2.4 Unified Scaling Law
    3.3 Physics of Seismogenesis
    3.4 Evaluation of Seismic Hazard
    3.5 The Epidemic Type Aftershock Sequence Model
        3.5.1 Implementation
        3.5.2 Limitations of the ETAS model

I Analytical Techniques

4 Estimation of Power-Law Exponents
    4.1 Problems with Standard Methods for Power-law Fitting
        4.1.1 Least Squares Fitting Method
        4.1.2 Rank Ordering Fitting Method
        4.1.3 Bias of the Fitted Exponent in front of Maximum Likelihood
        4.1.4 Limited Range of Power-Laws
    4.2 Maximum Likelihood Estimators of Power-Law Exponents
        4.2.1 Continuous Power-law Distribution
        4.2.2 Continuous Distribution inside an Interval
        4.2.3 Discrete Distribution of Natural Numbers
        4.2.4 Binned Data
        4.2.5 Data Given in Logarithmic Magnitudes
        4.2.6 The Likelihood Derivative is Monotonous
    4.3 Exponent Estimated as Function of Boundaries
        4.3.1 Representation of Maximum Likelihood Exponent Maps
        4.3.2 Estimation of the Error Bars from Numerical Results
        4.3.3 Goodness of the Fit
    4.4 Tests over Synthetic Data
        4.4.1 Synthetic Pure Power-Law
        4.4.2 Synthetic Power-Law with Exponential Cut-off
        4.4.3 Synthetic Log-normal Distribution
        4.4.4 Synthetic Extreme Value Distributions
    4.5 Exponent Maps of Gutenberg Richter Exponents
    4.6 Further Use

5 Techniques to Evaluate Temporal Correlations in Avalanches
    5.1 Maximum Likelihood of a Point Process
    5.2 Bi test for Local Poisson and Local Renewal Processes
        5.2.1 Definition of δt and δτ
        5.2.2 Distribution of δt
        5.2.3 Definition of H
        5.2.4 Kolmogorov-Smirnov Test and Interpretation of the Results
        5.2.5 Bi test adapted to Renewal Processes
        5.2.6 Results on Synthetic Data
    5.3 Distribution of Waiting Times
        5.3.1 The Density of Rates Affects the DWT
        5.3.2 Distribution of Rates from Modulated Intensity
        5.3.3 Intensity Extracted from Empirical Rates
        5.3.4 Empirical Rates in Rate-modulated Processes
        5.3.5 The Modulation of Intensity Affects the DWT
        5.3.6 Waiting Times of a Non-homogeneous Poisson Process
        5.3.7 Waiting Times of the ETAS Model
    5.4 Analysis of Production and Inhibition Rates
        5.4.1 Mean Aftershock Sequence Method
        5.4.2 Fitting of Omori Parameters
        5.4.3 Effect of Dynamic Intensity λ(t)
        5.4.4 Additional Problems with MAS
    5.5 Conclusions

II Experimental Results

6 Experimental Setup: Mechanical Failure Under Compression
    6.1 Introduction to the Mechanical Response of Materials To External Stresses
        6.1.1 Determination of the Failure Point
        6.1.2 Plastic and Viscoelastic Models
        6.1.3 Fracture and Collapsing of Porous Materials
    6.2 Acoustic Emission and Labquakes
        6.2.2 Production of Aftershocks and Unified Scaling Law
        6.2.3 Practical Application of Labquakes
    6.3 Experimental Setup
        6.3.1 Compression Device
        6.3.2 Piezoelectric Transducer
        6.3.3 Data Processing Unit
    6.4 Distortions in the Detection of AE Signals
        6.4.1 Acoustic Coupling
        6.4.2 Superposition and Occlusion of Avalanches
        6.4.3 Saturation at the Amplitude Limit
        6.4.4 Background Noise
        6.4.5 Modulation from c(ν)
        6.4.6 Anisotropic Attenuation of Acoustic Waves
        6.4.7 Response of the Transducer to a Simple Hit
    6.5 Interpretation of Recorded Magnitudes
        6.5.1 H1: Universal Avalanche Shape
        6.5.2 H2: Long Transducer Response
        6.5.3 H3: Convolution of Universal Shape and Falloff
        6.5.4 H4: Superposition of Signals

7
    7.1 Synthesis of Vycor
    7.2 Samples and Experimental Conditions

7.3

Time Sequences and Mechanical Properties . . . . . . . . . . . . . . . . . . . . . . . 129

7.4

Relation between Avalanche Properties . . . . . . . . . . . . . . . . . . . . . . . . . . 132

7.5

Scale Free Distribution of Energies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136

7.6

Temporal Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139 7.6.1

Rejection of Poisson Null-hypothesis . . . . . . . . . . . . . . . . . . . . . . . 142

7.6.2

Fitting of ETAS Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145

7.6.3

Unified Scaling Law for Waiting Times . . . . . . . . . . . . . . . . . . . . . 151

Concluding Remarks and Comparison with Earthquakes . . . . . . . . . . . . . . . . 154

Acoustic Emission of Porous Minerals during the Uniaxial Compression 8.1

xiv

Scale Invariance of Energies . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Failure of Vycor (SiO2 ) Under Compression

7.7 8

6.2.1

157

Failure under Compression of Goethite (FeO(OH)) . . . . . . . . . . . . . . . . . . . . 158 8.1.1

Sample Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

8.1.2

Experimental Details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

8.1.3

Compression Sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

8.1.4

Distribution of AE-energies . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

8.1.5

Clustering Between AE events . . . . . . . . . . . . . . . . . . . . . . . . . . 164

CONTENTS 8.2

8.3 9

Failure under Compression of Alumina(Al2 O3 ) . . . . . . . . . . . . . . . . . . . . . 167 8.2.1

Synthesis of the Samples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

8.2.2

Experimental Setup and Filtering of Signals . . . . . . . . . . . . . . . . . . . 169

8.2.3

Compressive Strength as a Function of Porosity and Rate . . . . . . . . . . . 169

8.2.4

Analysis of Acoustic Activity . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

8.2.5

Dependence of the Energy Exponent on Porosity . . . . . . . . . . . . . . . . 174

8.2.6

Endogenous Productivity of Events . . . . . . . . . . . . . . . . . . . . . . . 178

Towards a Unified Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

Crackling Noise in Martensitic Transformations 9.1

183

Introduction to Martensitic Phase Transitions . . . . . . . . . . . . . . . . . . . . . . 183 9.1.1

Stress-Driven Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

9.1.2

Thermally-Driven Transition . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

9.1.3

Magnetostructural Transitions . . . . . . . . . . . . . . . . . . . . . . . . . . 185

9.1.4

Sources of Crackling Noise . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

9.2

Scale Invariance in the Structural Transition of FePd Alloys . . . . . . . . . . . . . . . 186

9.3

Acoustic Emission and Calorimetry in the Martensitic Transformation of a CuZnAl Alloy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188 9.3.1

Calorimetric Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

9.3.2

Acoustic Emission Events . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

9.3.3

Rate Evolution and Filtering of AE signals . . . . . . . . . . . . . . . . . . . . 191

9.3.4

Scale Invariance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193

9.3.5

Time Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195

9.4

Evidence of Correlations in the Magnetostructural Transition of a NiMnGa SMA . . . 200

9.5

Remarks and Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

10 Avalanches in the Athermal 3D-RFIM

203

10.1 Introduction to the 3D Gaussian Random Field Ising Model . . . . . . . . . . . . . . . 204 10.2 Simulation of Out-of-equilibrium Magnetization Processes . . . . . . . . . . . . . . . 206 10.2.1

Initialization of the Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

10.2.2

Magnetization Loop . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 206

10.2.3

Adiabatic Propagation of Avalanches . . . . . . . . . . . . . . . . . . . . . . 207

10.2.4

The Sorted-List Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207

10.3 Role of Disorder in the Metastable Dynamics . . . . . . . . . . . . . . . . . . . . . . 209 10.3.1

Ordered and Nearly-Ordered Lattice (R & 0) . . . . . . . . . . . . . . . . . . 211

10.3.2

Supercritical Disorder (R < R∗c ) . . . . . . . . . . . . . . . . . . . . . . . . . . 213

10.3.3

Subcritical Disorder (R > Rc ) . . . . . . . . . . . . . . . . . . . . . . . . . . . 213

10.3.4

Critical Disorder (R∗c ) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 214

10.4 Scale Free Avalanches Close to the Critical Point . . . . . . . . . . . . . . . . . . . . 214 10.4.1

Sweeping Through the Critical Field Hc . . . . . . . . . . . . . . . . . . . . . 216 xv

CONTENTS 10.4.2 Spanning Avalanches and Lattice Animals in Finite Size Simulations 10.4.3 Classification of Avalanches . . . . . . . . . . . . . . . . . . . . . . 10.4.4 Evaluation of Exponents by means of MLEM . . . . . . . . . . . . . 10.5 Correlations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10.5.1 Precursory and Successive Activity to Hc (L). . . . . . . . . . . . . . 10.5.2 Precursor and Successive Activity to HMS . . . . . . . . . . . . . . . 10.5.3 Extrapolation to the Thermodynamic Limit . . . . . . . . . . . . . . 10.5.4 Evidence of Independence Between Avalanches . . . . . . . . . . . . 10.6 Crackling Noise Does Not Imply Self-Excitation . . . . . . . . . . . . . . . . 11 Conclusions

xvi

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

. . . . . . . . .

216 220 222 228 229 230 233 234 235 237

Chapter 1

Introduction to Avalanche Dynamics

Avalanche dynamics are a complex phenomenon associated with instabilities, such as first order phase transitions or pinning-depinning transitions, in interacting systems governed not by thermal fluctuations or annealed noise, but by the balance between the interactions and the degree of quenched disorder (section 1.3). This interplay between interactions and disorder may give rise to scale-free avalanche processes named 'crackling noise' (section 1.3.3). Far from being an unusual behaviour, crackling noise is exhibited by many natural systems. In the last decades, several explanations have been proposed for the ubiquity of scale-free avalanches in nature, such as the paradigms of Self Organized Criticality, the sweeping of instabilities or the self-training towards disorder (section 1.5). Beyond the statistics of individual avalanches, however, until recent years little effort had been devoted to the study of temporal correlations between avalanches, except for processes involving real hazards such as seismology or solar flares.

1.1 The Complex System Paradigm

Historically, physicists have relied on the simplification of natural systems as a valid first approach to understanding the mechanics underlying physics. The usual point of view was that nature is sophisticated, or complex, because of the superposition of independent fundamental laws that can be understood individually in order to explain the whole. Complex problems were solved from a reductionist approach, relying on the possibility of understanding the whole problem from the solution of its individual smaller elements. As a counterpart, reductionist science requires a deep knowledge of the elements constituting any complex system, which cannot be extrapolated to a slightly different problem. Any variation of the initial conditions or the constituent elements must be solved again from scratch, since a reductionist approach does not predict how the system will be altered.

During the last quarter of the twentieth century a new paradigm appeared, fed in part by advances in computational power that made simulations of non-linear models affordable. The study of complex systems deals with the emergence of synchronization [Strogatz, 2000], herd behaviour [Reynolds, 1987, Cont and Bouchaud, 2000], pattern formation [Meinhardt and Meinhardt, 1982] and fractal and multifractal geometries [Mandelbrot, 1983, Stauffer and Aharony, 1994, Cowie et al., 1995], which cannot be explained only from the fundamental laws governing the elementary parts of the system. In 1972, P. W. Anderson [Anderson et al., 1972] presented a summary of how a new kind of physics was breaking the rules of classical statistical mechanics. The title of the article ("More is Different") summarised the idea that some complex physical phenomena emerge from the collective behaviour of constituent elements that are themselves described by a simple set of physical laws. Complexity emerges from the addition of interactions between these elements: the whole is more than the sum of its parts. The laws emerging in complex systems can be treated within statistical physics, with the caveat that, in many cases, the ergodic principle is broken [Bantilan Jr and Palmer, 1981, Palmer, 1982, Bouchaud, 1992], which requires the development of a new set of analytical tools. This characteristic enables the development of statistical laws from a holistic approach: the complex system can be understood from universal laws able to explain its particularities by setting up tunable variables, making a reductionist description of the system unnecessary. In many physical systems, the success of holistic approaches is related to the presence of scale-free behaviour, characterized by the divergence of typical scales and the ubiquity of power-law statistical distributions [Bak, 1996]. However, even in such conditions, the holistic approach may be too naive, and some particular cases need to be studied separately in a reductionist manner to understand the system in detail.
This discussion is especially controversial in the case of earthquakes [Ito, 1993, Kagan, 1999, Mucciarelli, 2008, Werner, 2008]. The results presented in this work attempt to give a holistic picture of some avalanche phenomena studied as point processes. However, as will be shown, some results are not conclusive and leave a series of open questions.

The complex system paradigm is built upon the early revolution of condensed matter physics, which explains the emergence of thermodynamic phase transitions from the microscopic properties of individual particles, either in the classical or the quantum mechanics paradigm. The holistic approach to the fundamental laws governing matter helped to understand the emergence of ferromagnetism from exchange interactions [Heisenberg, 1926, Aharoni, 1996], the origin of empirical models such as the Van der Waals theory of fluid phase transitions [Dzyaloshinskii et al., 1961], and anomalous phenomenology such as superconductivity [Landau and Ginzburg, 1950] or superfluidity [Landau, 1941, Leggett, 1999]. In the last decades the techniques imported from condensed matter have gone beyond the boundaries of physics and have been applied to other sciences such as biology [Beggs and Plenz, 2003, Moe et al., 1964], sociology [Lagi et al., 2011, Barabasi, 2005], economics [Chowdhury and Stauffer, 1999, Cont, 2001, Sornette, 2009, Mantegna and Stanley, 1999], epidemiology [Andersson and Britton, 2000, Pastor-Satorras and Vespignani, 2001], evolution [Yule, 1925, Gould, 1972] and computer science [Pastor-Satorras and Vespignani, 2007]. These disciplines dealt historically with complex problems involving collective behaviour and different sources of noise or disorder. The holistic approach to these problems found profound analogies with well studied physical phenomena [Stauffer et al., 2006, Chakrabarti et al., 2007].

1.2 Metastability and Avalanche Dynamics

One of the major accomplishments of Condensed Matter Physics is the explanation of thermodynamic phase transitions as a property emerging from the interactions between particles. Thermodynamic systems typically exhibit first order phase transitions (FOPT), identified by the coexistence of two different states of matter (or phases) with clearly different thermodynamic properties, i.e. displaying a discontinuity in a magnitude identified as the order parameter of the transition (m). At the FOPT the system is partitioned between two equilibrium states in a fraction controlled by the latent heat (L) supplied by the environment. On the contrary, continuous phase transitions are characterised by the divergence of susceptibilities, such as the specific heat C = dU/dT, and of the correlation length ξ [Privman, 1990] close to a so-called critical point at temperature T_c:

    C ∼ |T − T_c|^(−α),    ξ ∼ |T − T_c|^(−ν)    (1.1)

No latent heat nor phase coexistence is associated with continuous phase transitions and, thus, all extensive variables are continuous across the phase transition. The divergence of the specific heat is a trademark of the critical point, characterised by scale-free phenomena and power-law distributions [Kadanoff et al., 1967]. One of the most used approaches in Condensed Matter Physics is the development of Mean Field Models, swapping the finite range of the interactions for an all-to-all interaction and thus neglecting the underlying metric or topological space. Under the Mean Field approach, first order and continuous phase transitions can be understood from Landau's Theory [Landau, 1936]. The inclusion of interactions between the elements of the system gives rise to a double-minimum free energy landscape f(m, T − T_c) that can be expressed as a power series expansion truncated to low orders close to the critical temperature T_c. The simplest such energy landscape depends only on the external field H, the temperature T and the order parameter m defining the FOPT of the system, usually computed as the average state of the individual particles. Landau's theory takes advantage of the symmetries of each system and the properties of the critical point to reduce the number of terms in the power series. The minimal free energy landscape needed to depict a system close to a continuous phase transition reads:

    f(m, T − T_c) = f(T_c) + α(T − T_c) m^2 + (1/2) β m^4 − Hm    (1.2)

In the absence of field, odd-power orders disappear, manifesting the thermodynamic stability of the system. The quadratic term serves as a first order approximation to the shape of the well. The continuous phase transition at the critical point T_c, characterised by a divergence of the susceptibility χ_m ∼ (d^2 f/dm^2)^(−1) evaluated at (0, 0), imposes that the dependence on T − T_c has to enter the term quadratic in m. The interactions between particles act through the term β, constructing the two energy wells below T_c that characterise the two phases. The driving external field H appears in the coupling term Hm and is responsible for the FOPT by tilting the landscape towards one of the phases. Above the critical point T_c a single energy well is preserved: there is no phase transition and the system responds homogeneously and smoothly to the driving of H. Since the Landau mean field model is built upon a power expansion, the properties of the critical point, defined by the exponents of the divergence of the characteristic length-scales, are universal and depend only on the symmetries of the system, i.e. on the surviving terms in the expansion. Thus, mean field solutions of different systems sharing the same symmetries can be described with the same set of critical exponents, defining a so-called Mean Field Universality Class [Hohenberg and Halperin, 1977]. Below T_c the phase separation characteristic of a FOPT appears as the double-well potential, allowing the coexistence of states in thermodynamic equilibrium. In the absence of thermal fluctuations, or when the energy wall separating the phases is too high, the ergodic principle is weakly broken [Bouchaud, 1992] and fractions of the system are trapped in either one of the states, as shown in Fig. 1.1(a). The evolution of the external parameter H tilting the energy landscape will drive one of the states of the FOPT out of equilibrium, creating the phenomenon of metastability (Fig. 1.1(b)).
In the extreme athermal case, the metastable state is only able to transform when the energy wall vanishes. The energy needed to transform the metastable state is already supplied by the difference between its free energy and that of the stable state. Thus, in the event of transformation (guaranteed by a persistent driving of H), the process is sudden and independent of the driving rate (Fig. 1.1(c)). Avalanche is the common name given to this sudden out-of-equilibrium transformation of a metastable state. Although the avalanche needs a driving parameter H both to grow the instability of the metastable state and to trigger the transformation, H does not participate in the propagation of the avalanche: the phase transition is much faster than the evolution of H or than any remaining characteristic time governing the system. This approximation is called adiabatic, since the propagation does not require additional energy from the driving field. In the case of the Landau theory, a single avalanche transforms the metastable phase exactly at the vanishing of the free energy well in the athermal limit. From d^2 f/dm^2 = 0, the triggering field (H_coer), the magnetization reached before the avalanche (m_coer) and the avalanche size (Δm) can be solved analytically:

    m_coer = [(1/3)(α/β)(T_c − T)]^(1/2),    H_coer = (4α/3) [α/(3β)]^(1/2) (T_c − T)^(3/2),    Δm = 3 m_coer    (1.3)

The stall of the system inside metastable states in the athermal limit gives rise to a rate-independent hysteresis loop with coercive field H_coer (see left panel of Fig. 1.1).

[Figure 1.1: Hysteresis loop (left panel) originated by the growth of a metastable state under a smooth driving of H (a → b), until the energy wall disappears at (H_coer, m_coer) and the system transforms through a sudden avalanche (c) towards the stable state m ≥ 2m_coer (d). Right panels: the free energy landscape f(m) at the stages (a)-(d).]

The metastable behaviour of the double-well Landau free energy is illustrated in Fig. 1.1. Many natural phenomena exhibit an intermittent or jerky response to external stimuli that can be described as avalanche dynamics. However, the values of the triggering fields are far from unique: they are distributed over a broad range of H, and the propagation of an avalanche can be pinned, stalling the phase transition at intermediate values of m. As a consequence, the prediction of avalanche triggering fields and of the magnitude of the avalanches is an open problem characteristic of each avalanche process. Although in this chapter avalanches have been introduced through a FOPT, not all avalanche processes can be described within this paradigm. Many avalanche processes are irreversible and involve the presence of absorbing states [Dickman et al., 1998, Frette et al., 1996]. These avalanche dynamics can be defined as pinning-depinning transitions [Fisher, 1983] or Self Organized Critical processes (section 1.5.2). All avalanche processes share a set of ideal conditions, introduced in the Landau theory example and summarised in the following relation:

    τ_r ≪ τ_H ≪ τ_T    (1.4)

there must exist a clear separation between the characteristic time-scale of the driving rate triggering avalanches (τ_H) and a much faster avalanche propagation or relaxation time (τ_r), in order to define the adiabatic limit and be able to identify individual avalanches. Both time-scales must be faster than any characteristic time of thermal activation of metastable states (τ_T), preventing the driving towards an equilibrium state and thus defining the athermal limit. Besides the origin of the underlying instability and the adiabatic separation between temporal scales, the variability of avalanche processes found in nature has its origin in the heterogeneity of the systems. To understand the complexity of avalanche dynamics, quenched disorder must be introduced in the models.
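The spinodal values of eq. 1.3 can be checked numerically. The sketch below (a minimal illustration, not part of the thesis's analysis; the values α = β = 1, T_c − T = 1 and the tracking window 0.5 are arbitrary choices) sweeps H upward, follows the metastable minimum of the Landau free energy of eq. 1.2, and reports the point where the well vanishes:

```python
import numpy as np

alpha, beta, dT = 1.0, 1.0, 1.0       # illustrative units, dT = T_c - T > 0

def minima(H):
    """Minima of f(m) = -alpha*dT*m^2 + 0.5*beta*m^4 - H*m."""
    # stationary points: f'(m) = -2*alpha*dT*m + 2*beta*m^3 - H = 0
    roots = np.roots([2.0 * beta, 0.0, -2.0 * alpha * dT, -H])
    real = roots[np.abs(roots.imag) < 1e-6].real
    # a stationary point is a minimum if f''(m) = -2*alpha*dT + 6*beta*m^2 > 0
    return sorted(m for m in real if -2.0 * alpha * dT + 6.0 * beta * m * m > 0)

m = min(minima(0.0))                  # start in the negative well (metastable branch)
for H in np.arange(0.0, 1.5, 1e-4):
    wells = [w for w in minima(H) if abs(w - m) < 0.5]
    if not wells:                     # the well vanished: the avalanche fires
        m_new = max(minima(H))
        print(f"H_coer ~ {H:.4f}, m at trigger ~ {m:.4f}, avalanche dm ~ {m_new - m:.4f}")
        break
    m = min(wells, key=lambda w: abs(w - m))
# eq. 1.3 with these values: H_coer = 4/(3*sqrt(3)) = 0.7698, m_coer = 0.5774, dm = sqrt(3)
```

The detected triggering field, magnetization and jump agree with eq. 1.3 to the resolution of the field step, and the jump lands on the stable branch at m = 2m_coer.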

1.3 Criticality Induced by Disorder

Since avalanche phenomena are characterised by athermal conditions, all the stochastic properties of the avalanche process should be related to an ensemble of parameters presenting some sort of quenched disorder. This kind of disorder is stationary over time, and the athermal limit weakly breaks the ergodic principle [Bouchaud, 1992]. Thus, avalanche dynamics cannot be solved with the classical tools of equilibrium statistical physics. In thermal systems, the presence of disorder can limit the extent of characteristic lengths and prevent the divergence of the thermodynamic magnitudes in the vicinity of the continuous phase transition [Harris, 1974]. In the athermal limit, the avalanche process is governed by the proportion between the interactions and the magnitude (or the variability) of the disorder, giving rise to a new phase separation and a new class of criticality. The presence of a critical degree of disorder separates avalanche dynamics into three different behaviours in the thermodynamic limit [Sethna et al., 2001], which act as fixed points of the renormalization group (section 1.4). One of the best frameworks to illustrate this transition is the Random Field Ising Model with metastable dynamics [Sethna et al., 1993], which will be introduced in Chapter 10. The following subsections give a qualitative review of the three avalanche regimes.

1.3.1 Low Degree of Disorder: Snapping Noise

Whenever the degree of disorder is low, the variation of the energy walls generated by disorder in the free-energy landscape is much smaller than the typical energy exchanged by the interaction. The propagation of an avalanche is thus rarely pinned or stalled, and the phase transition will typically be performed by a massive, quick avalanche with ballistic growth that spans the whole system, no matter how big it is. In the thermodynamic limit, the avalanche process will always consist of a single massive avalanche and possibly a series of finite and typically small avalanches, the few that can still get pinned by disorder. This regime is usually referred to as snapping noise, for its similarity to a sudden break [Sethna et al., 2001].


Because massive avalanches activate the major part of the system, irreversible snapping noise exhibits quasi-periodicity in the typical recurrence times [Ben-Zion et al., 1999, Sornette, 2009], similar to the predictability of H_coer stated in eq. 1.3. Snapping noise is characteristic of systems displaying synchronization, such as self-organized biological systems like the synaptic avalanches in neuronal networks [Beggs and Plenz, 2003] or the cardiac cycle [Moe et al., 1964], which depend on the right tuning of disorder [Osorio et al., 2010, Christensen et al., 2015] to function. Other self-organized systems, better described by Kuramoto oscillators [Corral et al., 1995], may need a relaxation time to acquire the synchronized state, but ultimately also rely on the strength of the interactions against the elements of disorder: the signal must spread through the whole system to reach the synchronized state.

1.3.2 High Degree of Disorder: Popping Noise

Whenever correlations are strongly damped, by either strong disorder or weak interactions, the particles of the system tend to transform individually or through the formation of small clusters. The clusters are usually related to the topological space defining the range of interactions [Stauffer and Aharony, 1994] and are thus strongly dependent on the model if the process is being simulated [Kuntz et al., 1999]. Typical avalanches in this regime, called popping noise, are finite and small. Since these avalanches have a characteristic length-scale, in the thermodynamic limit popping noise cannot be distinguished from a smooth response of the system.
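The two regimes above can be previewed numerically with a zero-temperature mean-field version of the Random Field Ising Model mentioned in section 1.3 (a deliberately minimal sketch, not the nearest-neighbour model of Chapter 10: all-to-all coupling J = 1, Gaussian random fields of standard deviation R, adiabatic driving; the system size and R values are illustrative):

```python
import numpy as np

def largest_avalanche_fraction(R, N=2000, seed=0):
    """Half hysteresis loop of a zero-T mean-field RFIM (J = 1):
    return the size of the largest avalanche as a fraction of N."""
    rng = np.random.default_rng(seed)
    h = np.sort(rng.normal(0.0, R, N))[::-1]   # quenched fields, most unstable spin first
    s = -np.ones(N)
    m, largest = -N, 0
    for i in range(N):
        if s[i] > 0:                # already flipped during an earlier cascade
            continue
        H = -h[i] - m / N + 1e-9    # smallest external field destabilizing spin i
        size = 0
        while True:                 # adiabatic cascade at fixed H
            unstable = np.where((s < 0) & (m / N + h + H > 0))[0]
            if unstable.size == 0:
                break
            s[unstable] = 1.0
            m += 2 * unstable.size
            size += unstable.size
        largest = max(largest, size)
    return largest / N

for R in (0.5, 2.5):    # well below / well above the mean-field critical disorder
    print(f"R = {R}: largest avalanche fraction = {largest_avalanche_fraction(R):.2f}")
```

At low disorder a single system-spanning avalanche flips a macroscopic fraction of the spins (snapping noise); at high disorder every avalanche stays microscopic (popping noise).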

1.3.3 Critical Degree of Disorder: Crackling Noise

Between the previous two regimes there may exist a critical degree of disorder, originating energy-wall variations compatible with the strength of the interactions. Avalanches nucleated within this energy landscape can propagate with a branching ratio close to 1, i.e. close to the percolation threshold [Stauffer and Aharony, 1994], and are usually referred to as crackling noise. These avalanches will be slow, memoryless, scale-free and fractal; thus, they are inherently unpredictable. Crackling noise characterises new universality classes related to this athermal critical point. In the last decades a major research effort has been oriented towards the determination of these new universality classes [Dahmen and Sethna, 1996, Dahmen et al., 2009], and critical models have been proposed to explain the scale-free nature of many natural avalanche phenomena (section 1.5). Most of the research, however, has been based on the extension and magnitude of the scale-free properties of the avalanches, arising naturally from the divergence of correlation lengths. The study of the distribution of triggering fields and of temporal correlations in slowly driven crackling noise avalanche processes has been set aside as a minor problem. Some studies have been performed [Bouchaud, 1992, Boffetta et al., 1999, Bel and Barkai, 2005, Cerruti and Vives, 2009, Zare and Grigolini, 2012, Corral, 2014], and they have a prominent importance in seismology [Utsu et al., 1995, Bak et al., 2002, Corral, 2003], but the topic is still far from being understood in a unified framework [Corral, 2014].
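The role of the branching ratio can be illustrated with a Galton-Watson cascade (a standard toy model, not the thesis's own; all parameters are illustrative): each active site triggers a Binomial(2, p) number of new sites, so the branching ratio is 2p, and only at 2p = 1 do avalanche sizes become scale free:

```python
import numpy as np

rng = np.random.default_rng(2)

def avalanche_size(p, cap=100_000):
    """Total size of a cascade where every active site triggers Binomial(2, p) new sites."""
    size = active = 1
    while active and size < cap:
        active = rng.binomial(2 * active, p)   # offspring of all currently active sites
        size += active
    return size

for ratio in (0.5, 1.0):                       # subcritical vs critical branching ratio 2p
    sizes = [avalanche_size(ratio / 2.0) for _ in range(5000)]
    frac = sum(s > 100 for s in sizes) / len(sizes)
    print(f"branching ratio {ratio}: fraction of cascades with size > 100 = {frac:.3f}")
```

At branching ratio 1 the size distribution develops the heavy power-law tail characteristic of crackling noise, while the subcritical cascade produces only small, finite avalanches, as in the popping regime of section 1.3.2.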

1.4 Determination of Critical Parameters

Systems close to criticality can be studied by means of evaluation techniques based on both the scale-free nature of magnitudes and their critical divergence with the distance to the critical point, stated in eq. 1.1. In general, we will denote the distance to the critical point by a variable ε instead of T − T_c, since criticality is not limited to thermal fluctuations, as stated in section 1.3. Critical points are characterized by the divergence of correlation lengths, which originates scale-free phenomena. In short [Stanley, 1987], whenever a variable is invariant under scale transformations of a certain magnitude x, this scale-free variable must be a homogeneous function of x, transforming under the scaling x → λx as:

    φ(x) → φ(λx) = g(λ) φ(x) = λ^β φ(x)    (1.5)

leading to a power-law relation with the scale x:

    φ(x) = x^β φ̃    (1.6)

The emergence of power-law relations is a natural outcome of scale invariance exactly at the critical point. For example, power-laws appear in the probability density function (pdf) of variables s such as cluster sizes or avalanche energies:

    D(s) = s^(−τ) D̃    (1.7)

Magnitudes diverging close to critical points (φ(ε) ≈ ε^(−α) φ̃ when ε → 0), as stated in eq. 1.1, such as susceptibilities, are related to power-law distributions compatible with eq. 1.7. Thus the constant D̃ will, in general, depend on both ε and s:

    D(s, ε) = s^(−τ) D̃(s, ε),    D(s, ε) = ε^(−α′) D̃(s, ε)    (1.8)

In order to fulfil such an equivalence [Privman, 1990], in the vicinity of a critical point the scale-free distribution takes the general form:

    D(ε, s) = s^(−τ) D̃(s ε^(−σ)),    D(ε, s) = ε^(−α′) D̃(s ε^(−σ))    (1.9)

The constant D̃ of eq. 1.7 becomes a scaling function truncating the power-law behaviour at the characteristic scale s ∼ ε^σ. Only for ε = 0 is the value of D̃ independent of s; then D is scale free and the related statistical moments defining susceptibilities can diverge. Since both expressions must be equivalent, the exponents are related by α′ = στ.

CHAPTER 1. INTRODUCTION TO AVALANCHE DYNAMICS Two important lectures can be generalized from this example: First, magnitudes at the critical point are fully determined by power-law exponents (α, β, τ, σ etc.) and scaling functions of the type ˜ −σ ). Secondly, there exist analytical relations between exponents α from eq. 1.1, defining the φ(xǫ divergence of magnitudes at the critical point, and the scale-free exponents β and σ. Furthermore, due to physical constrains, some of the remaining exponents related to different magnitudes are not independent [Stanley, 1987, Dahmen and Sethna, 1996] reducing the number of free parameters determining the universality class of the system. Different approaches can be taken to determine the set of independent critical exponents from the critical point of the system.

1.4.1 The Renormalization Group

The renormalization group (RG) [Wilson, 1971, Binney et al., 1992] studies the way in which the system space maps into itself under coarse-graining [Sethna, 2006], i.e. it allows the study of the critical point from the behaviour of the system under scale transformations a → λa, where x = a now represents a characteristic length related to the range of interactions [Cardy, 2012] (the minimum distance between the sites of a lattice, for example). The RG is based upon the divergence of the correlation length ξ stated in eq. 1.1. Since a is a length-scale, it follows from dimensionality that the scaling exponent of the correlation length is β_ξ = 1 [Privman, 1990], and thus the scaling relation for the particular case of ξ reads:

    ξ(ε, a) = a ξ̃(a ε^ν),    ξ(ε, a) = ε^(−ν) ξ̃(a ε^ν)    (1.10)

The scale-free behaviour of extensive magnitudes is related to this scale transformation of ξ. The ideas of the RG allow the development of analytical tools used to obtain theoretical knowledge of continuous macroscopic phase transitions [Nattermann, 1997].
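The simplest concrete instance of such a coarse-graining is the exact decimation of the one-dimensional Ising chain (a textbook example, included here only to make the idea of an RG flow tangible): summing over every second spin doubles the lattice spacing a → 2a and renormalizes the reduced coupling K = J/k_BT to K' = (1/2) ln cosh(2K):

```python
import math

def decimate(K):
    # exact coarse-graining step for the 1D Ising chain: trace out every second spin
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 2.0                        # strong initial coupling K = J / k_B T
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)
print([round(k, 3) for k in flow])   # monotonic flow towards the trivial fixed point K = 0
```

Every finite coupling flows to the trivial K = 0 fixed point, which is the RG statement that the 1D chain has no finite-temperature critical point; at a genuine critical point the coupling would instead sit at an unstable fixed point of the map.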

1.4.2

Collapsing of Scaling Functions

Whenever the external variables are well controlled, one can use equations of the type 1.9 to identify the set of critical exponents (α, β, σ) as well as the critical point where ε = 0. The scaling function can be obtained from the collapse of the empirical values of the new variables (φ and xε^{−σ}), such that, for example:

φ̃(xε^{−σ}) = φ(ε, x) ε^{α′}    (1.11)

Different relations between x and ε should collapse onto the same φ̃ if the right critical parameters are selected. Furthermore, the collapse allows the evaluation of the scaling function, which also shares universality [Zaiser, 2006, Papanikolaou et al., 2011, Laurson et al., 2013]. Notice that several systems need a more sophisticated parameter space defining the distance to the critical point, described by so-called multivariate scaling functions [Pérez-Reche and Vives, 2003]. The Random Field Ising Model (Chapter 10) serves as a paradigmatic example of bivariate scaling, where the distance to the critical point is defined in the space of disorder (R) and external field (H).
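As an illustration of this collapse procedure, the following sketch (the exponent values and the shape of the scaling function are invented for illustration, not taken from the thesis) builds synthetic curves obeying eq. 1.11 and checks that rescaling with the correct (α′, σ) maps them all onto the same φ̃:

```python
import numpy as np

# Invented exponents and scaling function, for illustration only.
alpha_p, sigma = 1.5, 2.0                 # alpha' and sigma
phi_tilde = lambda u: np.exp(-u)          # a simple truncating scaling function

def phi(eps, x):
    """Synthetic observable obeying eq. 1.11:
    phi(eps, x) = eps**(-alpha') * phi_tilde(x * eps**(-sigma))."""
    return eps ** -alpha_p * phi_tilde(x * eps ** -sigma)

# Evaluate each curve at the x corresponding to the same rescaled
# variable u = x * eps**(-sigma); multiplying by eps**alpha' must
# then give the same value phi_tilde(u) for every eps: the collapse.
u = 1.0
collapsed = [eps ** alpha_p * phi(eps, u * eps ** sigma)
             for eps in (0.1, 0.2, 0.4)]
print(collapsed)  # three (numerically) identical values, phi_tilde(1.0) = exp(-1)
```

With the wrong exponents the three values would differ, which is the practical criterion used when fitting critical parameters by collapse.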

1.4.3

Finite Size Scaling

A particular case of collapse can be performed to test the dependence of the critical magnitudes on the size of the system. Finite Size Scaling (FSS) is an analytical technique based on the collapse of critical magnitudes as a function of the length of the system L → λL. Close to the critical point, the correlation length of an infinite system (ξ_{L→∞}) diverges and thus, typically, ξ∞ ≥ L. In finite-size systems, L cuts off long-distance correlations, so that an appreciable finite-size rounding of critical-point singularities appears [Privman, 1990], governed by the ratio ξ/L. Thus, scale-free variables will now also scale with the size of the lattice such that:

φ(ε) →  φ(ε, L) = L^d φ̃(Lε^ν)
        φ(ε, L) = ε^{−dν} φ̃(Lε^ν)    (1.12)

where ν is the scaling exponent from eq. 1.10. Notice that the scaling functions φ̃ may include additional scale-free dependences, such as the ones stated in eq. 1.9 in the case of size distributions. This relation can be tested from the statistical analysis of systems of different sizes. The resulting scaling function can be collapsed to fit both ν and d. FSS can be used to define fractal dimensions and is especially useful to obtain the properties of the critical point as an extrapolation of finite-size simulations to the thermodynamic limit (L → ∞).
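The fit of ν and d mentioned above can be sketched numerically: generate synthetic data obeying eq. 1.12 (the values d = 2, ν = 1 and the scaling function are invented for illustration) and select the exponents that minimize the spread of the rescaled curves across system sizes:

```python
import numpy as np

# Illustrative values: d = 2, nu = 1.0, and a made-up scaling function.
d, nu = 2, 1.0
phi_tilde = lambda w: w ** 2 / (1.0 + w ** 4)   # invented, peaked scaling function

def phi(eps, L):
    """Synthetic magnitude obeying eq. 1.12: phi = L**d * phi_tilde(L * eps**nu)."""
    return L ** d * phi_tilde(L * eps ** nu)

# At fixed w = L * eps**nu, the rescaled value L**(-d) * phi must not
# depend on L when (d, nu) are correct; measure the relative spread.
sizes = [16, 32, 64]
w = 0.5

def spread(d_try, nu_try):
    vals = [L ** -d_try * phi((w / L) ** (1 / nu_try), L) for L in sizes]
    return np.std(vals) / np.mean(vals)

# Coarse grid search over candidate exponents.
best = min(((d_try, nu_try)
            for d_try in (1, 2, 3)
            for nu_try in (0.5, 1.0, 2.0)),
           key=lambda pair: spread(*pair))
print(best)  # (2, 1.0): the exponents used to generate the data
```

In practice the spread is evaluated over many values of w (with interpolation between measured points) and minimized continuously, but the principle is the same.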

1.4.4

Fitting of Power-law Distributions

The exact determination of the critical point is difficult in many natural phenomena, since the thermodynamic parameters themselves may be difficult to measure. This situation complicates the application of scaling techniques. However, one can trust the scale invariance hypothesis summarized in section 1.2 and the ubiquity of criticality in nature suggested in section 1.5 to justify the presence of power-law distributions. If we consider that the system is indeed very close to criticality, the scaling function D̃ plays a minor role and can be considered constant over a broad range of scales. Thus, the distributions of events should be compatible with a power-law hypothesis, corresponding to the critical exponent β or τ from eq. 1.6. Notice that, for a general system, the fitted power-law exponent of a distribution does not define the scaling parameter τ: both values coincide exactly only at the critical point. In general, the fitting will be distorted by the shape of D̃.



1.5

Origin of Criticality

The separation between snapping, popping and crackling noise has transcendental implications in avalanche dynamics. Critical disorder introduces a transition between microscopic and macroscopic behaviour. In the thermodynamic limit (large systems) the microscopic detail of the avalanche process disappears, as can be shown by means of RG techniques [Sethna, 2006]. The only non-trivial avalanche process is found exactly at the critical point, between a smooth response regime (popping noise in the thermodynamic limit) and a regime governed by a single massive avalanche (snapping noise in the thermodynamic limit).

Avalanche dynamics pinned by disorder (and thus, with multiple jumps) are ubiquitous in nature. It is not our claim that all non-trivial avalanche phenomena are crackling noise. Several pieces of evidence show that characteristic scales participate in natural avalanche processes [Knopoff and Kagan, 1977], affecting both the microscopic and macroscopic limits of the process close to the observational window. Furthermore, even snapping and popping noise can exhibit a broad (although not unbounded) range of scales close to criticality [Sethna et al., 1993], and thus observational ranges can, in many cases, still be below the characteristic scales of the process. But crackling noise is also characterised by the presence of power-law distributions, and many natural phenomena spontaneously exhibit fat-tailed distributions that are compatible with the power-law hypothesis [Clauset et al., 2009]. Since crackling noise should represent a rare fixed critical point, inaccessible in the thermodynamic limit, the ubiquity of crackling noise in nature requires an explanation. One can naively suggest that systems close to criticality are the ones that are difficult to miss: such processes dominate all scales because of their scale-free nature and occlude other finite natural phenomena.
Some authors claim that, in many cases, what is perceived as power-law behaviour is instead a superposition of different finite-sized phenomena causing a false sensation of scale-invariance [Sornette, 2004, Pueyo et al., 2007]. Alternatively, the ubiquity of power-laws can also be explained as a consequence of the scale invariance of an underlying topological space, such as networks grown via preferential attachment [Barabási and Albert, 1999], or of a scale-free distribution of triggering field values [Sornette, 2004]. However, empirical evidence shows that many processes exhibiting power-law behaviour are indeed critical [Sethna et al., 2001]. Several distinct but related mechanisms have been proven to explain the ubiquity of critical behaviour without an accidental fine-tuning of the disorder, as summarised in the following subsections.

1.5.1

Sweeping Through a Critical Field

The emergence of power-law behaviour can indeed be related to the presence of the critical point, but the system does not need to remain stationary at it. Instead, the relation between interactions and disorder can be continuously evolving or fluctuating, crossing a region close to criticality. Larger scales in the avalanche process are only reached in the vicinity of the critical point, exhibiting then a power-law behaviour that is indeed a trademark of criticality. In fact, the distribution integrated over the whole process should also be a power-law [Sethna et al., 1993]. This idea, called the sweeping of an instability, has been suggested as a plausible explanation for many natural phenomena exhibiting crackling noise [Sornette, 1994]. In fact, the power-law distributions analysed in Chapter 10 are obtained from metastable magnetization ramps in a Random Field Ising Model including far-from-critical avalanches, and can be explained under this paradigm. The sweeping of an instability is even better suited for processes involving absorbing states. Micromechanical mean field models [Dahmen et al., 2009], contrasted with several empirical data sets [Salje and Dahmen, 2014, Antonaglia et al., 2014], have shown a slow convergence towards criticality before the mechanical failure of plastic materials, originated by the occurrence of the first avalanches large enough to span the system.

1.5.2

Self Organized Criticality

Self-Organized Criticality (SOC) designates a broad ensemble of dynamical processes that are able to drive systems towards a critical state without any external tuning of the internal variables [Bak et al., 1988, Bak, 1996]. These phenomena rely on the presence of absorbing states [Dickman et al., 1998] and are typically associated with conservative processes [Manna et al., 1990], although this is claimed not to be a necessary condition [Janosi and Kertesz, 1993]. SOC has been observed empirically [Frette et al., 1996, Jaeger et al., 1990, Oddershede et al., 1993] and analytically in many simple models [Bak et al., 1988, Manna, 1991, Drossel and Schwabl, 1992, Olami et al., 1992]. In relation to the scope of this work, it is worth mentioning that some SOC models have been suggested to exhibit temporal correlations in some avalanche processes [Helmstetter et al., 2004, Deluca et al., 2014].

SOC has become a popular research topic in the last decades, and has been interpreted as the most plausible explanation of the ubiquity of power-laws in nature. The existence of criticality as a consequence of self-organization has been suggested to lie behind the majority of natural processes [Bak, 1996]. Fortunately, not all of nature lies in a critical state; otherwise the universe would be unpredictable, and physical laws would all be governed by uncertainty and scale invariance. As stated in section 1.3.1, biological systems self-organize in a different way, allowing the collective synchronous response of interactive systems. On the other hand, it has recently been suggested that the same self-organization mechanisms leading to criticality can drive systems beyond the critical state and grow instabilities that are released in outlying strong events, commonly known as dragon-kings [Sornette, 2009]. Dragon-kings are too frequent to be expected from a power-law distribution and thus constitute a new family of events, similar to snapping avalanches. Furthermore, this new emerging behaviour presents some statistical properties that allow short-term prediction, such as an increase or decrease in autocorrelations, or a response to external fluctuations [Sornette and Ouillon, 2012, Brinkman et al., 2015].


1.5.3

Slow Training Towards Criticality

A slower kind of self-organization could also explain another empirical result. Ordered materials tend to disorder spontaneously when driven through transformations. This effect has been reported in the structural transformation (see Chapter 9) of shape memory single crystals after cycling [Pérez-Reche et al., 2004a]. The inclusion of disorder in a perfectly ordered material weakens the energy barriers and reduces the metastability of the system. As a consequence, a disordered crystal is closer to an equilibrium state and preferred over other configurations. However, the cycling of structural transformations does not increase the disorder indefinitely but, instead, reaches a stationary disorder configuration that allows reproducible cycling [Pérez-Reche et al., 2004a]. This disorder is shown to correspond to a critical configuration [Pérez-Reche et al., 2007].

1.6

Sources of Crackling Noise Analysed in this Work

This manuscript summarizes the results obtained from the analysis of crackling noise in different phenomena and numerical models. In simulations, avalanches are well identified events, perfectly determined from the outputs of the codes. In our experimental data, avalanches are identified from the jerky behaviour of a macroscopic magnitude in response to a smooth driving stimulus. In this work we also compare the results obtained from the analysis of different magnitudes corresponding to the same avalanche process.

Chapter 3 reviews some fundamental ideas about seismology and revisits some well studied statistical laws, such as the scale-free nature of earthquakes, the production of aftershocks and the so-called Unified Scaling Law (USL) of waiting times. Such laws are illustrated with the analysis of some public-domain seismological catalogues [ANNS].

The emission of acoustic waves is one of the most characteristic trademarks of avalanches involving a change in volume or a displacement of a solid. The most relevant results presented in this work come from the analysis of the acoustic emission detected during the uniaxial compression of porous materials, performed with the experimental device introduced in Chapter 6. Chapter 7 reviews in detail the results obtained from a reconstructed silica (SiO2) glass with the commercial name Vycor® [Salje et al., 2011a, Baró et al., 2013a]. Chapter 8 completes the study with the results from tests on natural samples of goethite ore (FeO(OH)) [Salje et al., 2013] and artificially sintered alumina (Al2O3) ceramics with a controlled degree of porosity [Castillo-Villa et al., 2013].
Structural phase transitions called martensitic transformations [Bhattacharya, 2003] exhibit crackling noise [Vives and Planes, 1994, Pérez-Reche et al., 2001] that can be detected by means of high sensitivity calorimetry [Gallardo et al., 2010], acoustic emission [Mañosa et al., 1989], or magnetic signals in the case of magnetostructural transitions [Barkhausen, 1919]. Chapter 9 presents a review of the results obtained from the thermal structural phase transition of different materials: the acoustic emission from a single crystal and a polycrystalline sample of FePd [Gallardo et al., 2010, Baró and Vives, 2012]; the acoustic emission and calorimetry from a polycrystalline sample of CuZnAl [Baró et al., 2014]; and the acoustic emission and magnetic signals of a NiMnGa polycrystal [Baró et al., 2013b].

Finally, data are also obtained from the simulation of metastable dynamics in the 3D Gaussian Random Field Ising Model (RFIM), implemented following a robust computational technique [Kuntz et al., 1993]. The properties of the avalanches are extracted directly as an output of the code [Baró and Vives, 2012].


Chapter 2

Introduction to Temporal Point Processes

Several physical avalanche phenomena have long been known to display different sorts of temporal correlations [Omori, 1894, Bi et al., 1989, Paczuski et al., 1996, Boffetta et al., 1999, Beggs and Plenz, 2003, Lillo and Mantegna, 2003]. However, it is only in recent years that scientists in different areas have developed a strong interest in the study of correlations as a viable tool to forecast catastrophic events. Thanks to the latest experimental results, a new idea of universality has arisen, related to the empirical scaling laws found in the distribution of waiting times in point processes or quiet times in temporal sequences [Corral, 2009, Corral, 2014].

The study of avalanches as point processes allows a clear identification of temporal and spatio-temporal correlations, while keeping most of the information about each avalanche captured in a set of scalar values, called marks in a point process. This approach allows a deeper understanding of the microscopic properties of the system and the avalanche process, and is especially useful in the development of tools for hazard assessment and prediction of avalanches. However, the inherent ergodicity breaking and out-of-equilibrium dynamics hinder the development of a unified model depicting the stochastic process of avalanches within physical principles. The development of such a framework, comparable to the partition functions and energy functionals in the Landau theory of equilibrium statistical mechanics, will probably be the major milestone of avalanche dynamics in the following decades. For now, a solid understanding of point processes (p.p.) is necessary to extract information and avoid confusion about the results drawn from different analytical techniques regarding temporal correlations in avalanche dynamics.

This chapter defines the basic properties and concepts related to point processes. Ref. [Corral, 2009] addresses in more detail the topics introduced in this chapter, such as the implications of the USL for the p.p. hypothesis, associated with different avalanche processes exhibiting interesting temporal patterns.


Figure 2.1: Schematic identification of events in a point process from the thresholding of a temporal signal V(t). The duration Di and amplitude Ai of each event at time Si are stored as marks.

2.1

Stochastic Point Processes

One of the most usual frameworks to study the statistical laws of avalanche dynamics beyond bulk distributions is to consider the avalanches as an ordered sequence of events, i.e. a so-called point process. From a physical perspective, a point process (p.p.) can be understood as a sequence of instantaneous events on a temporal line. This approach is strictly valid whenever avalanches are instantaneous events, such as in the adiabatic limit considered in the athermal RFIM, as shown in Chapter 10. In such a situation, the times of occurrence —or the triggering fields in the case of the RFIM— define the instant Si associated to each avalanche i. In addition, several magnitudes can be recorded as marks of each event in the p.p., containing most of the original information for each avalanche.

On the contrary, most of the experimental results analysed in the current work are obtained from temporal sequences of acoustic, calorimetric or magnetic signals. An avalanche is defined each time (Si) the signal overpasses an imposed threshold, as shown in the schematic example represented in Fig. 2.1. Avalanches identified as extreme events by thresholding of the signal intensity (Chapters 6 and 9) have a characteristic duration and cannot be considered as instantaneous events. A given temporal signal can still be analysed as a point process whenever the typical time between two consecutive avalanches is much larger than the time during which the signal is maintained above the threshold; we can then consider the extreme events as instantaneous [Corral, 2014]. In our results we will typically find a broad distribution of interevent times, and we must be cautious in applying this hypothesis. However, as shown in this chapter, we can still treat the data as a point process as long as we include the duration of each event in the definition of the point process (see Fig. 2.1).

From a mathematical point of view, a p.p. ψ consists of a Borel set —our event sequence— defined in an R^{d+1} space, where d is the dimension of the metric space embedding the p.p. and the +1 corresponds to the temporal dimension. Thus, we can define both metric distances and time intervals between the events constituting the p.p. The temporal dimension of a p.p. displays special properties arising from its natural ordering. We define a temporal point process as any sequence of non-negative ordered random time events ψ = {Si}_{i∈N*} defined on the real time-line such that ∀i ∈ N*, Si < Si+1. In addition to the time ti and position xi, in marked point processes each event is associated with one or a whole set of variables or marks Ai, Bi, ..., usually scalars, that can be associated with a sense of size or magnitude, giving an additional dimension to the complexity of the stochastic process. In general, this set of variables has a stochastic relation with the previous history of the point process and creates what is usually referred to as a doubly stochastic point process. In this work we are interested in purely temporal marked point processes, since all our experimental results (Chapters 7, 8 and 9) contain some measurement of size and duration, and none of them contains spatial information. Most of the discussion in this chapter will neglect spatial dependencies, except for those processes traditionally defined as spatial point processes in the literature.

In order to understand the information extracted from a given point process ψ := {Si} we need to introduce a few concepts: the duration process of waiting times Wi := Si − Si−1; the counting process N(t); the historical sequence of the p.p. at a given time t, Ht := {Si < t}; and the intensity µ(t|Ht) that completely defines the p.p.
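The event identification of Fig. 2.1 can be sketched in a few lines (the threshold value, sampling step and synthetic signal below are illustrative, not from the experiments; the sketch assumes the signal starts below the threshold):

```python
import numpy as np

def extract_events(signal, dt, threshold):
    """Threshold a sampled signal V(t) and return a marked point process.

    Each event i is characterised by its starting time S_i, its
    duration D_i (time spent above threshold) and its amplitude A_i
    (maximum of the signal during the event), as in Fig. 2.1.
    """
    above = signal > threshold
    crossings = np.diff(above.astype(int))     # +1: upward crossing, -1: downward
    starts = np.where(crossings == 1)[0] + 1
    ends = np.where(crossings == -1)[0] + 1
    if len(ends) < len(starts):                # discard an event cut by the record end
        starts = starts[:len(ends)]
    S = starts * dt                            # occurrence times
    D = (ends - starts) * dt                   # durations
    A = np.array([signal[s:e].max() for s, e in zip(starts, ends)])
    return S, D, A

# Synthetic example: two bursts over a quiet baseline.
t = np.arange(0, 10, 0.01)
v = np.zeros_like(t)
v[100:150] = 2.0   # event 1
v[600:620] = 5.0   # event 2
S, D, A = extract_events(v, dt=0.01, threshold=1.0)
print(S, D, A)
```

Real acoustic-emission hardware applies additional rules (hit definition times, dead times) on top of this basic thresholding, but the marked point process {Si, Di, Ai} has the same structure.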

2.1.1

Intensity of a Point Process

Point processes are stochastic processes that can be understood in probabilistic terms. Hence, a p.p. is fully determined by an intensity parameter µ(t, x) that defines the rate of occurrence of an event in a given region (x, x + dx) and a given time interval (t, t + dt). In general, p.p. are non-Markovian [Häggström, 2002] and, thus, the probability of occurrence of an event at time t is given by the intensity µ(t, x|Ht)dtdΩ, depending explicitly on the history of the current p.p.: Ht = {Si < t}. Considering a purely temporal point process —without spatial information— the intensity of a p.p. is defined as [Daley and Vere-Jones, 2007]:

µ(t|Ht) = lim_{Δt→0} E[ Prob(event in (t, t + Δt] | Ht) / Δt ] = lim_{Δt→0} lim_{M→∞} m((t, t + Δt] | Ht) / (M(Ht) Δt)    (2.1)

where M(Ht) is an arbitrarily large number of repetitions of the process with the same given history Ht, and m((t, t + Δt]|Ht) is the number of these sequences that find an event inside the interval (t, t + Δt]. Since Ht is a stochastic process itself, generated from the same p.p., it is impractical to perform the evaluation of ⟨µ(t, Ht)⟩: we would need to find an infinite set of copies of ψ, identical up to t, to define µ, and we would need an infinite number of copies for each resulting history Ht+dt.
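The ensemble definition of eq. 2.1 can be illustrated in the simplest, history-independent case: a homogeneous Poisson process, where m((t, t + Δt]|Ht)/(MΔt) can be estimated directly by Monte Carlo (the rate and numerical parameters below are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Homogeneous Poisson process with rate mu = 2.0: the intensity does
# not depend on the history H_t, so the ensemble average of eq. 2.1
# reduces to counting, over M repetitions, how many sequences contain
# an event in (t, t + dt].
mu, dt = 2.0, 1e-3
M = 200_000                                # repetitions of the process
m = np.sum(rng.random(M) < mu * dt)        # sequences with an event in the interval
mu_hat = m / (M * dt)
print(mu_hat)  # fluctuates around 2.0
```

For a history-dependent process this recipe fails for exactly the reason stated above: each repetition would need to share the same realized history Ht.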

2.1.2

Waiting Times

Point processes can also be defined from their inter-event or waiting times, defined as Wi = Si − Si−1 for i > 1 and W1 = S1. Since the p.p. is ordered, each value Si ∈ ψ can be rewritten as Si = Σ_{j=1}^{i} Wj. The sequence of waiting times is sometimes called the duration process and is defined as {Wi} : ∀i ∈ N*, Wi = Si − Si−1.

Given eq. 2.1, it is possible to find the probability of waiting times for the event Si ∈ ψ such as P_{Wi}(Wi = δ|t, Ht)dt. Consider a discrete time-line with k intervals Δt passed since the last event. Thus, the variable δ → δj = jΔt and µ(t) = µ(ti−1 + jΔt) = µj. On the discrete time-line, the point process can be defined as a binomial process of Bernoulli trials with a positive-outcome probability p = µ(t)Δt = µjΔt and a negative-outcome probability 1 − p. Thus, the probability of waiting times can be simplified as the probability of not finding any event in the interval δ ∈ (0, kΔt) and finding at least one inside the interval (kΔt, (k + 1)Δt):

p_{Wi}(Wi ∈ [kΔt, (k + 1)Δt)) = µk Δt ∏_{j=0}^{k} [1 − µj Δt] = µ(t) e^{Σ_{j=0}^{k} ln[1 − µj Δt]} Δt    (2.2)

We can obtain the distribution of waiting times for a general point process on the real line by performing the continuum limit Δt → dt. By considering the first expansion term in ln(1 − µ(t)dt) = −µ(t)dt + O(dt²), and that any explicit dependence on time can be separated in terms of t and δ:

p_{Wi}(δ|t, Ht) dδ ≈ µ(t|Ht) e^{−∫_{S_{i−1}}^{t} µ(t′, Ht) dt′} dδ    (2.3)

This expression will be useful to develop a general method for sampling synthetic point processes as is shown in section 2.5.
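A minimal sketch of such a sampling method, using the discrete Bernoulli construction behind eq. 2.2 (here with a history-independent intensity for simplicity; eq. 2.3 then predicts exponential waiting times with mean 1/µ):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_point_process(mu_of_t, t_max, dt=1e-3):
    """Sample event times on a discrete time-line: in each interval of
    width dt an event occurs with probability mu(t)*dt, as in the
    Bernoulli construction of eq. 2.2 (history-independent intensity)."""
    t_grid = np.arange(0.0, t_max, dt)
    rates = np.asarray(mu_of_t(t_grid))
    hits = rng.random(t_grid.size) < rates * dt
    return t_grid[hits]

# Constant intensity mu = 2: the waiting times should be exponential
# with mean 1/mu = 0.5.
events = sample_point_process(lambda t: np.full_like(t, 2.0), t_max=2000.0)
waiting = np.diff(events)
print(events.size, waiting.mean())  # mean waiting time close to 0.5
```

The same loop accepts any time-dependent (or, with a small modification, history-dependent) intensity, at the price of choosing dt small enough that µ(t)dt ≪ 1.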

2.1.3

Counting Process

Any p.p. can also be interpreted as a counting process, by registering the number of events up to a given time t:

N(t) = Σ_{i∈N*} I(Si < t) = Σ_i Θ(t − Si) = ∫_0^t [ Σ_i δ(s − Si) ] ds    (2.4)
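As a sketch (names are illustrative), N(t) of eq. 2.4 can be computed directly from the event times:

```python
import numpy as np

def counting_process(event_times, t):
    """N(t) of eq. 2.4: the number of events with S_i < t
    (vectorised in t via a sorted search)."""
    event_times = np.sort(np.asarray(event_times))
    return np.searchsorted(event_times, t, side='left')

S = [0.4, 1.1, 1.15, 3.0]
print(counting_process(S, 2.0))                       # 3 events occurred before t = 2
print(counting_process(S, [0.0, 1.2, 5.0]).tolist())  # [0, 3, 4]
```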

We can consider again the same approach taken in eq. 2.3 as the continuum limit of a binomial process where a signal is obtained with probability µΔt. From the counting process we can redefine the intensity µ of a p.p. as the instantaneous probability of finding an event at a time t:

µ(t, Ht) = lim_{Δt→0} E[ (N(t + Δt|Ht+Δt) − N(t|Ht)) / Δt ] = lim_{Δt→0} E(r| t, Δt)    (2.5)

Notice that, in experimental data, the local rate of occurrence r(t) = [N(t + dt) − N(t)]/dt is stochastic given the intensity parameter µ. It is absolutely wrong to state that r(t) = µ(t); thus, beyond the coincidence of the first moments (E[µ] = E[r]), the distribution of r can be completely different from the distribution of µ obtained from temporal or spatial integration, as explained in more detail in section 5.3. Considering eq. 2.3 and the definition of waiting times, Si = Σ_{j=1}^{i} Wj.

CHAPTER 4. ESTIMATION OF POWER-LAW EXPONENTS

Power-law distributions with exponent γ > 1 cannot be normalized without a theoretical lower bound xmin in order to converge, usually related to the scale of the smallest microscopical element that constitutes the phenomenon. As a first naive example, the distribution of rainfall precipitation amounts was found to follow a universal power-law with exponent τ ≈ 1.2 [Peters et al., 2010]. However, a rainfall cannot be smaller than the volume of a single stable droplet nor larger than the total precipitable water of the atmosphere, imposing cutoffs on the distribution [Peters et al., 2010]. In the same way, the Gutenberg-Richter Law for earthquakes [Gutenberg and Richter, 1944], introduced in Chapter 3, states that the magnitude of earthquakes (m) is distributed according to P(m) ∝ exp(−bm), denoting a power-law in seismic moments with an exponent b quite stationary across the whole world [Wesnousky, 1999]. But an earthquake cannot release more elastic energy than the atomic bonds of a dense block of material that conforms the contact zone of the fault [Knopoff and Kagan, 1977, Kagan and Knopoff, 1984].

Apart from the physical constraints, we may find limitations inherent to data acquisition. In experiments one finds unavoidable noise deforming the power-law distribution in the small-event region and different kinds of instrument saturation in the large-event region. It is difficult to find instruments (amplifiers, voltmeters, etc.) allowing direct measurements with a range higher than 5 decades. In simulations one also finds unavoidable limitations: for instance, one has a minimum lattice parameter or particle size that alters the small-event distribution, and finite-size effects deforming the large events. Since simulations with more than 10^5 particles are usually too resource demanding, it is also difficult to find power-law distributions extending many decades in numerical works. Therefore, the determination of both the upper and lower boundaries of the power-law is as important as the power-law exponent itself. While the latter informs on the universal aspects of the physical principles governing the system, the former provides both a macroscopic view regarding the maximum scale reached by the phenomenon and a detail level where the origin of the scale-free behaviour can be identified from an atomistic description.

4.2

Maximum Likelihood Estimators of Power-Law Exponents

A few years ago most of the community adopted the maximum-likelihood (ML) estimation method [Goldstein et al., 2004, Bauke, 2007, Clauset et al., 2009] as the safest way to treat data. Using ML methods, these authors have nicely illustrated how to test the power-law character of data and how to obtain good estimations, error bars and goodness-of-fit for the critical exponents. This section itemizes the expressions that must be used to estimate power-law exponents by means of maximum likelihood, depending on the nature of the data analysed.

Let {Xi} (i = 1, ..., N) be a set of statistically independent measurements that we want to fit to a power-law probability density with exponent γ:

g(x)dx = x^{−γ} dx / ζ(γ)    (4.5)

where ζ(γ) denotes a normalization function that depends on the upper and lower physical limits xmin and xmax. We define the likelihood function as the probability that the set of measurements {Xi} could be obtained by the proposed model:

L(γ) = ∏_{i=1}^{N} g(Xi)    (4.6)

The maximum likelihood estimation method chooses the value γ = γ̂ that maximizes the likelihood function. The solution and performance of this method depend on the boundary conditions of the power-law through the normalization term ζ(γ; Γ) for an interval of integration Γ. In a general form, the relation to be solved for any ζ(γ; Γ) is the following:

0 = ∂ln L(γ; Γ)/∂γ |_{γ=γ̂} = − Σ_{X∈Γ} ln(Xi) − n ζ′(γ̂; Γ)/ζ(γ̂; Γ)    (4.7)


where n is the number of measurements that fall inside the interval Γ and γ̂ denotes the value of γ estimated by maximizing the likelihood. We use the nomenclature ζ′(γ̂; Γ) for the derivative ∂ζ(γ̂; Γ)/∂γ̂ of the normalization function. The expression of the normalization factor depends on the kind of data that we are analysing, as well as on the definition of the data range that is considered for the fitting.

4.2.1

Continuous Power-law Distribution

Some ideal cases, such as theoretical critical states [Sornette, 2004] and unbounded numerical implementations [Sethna, 2006], are expected to render perfect power-law distributions over an infinite range. Under these conditions the estimation of the power-law exponent can be solved analytically. Let us consider a power law limited only by a lower boundary xmin imposed for the convergence of the distribution. We can evaluate the exponent of the power law over an arbitrary domain Γ = (xlow, ∞), where xlow is supposed to be taken xlow > xmin. Then the normalization function reads:

ζ(γ) = ∫_{xlow}^{∞} x^{−γ} dx = xlow^{1−γ} / (γ − 1)    (4.8)

and the term to solve in the maximum likelihood expression 4.7 is:

ζ′(γ̂; Γ)/ζ(γ̂; Γ) = − ln(xlow) − 1/(γ̂ − 1)    (4.9)

Since the value of γ̂ only appears as an exponent in a single term, it can be obtained analytically by the expression:

γ̂ = 1 + n / Σ_{X∈Γ} ln(Xi / xlow)    (4.10)

Taking into account that the likelihood function can be approximated by a Gaussian close to its maximum, the error of the fit associated to this exponent can also be estimated as the standard deviation:

σ ∼ (−∂²ln L/∂γ²)^{−1/2} = (1/√n) [ ζ″(γ̂; Γ)/ζ(γ̂; Γ) − ζ′(γ̂; Γ)²/ζ(γ̂; Γ)² ]^{−1/2}    (4.11)

In this case the value can be found analytically in a first approach as σγ̂ = (γ̂ − 1)/√N + O(1/N) [Clauset et al., 2009]. This estimation method is only strictly valid when the data follow a pure power-law, unbounded for high values. However, most real data also exhibit some kind of upper boundary to the scale-free behaviour. In some cases this method will return good results, especially when the population of events near the upper cut-off is scarce: (xmax/xmin)^{γ−1} ≳ N. But as a general rule of thumb, it is better to include an upper limit in the evaluation.
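A minimal sketch of the estimator of eq. 4.10 and of the error bar σγ̂ = (γ̂ − 1)/√N, tested on synthetic data generated by inverse-transform sampling (function names and parameter values are ours, not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(42)

def ml_exponent_lower_bound(x, x_low):
    """ML estimator of eq. 4.10 for a power law bounded only from
    below, with the analytic error bar (gamma_hat - 1)/sqrt(n)."""
    x = np.asarray(x)
    x = x[x >= x_low]
    n = x.size
    gamma_hat = 1.0 + n / np.sum(np.log(x / x_low))
    sigma = (gamma_hat - 1.0) / np.sqrt(n)
    return gamma_hat, sigma

# Inverse-transform sampling of g(x) ~ x**(-gamma) on (x_min, inf):
# x = x_min * (1 - u)**(-1 / (gamma - 1)), with u uniform in [0, 1).
gamma_true, x_min, N = 2.5, 1.0, 100_000
u = rng.random(N)
x = x_min * (1.0 - u) ** (-1.0 / (gamma_true - 1.0))

gamma_hat, sigma = ml_exponent_lower_bound(x, x_min)
print(gamma_hat, sigma)  # gamma_hat close to 2.5, sigma about 0.005
```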


4.2.2

Continuous Distribution inside an Interval

In order to evaluate a power-law exponent when the distribution is bounded by an upper as well as a lower cut-off, we propose to perform ML estimations restricting the data within lower and higher arbitrary boundaries: Γ = [xlow, xhigh]. These cut-offs cannot be placed in an empty interval, and thus any significant information must lie between the minimum and maximum values of the data set, inside the interval (Xmin, Xmax). Then, the normalization factor will be a function of the cut-offs:

ζ(γ; xlow, xhigh) = ∫_{xlow}^{xhigh} x^{−γ} dx    (4.12)

The best estimation γ̂ is consequently found by solving the maximum likelihood condition (eq. 4.7), where:

ζ′(γ; Γ)/ζ(γ; Γ) = 1/(1 − γ) − (xhigh^{1−γ} ln xhigh − xlow^{1−γ} ln xlow) / (xhigh^{1−γ} − xlow^{1−γ})    (4.13)

In contrast to eq. 4.10, this relation cannot be solved analytically and must be entrusted to a numerical method. The false position method [Press et al., 1990] for finding the root of eq. 4.7 is a valid choice, as shown in section 4.2.6. Since all the dependence on γ is inside the term ζ′(γ; Γ)/ζ(γ; Γ), the sum depending on the data set and the iteration over estimated values γ̂ are independent computational processes, reducing the computation time. The estimation of the error bars for any kind of data fitted numerically is performed by following the same Gaussian approach used in eq. 4.11, taking advantage of the false position method. The details are shown in section 4.3.2.
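A sketch of this bounded-interval estimation (here solving eq. 4.7 with eq. 4.13 by simple bisection rather than the false position method used in the thesis; all parameter values are invented for illustration):

```python
import numpy as np

def zeta_ratio(g, x_low, x_high):
    """zeta'(g; Gamma) / zeta(g; Gamma), eq. 4.13."""
    a, b = x_low ** (1.0 - g), x_high ** (1.0 - g)
    return (1.0 / (1.0 - g)
            - (b * np.log(x_high) - a * np.log(x_low)) / (b - a))

def ml_exponent_interval(x, x_low, x_high, g_lo=1.01, g_hi=5.0):
    """Root of eq. 4.7 for data restricted to [x_low, x_high],
    bracketed in (g_lo, g_hi) and located by bisection."""
    x = np.asarray(x)
    x = x[(x >= x_low) & (x <= x_high)]
    n = x.size
    log_sum = np.sum(np.log(x))              # data-dependent term, computed once
    f = lambda g: -log_sum - n * zeta_ratio(g, x_low, x_high)
    f_lo = f(g_lo)
    for _ in range(100):                     # bisection on the bracket
        mid = 0.5 * (g_lo + g_hi)
        if f(mid) * f_lo > 0:
            g_lo = mid
        else:
            g_hi = mid
    return 0.5 * (g_lo + g_hi)

# Synthetic data: power law gamma = 1.8 truncated to [1, 100],
# sampled by inverting the truncated cumulative distribution.
rng = np.random.default_rng(7)
g, a, b = 1.8, 1.0, 100.0
u = rng.random(50_000)
x = (a ** (1 - g) + u * (b ** (1 - g) - a ** (1 - g))) ** (1.0 / (1 - g))

gamma_hat = ml_exponent_interval(x, a, b)
print(gamma_hat)  # close to 1.8
```

Note how the data enter only through log_sum, evaluated once, while each iteration only recomputes the analytic term ζ′/ζ, exactly the separation of computational processes described above.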

4.2.3

Discrete Distribution of Natural Numbers

We can also apply the Maximum Likelihood method when the data is given as the total number of occurrences f_k of natural numbers {k} ∈ ℕ. The corresponding power law, usually called Zeta or Zipf law [Zipf, 1945], reads:

p(k) = k^(−γ) / ζ(γ)   (4.14)

Following the same procedure as above, when restricting to data within an interval Γ = [klow, khigh], the normalization function is ζ(γ; klow, khigh) = Σ_{k=klow}^{khigh} k^(−γ) and the derivative of the likelihood function reads:

∂ln L/∂γ = − Σ_{k=klow}^{khigh} f_k ln(k) + n [ Σ_{k=klow}^{khigh} k^(−γ) ln(k) ] / [ Σ_{k=klow}^{khigh} k^(−γ) ]   (4.15)

This expression must also be solved numerically. Again we have a term −Σ_k f_k ln(k) that depends on the particular data set and an independent term Σ_k k^(−γ̂) ln(k) / Σ_k k^(−γ̂) containing all the dependence on the exponent to fit, γ̂, but in this case an additional summation must be performed over {k} for each iteration on the estimated value γ̂.
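For illustration, a brute-force sketch (a plain grid scan of the log-likelihood standing in for the false-position root finder; names and parameters are ours):

```python
import math
import random

def zipf_mle(freqs, klow, khigh, step=0.005, gmax=5.0):
    """Grid-search ML estimate of gamma for the truncated Zipf law (eqs. 4.14-4.15).
    freqs: dict {k: f_k} of occurrence counts for integers klow <= k <= khigh."""
    kvals = list(range(klow, khigh + 1))
    n = sum(freqs.get(k, 0) for k in kvals)
    s_log = sum(freqs.get(k, 0) * math.log(k) for k in kvals)  # data-dependent term
    def loglik(g):
        # ln L = -gamma * sum f_k ln k - n ln zeta(gamma; klow, khigh)
        return -g * s_log - n * math.log(sum(k ** -g for k in kvals))
    grid = [1.0 + step * i for i in range(1, int((gmax - 1.0) / step))]
    return max(grid, key=loglik)

# demo: integers drawn from p(k) ~ k^-2 on 1..1000
random.seed(3)
ks = list(range(1, 1001))
sample = random.choices(ks, weights=[k ** -2.0 for k in ks], k=30000)
freqs = {}
for k in sample:
    freqs[k] = freqs.get(k, 0) + 1
g_est = zipf_mle(freqs, 1, 1000)
```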

4.2. MAXIMUM LIKELIHOOD ESTIMATORS OF POWER-LAW EXPONENTS

4.2.4 Binned Data

Suppose that, instead of being natural numbers, the data set corresponds to real numbers, but its resolution is truncated to a certain level. The values {Xi} are given as discrete values {bk} when bk−1 ≤ Xi < bk. In this case we are unable to evaluate the complete distribution and we must adapt the ML method to work with the probabilities of falling inside each interval:

p(bk−1 ≤ Xi < bk) = ∫_{bk−1}^{bk} p(x) dx = ( bk−1^(1−γ) − bk^(1−γ) ) / ζ(γ)   (4.16)

in the case of a power-law distribution. In the general case of evaluation between an upper and a lower cut-off, blow = b0 and bhigh = bM, constructing a resulting set of M bins {b0, b1, ..., bM}: ζ(γ) = b0^(1−γ) − bM^(1−γ). Whereas considering only a lower cut-off: ζ(γ) = b0^(1−γ), which in some cases is solvable analytically, as shown in the following section. Notice that, since we work with binned data, we cannot select an xlow or xhigh that does not coincide with a border value of a bin. The likelihood function for data registered in bins reads:

ln L(γ; Γ) = Σ_{xj∈Γ} ln[ (1/ζ(γ; Γ)) Σ_{i∈Γ} ( bi^(1−γ) − bi+1^(1−γ) ) Θ(xj − bi) Θ(bi+1 − xj) ] = −n ln ζ(γ; Γ) + Σ_{i∈Γ} f_i ln( bi^(1−γ) − bi+1^(1−γ) )   (4.17)

where, as in the case of natural values, we are supplied with the number of appearances f_k of data points inside each bin [bk−1, bk). For klow > 0, the discrete normalization with only a lower cut-off corresponds to the Hurwitz zeta function ζ(γ, klow) = Σ_{j=0}^{∞} 1/(klow + j)^γ. Thus, ζ(γ) and all its derivatives are monotonous, and ζ′(γ)/ζ(γ) has a unique solution, following the same reasoning used for continuous data.
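The binned likelihood of eq. 4.17 can be sketched and maximized by a plain grid scan (our naming; the thesis solves the derivative with the false-position method instead):

```python
import math
import random

def binned_loglik(freqs, edges, g):
    """ln L of eq. 4.17: freqs[i] counts the data in bin [edges[i], edges[i+1])."""
    z = edges[0] ** (1.0 - g) - edges[-1] ** (1.0 - g)   # zeta(gamma), valid for gamma > 1
    n = sum(freqs)
    return sum(f * math.log(e0 ** (1.0 - g) - e1 ** (1.0 - g))
               for f, e0, e1 in zip(freqs, edges, edges[1:])) - n * math.log(z)

# demo: logarithmic bins over a pure power-law sample with gamma = 1.5
random.seed(4)
data = [(1.0 - random.random()) ** (-2.0) for _ in range(20000)]   # gamma = 1.5
edges = [2.0 ** i for i in range(21)]                              # bins [1,2), [2,4), ...
freqs = [sum(1 for x in data if e0 <= x < e1)
         for e0, e1 in zip(edges, edges[1:])]
grid = [1.01 + 0.005 * i for i in range(400)]
g_est = max(grid, key=lambda g: binned_loglik(freqs, edges, g))
```

Because the likelihood is conditional on the binned interval, the points falling beyond the last edge are dropped without biasing the estimate.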

4.3 Exponent Estimated as Function of Boundaries

As stated in section 4.1.4, because of physical limitations, even in the case of a power-law distribution, the scale-free hypothesis may only be valid inside an interval and the value of the fitted exponent may depend on the evaluation range. In order to find the range of validity of the power-law hypothesis we can take advantage of the power-law expressions exposed in the previous section. These distributions are only defined inside the evaluation range Γ. If the data is scale-free inside Γ, we can take the subset of data points inside this interval and estimate its exponent as a power-law truncated either at xlow (eqs. 4.10, 4.18, 4.20) or simultaneously at xlow and xhigh (eqs. 4.13, 4.15, 4.19). This methodology has been widely used [Aaron Clauset, 2009] in the case of pure power-laws with the analytic expression (eq. 4.10). The proposed technique consists in testing the robustness of the ML exponent when the lower cut-off xlow is swept from Xmin to Xmax. By this method one studies the deformation of the fitted exponent due to undesired effects in the region of small events. Typically, in perfect power-laws, the fitted exponent γ̂ increases quickly for low values of xlow, where the power-law is truncated by the empirical resolution, until it reaches a region that excludes the problems of missing data above a resolution limit x0. For higher values xlow > x0 the exponent stays at a stationary value γ̂(xlow) = γ̃ that corresponds to the best fit of the power-law exponent for x > x0 [Aaron Clauset, 2009]. At high values of xlow the fitted exponent increases again and diverges, because most of the enclosed data points are affected by the presence of a higher cut-off, typically well represented by eq. 4.33.
Since we expect the presence of upper cut-offs, or other anomalies in the region of large events, we perform ML estimations by restricting the original data within an imposed higher cut-off xhigh as well as an imposed lower cut-off xlow [Peters et al., 2010, Baró and Vives, 2012]. The expressions used to find the best exponents correspond to the solutions exposed in sections 4.2.2, 4.2.3, 4.2.4

and 4.2.5. The scan proposed for xlow must then be performed simultaneously with xhigh, obtaining a matrix of values γ̂(xlow, xhigh). The visualisation technique proposed by the author of this thesis is the Maximum Likelihood Exponent Map, consisting of a color map representing the values of the fitted matrix as well as its confidence intervals.
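A minimal sketch of such a double cut-off scan (a simple grid maximization of the truncated log-likelihood in place of the false-position solver; all names and the logarithmic cut-off grid are our illustrative choices):

```python
import math
import random

def gamma_hat(xs, xl, xh, grid):
    """ML exponent inside [xl, xh) via a direct scan of the truncated log-likelihood."""
    sel = [x for x in xs if xl <= x < xh]
    if len(sel) < 10:                        # too few points: no reliable fit
        return float("nan")
    s_log = sum(math.log(x) for x in sel)
    def loglik(g):
        z = (xh ** (1 - g) - xl ** (1 - g)) / (1 - g)   # normalization zeta(gamma)
        return -g * s_log - len(sel) * math.log(z)
    return max(grid, key=loglik)

def mlem_matrix(xs, decades=6, per_decade=2):
    """Matrix gamma_hat(x_low, x_high) over a logarithmic grid of cut-off pairs."""
    grid = [1.01 + 0.01 * i for i in range(300)]
    cuts = [10.0 ** (d / per_decade) for d in range(decades * per_decade + 1)]
    return {(lo, hi): gamma_hat(xs, lo, hi, grid)
            for lo in cuts for hi in cuts if hi > lo}

# demo: bounded power law with gamma = 1.75 on [1, 1e6), sampled by inversion
random.seed(5)
b = 1e6 ** (1 - 1.75)
xs = [(b - random.random() * (b - 1.0)) ** (1 / (1 - 1.75)) for _ in range(20000)]
m = mlem_matrix(xs)
```

Plotting this dictionary as a color map over (xhigh, xlow) gives the triangular map described below.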

4.3.1 Representation of Maximum Likelihood Exponent Maps

Maximum Likelihood Exponent Maps (MLEM) are a visual tool to help determine the exponent and the range of validity of a power-law hypothesis. Fig. 4.1 shows a schematic interpretation of the expected results, and a first set of illustrative examples is shown in Fig. 4.3. Their use is extended in this work to the evaluation of scale-free exponents in distributions of avalanches found in different phenomena. By scanning the values of γ̂(xhigh, xlow) we plot a color map in a space with x-axis = xhigh and y-axis = xlow. The color indicates the estimated fitted exponent at each point. We also plot isometric contour lines (white lines) intended to help discern the steepness of the exponent variations at a certain resolution level (usually Δγ̂ = 0.1). Since the xlow values are restricted to xlow < xhigh, the resulting map only covers a triangular section of the (xlow, xhigh) space. Scans of xlow at a fixed xhigh, similar to the technique used for xhigh = ∞, correspond to vertical slices in the MLEM, as shown in (i) in Fig. 4.1. We usually represent these slices as well to strengthen our claims of power-law behaviour, always taking a value xlow ≤ Xmax corresponding to the range of validity of the power-law. Diagonal slices ((ii) in Fig. 4.1) correspond to the evaluation of the exponent with a fixed ratio xlow = d·xhigh, where 0 < d < 1. The fitted exponent is unreliable for high values d ≲ 1 (iii), a consequence of the small size n of the restricted sample set, implying huge statistical fluctuations. This statistical error decreases with d, as we move away from the diagonal bisection of the map. At some point, the statistical error gets lower than the resolution represented by our contour lines. This threshold is represented in the maps as a thick black line, which usually oscillates around a fixed d except for high values of x, where it crosses the totality of the xhigh slices.
The lower the value of d, the more data is selected, until the limit d = Xmin/Xmax (iv) that corresponds to the estimation of γ in the range x ∈ (Xmin, Xmax), i.e. with the whole data set. The presence of scale-free behaviour is denoted by the existence of a flat plateau with a homogeneous color which is free of contour lines. The extent of this plateau indicates the range in which the exponent is independent of the cut-offs. In pure power-laws we expect this region to extend beyond the statistical error threshold denoted by the thick black line. However, the extension of a plateau does not guarantee the validity of the power-law hypothesis. In order to confirm the power-law hypothesis we have to check its statistical validity by means of the p-values of acceptance as


will be explained in section 4.3.3. We can also represent the p-value of the fitted exponents as a color map with a log-scale. We mark a red contour line with p-value = 0.5 (half of the trials will return a worse likelihood), denoting full acceptance, and p-value = 0.05 (one in twenty trials will return a worse likelihood) that we will associate with full rejection of the power-law hypothesis. Whenever a power-law hypothesis can be accepted, we can identify the bottom-right corner of the valid plateau with a stable exponent and high p-value, corresponding to a lower value of d, as the largest coverage range compatible with the power-law hypothesis with its fitted exponent.

Figure 4.1: Schematic structure of a typical MLEM: (i) points corresponding to a sweep of xlow at fixed xhigh; (ii) diagonal cuts xlow = d·xhigh at a fixed 0 < d < 1; (iii) region dominated by statistical fluctuations, returning unreliable exponents; (iv) exponent fitted with the whole data set.

4.3.2 Estimation of the Error Bars from Numerical Results

Instead of using the Mean Square Error (MSE) defined in eq. 4.11, we can numerically extract a more precise measurement of the error associated with the exponent when we impose a higher and lower

Figure 4.2: Graphical representation of the numerical evaluation of the confidence interval for the exponent estimated by Maximum Likelihood. The solid curve represents a hypothetical ∂L/∂γ, black dots denote the two closest evaluated exponents, and the dashed straight line corresponds to the fitted Gaussian.

boundary to the evaluation. The false position method provides a good numerical estimation of the second derivative of ln L at the maximum likelihood point. Being M and M−1 the last iterations performed by the root-finding routine enclosing the estimated value γ̂, the second derivative of ln L can be approximated by:

∂²ln L/∂γ²|max ≈ [ ∂ln L/∂γ|_M − ∂ln L/∂γ|_{M−1} ] / ( γ̂_M − γ̂_{M−1} )   (4.25)

The value of ln L(γ) around Lmax = L(γ̂) can be expanded as:

ln L(γ) = ln Lmax + (1/2) ∂²ln L/∂γ²|max (γ − γ̂)² + O[(γ − γ̂)³]   (4.26)

This expansion guarantees the normal behaviour of the Likelihood Function under general conditions (it was already ensured by the Central Limit Theorem when n is large enough) [Eadie et al.]. Close to Lmax we can approximate the likelihood function as:

L ∼ Lmax exp[ −(γ − γ̂)² / (2σγ̂²) ] ;  σγ̂ = [ −∂²ln L/∂γ²|max ]^(−1/2)   (4.27)

Thus, the second derivative provides an approximation to the standard deviation of the estimated exponent. Fig. 4.2 shows an alternative graphical justification of this method. Given that L is normally distributed, the values γ = γ̂ ± σ correspond to |∫_{γ̂}^{γ̂±σ} (∂ln L/∂γ) dγ| = 0.5. This integral is linearly approximated as the area enclosed in the right triangle of base σ and height σ·Δ(∂ln L/∂γ)/Δγ. Then: 0.5 = 0.5 σ² |∂²ln L/∂γ²|.
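This curvature estimate can be sketched with a central finite difference standing in for the regula-falsi iterates of eq. 4.25 (illustrative code, not the thesis implementation):

```python
import math

def sigma_from_curvature(loglik, g_hat, h=1e-3):
    """sigma of eq. 4.27: standard deviation of the estimated exponent from
    the curvature of ln L at its maximum, via a central finite difference."""
    d2 = (loglik(g_hat + h) - 2.0 * loglik(g_hat) + loglik(g_hat - h)) / h ** 2
    return (-d2) ** -0.5

# demo: pure power law with x_min = 1, where ln L(g) = -g*S + n*ln(g - 1)
# (S = sum of ln x). At the maximum g_hat = 1 + n/S the exact result is
# sigma = (g_hat - 1)/sqrt(n), matching the closed form of eq. 4.11.
n, g_hat = 10000, 2.5
S = n / (g_hat - 1.0)
sigma = sigma_from_curvature(lambda g: -g * S + n * math.log(g - 1.0), g_hat)
```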

4.3.3 Goodness of the Fit

The appearance of a plateau with a stable exponent is not enough to guarantee the existence of a real power law. In order to obtain a precise measurement of the validity of the power-law hypothesis inside a given interval we need to test its goodness of fit. The best suited technique is claimed to be the performance of Kolmogorov-Smirnov (KS) tests [Aaron Clauset, 2009, Deluca and Corral, 2013] over the empirical data against the proposed power-law with the fitted exponent. The KS test evaluates the maximum distance between the empirical cumulative distribution and its theoretical curve. In other words, we need to obtain the maximum distance de(γ̂e, Se(x)) = max |S(x; γ̂e) − Se(x)|, where γ̂e is the empirical exponent estimated inside a given interval and S(x; γ̂e) and Se(x) are the theoretical and empirical cumulative distributions obtained inside this interval. The detailed development of the expression in the case of a power-law hypothesis reads:

de(γ̂e, Se(x)) = max_x | [ (x/xlow)^(1−γ̂e) − (xhigh/xlow)^(1−γ̂e) ] / [ 1 − (xhigh/xlow)^(1−γ̂e) ] − n(x)/Ne |   (4.28)

where Ne is the total number of signals found inside the interval xlow ≤ x < xhigh and n(x) ∈ [0, Ne] is the number of signals i inside the interval x < Xi < xhigh. The KS statistic √Ne·de is expected to follow a Kolmogorov distribution [Wang et al., 2003]. The goodness of fit is then extracted from the 'likelihood' of the obtained value de(γ̂e, Se(x)). For a power-law sampled with Ne i.i.d. data points, the null hypothesis is accepted at a level pe = Prob(√Ne·de < K), given a Kolmogorov distribution. However, the validity of this test is restricted to the case of independence between γ̂ and Se(x). Since, in the MLEM, γ̂ is fitted from the same values generating Se(x), the goodness of the fit will always be better than when evaluated with an independent value of γ. In order to properly evaluate the p-value given that γ̂e = γ inside a given interval, one may ask: how will the same test perform on a set of synthetic data generated according to γe and evaluated with their own exponent γs inside the same

interval? Instead of providing an analytical solution (probably non-existent), some authors [Deluca and Corral, 2013] propose an unbiased improvement by performing a numerical sampling of power-law distributions with this γ̂e = γ. Then, the power-law exponent is estimated for each one of the synthetic sets, and again the maximum KS distance is computed as ds(γ̂s, Ss(x)), where now γ̂s is the exponent estimated for each set of variables. Then, the right estimator for the goodness-of-fit can be obtained from the numerical probability

Pr( de(γ̂e, Se(x), Ne) < ds(γ̂s, Ss(x), Ne) ) = ps

(4.29)

The authors of this Monte-Carlo method [Deluca and Corral, 2013] proposed to reject the power-law hypothesis with the fitted exponent when ps < 0.05, i.e. when fewer than 1 in 20 true power-laws would be rejected. The massive generation of synthetic power-laws and the maximum likelihood estimation of their power-law exponents makes the Monte-Carlo method computationally exhausting for a problem that already scales as N^(1+2/(γ−1)) (∼ N^3 − N^6). For this reason, we propose to keep using the false p-value pe, typically higher by a factor of less than 2 than the real p-value ps. As a safe threshold, we will accept the power-law hypothesis whenever pe > 0.5 and totally reject the hypothesis when pe < 0.05. Although the p-value interval with indetermination (neither accepting nor rejecting) is large, numerical results show that the two thresholds of acceptance and rejection tend to be very close under variations of xlow and xhigh.
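The KS distance of eq. 4.28 and the asymptotic ('false') p-value pe can be sketched as follows (our code; the Monte-Carlo ps of [Deluca and Corral, 2013] would resample and refit on top of this):

```python
import math
import random

def ks_pvalue(xs, g, xl, xh):
    """KS distance between data in [xl, xh) and a truncated power law with
    exponent g, plus the asymptotic Kolmogorov p-value (the false p_e)."""
    sel = sorted(x for x in xs if xl <= x < xh)
    ne = len(sel)
    r = (xh / xl) ** (1.0 - g)
    cdf = lambda x: (1.0 - (x / xl) ** (1.0 - g)) / (1.0 - r)   # truncated CDF
    d = max(max(abs(cdf(x) - i / ne), abs(cdf(x) - (i + 1) / ne))
            for i, x in enumerate(sel))
    lam = math.sqrt(ne) * d
    # asymptotic Kolmogorov survival function (truncated series)
    pe = 2.0 * sum((-1.0) ** (j - 1) * math.exp(-2.0 * j * j * lam * lam)
                   for j in range(1, 101))
    return d, min(max(pe, 0.0), 1.0)

# demo: sample from the true model and test the true vs. a wrong exponent
random.seed(6)
b = 1000.0 ** (1 - 1.55)
xs = [(b - random.random() * (b - 1.0)) ** (1 / (1 - 1.55)) for _ in range(5000)]
d_true, pe_true = ks_pvalue(xs, 1.55, 1.0, 1000.0)
d_bad, pe_bad = ks_pvalue(xs, 3.0, 1.0, 1000.0)
```

A badly wrong exponent yields a large distance and a vanishing p-value, while the true exponent stays well above the rejection threshold.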

4.4 Tests over Synthetic Data

There are two obvious reasons to reject the power-law hypothesis. The first one is the already explained physical and measurement limitations inherent in any empirical data. The second one is the possibility of the data not being sampled from a power-law distribution. There are several fat-tailed distributions that can be mistaken for a power-law. The following section explores the behaviour of the MLEM applied to some of these distributions. A good exercise before using the MLEM on empirical data drawn from unknown distributions is to test its performance over synthetic data sets sampled from known distribution functions. As mentioned in section 1.5, power-laws are natural in pure critical systems, but empirical data usually correspond to states close to criticality, where the scale-free behaviour is truncated by an exponential cut-off. Also, other families of fat-tailed distributions appear spontaneously in several natural phenomena. This section shows the expected MLEM whenever the data is distributed according to a log-normal distribution (section 4.4.3) or an extreme value distribution (section 4.4.4), in order to avoid such mistakes with empirical data.


The MLEM represented in these examples correspond to the fitting of power-laws within an interval of continuous and positively defined variables (eq. 4.13). We consider that the distribution inside an interval (xlow, xhigh) can be described as:

g(x|γ, xlow, xhigh) dx = (γ − 1) x^(−γ) / ( xlow^(1−γ) − xhigh^(1−γ) ) dx,   xlow < x < xhigh   (4.30)

Solutions of eq. 4.13 are found by means of the iterative false position method with initial exponent values γ̂0 = 0.5 and γ̂1 = 3.5. The precision demanded from the iterative routine is set to Δγ̂ = 0.005. All synthetic data samples are obtained using the RANECU generator [James, 1990] for uniform random numbers U[0, 1), transformed to the desired distribution by different methods. The inversion method [L'Ecuyer, 2012], used in most of the cases, is based on the fact that numbers sampled as X = F⁻¹(U[0, 1)), with F the cumulative distribution function, follow the probability density associated with F. A Box-Muller algorithm [Box and Muller, 1958] is used in the case of the log-normal distribution, in order to obtain Gaussian distributed numbers from the uniform numbers given by the RANECU generator. The population of each sampled distribution is selected from the typical number of data points registered in our experiments, N = 10^4 − 10^5. The higher the number of signals, the smoother the distribution is, and the MLEM (as well as the p-value maps) are more stable and able to discern the validity region of the power-law hypothesis beyond the statistical noise. Notice that we did not develop maximum likelihood tests to fit those distributions that depend on two or three parameters. We expect this study to help us, not only to estimate the power-law exponent and its range of applicability, but also as an exploratory data analysis to decide whether the power-law is the best fit or another distribution should be considered, before performing any comparative test [Akaike, 1974].

4.4.1 Synthetic Pure Power-Law

As a first test of the usefulness of the ML exponent maps, we performed an exhaustive study over different sets of synthetic data [Aaron Clauset, 2009] generated according to a power-law probability density within theoretical limits xmin ≤ x < xmax:

g(x) dx = (γ − 1) x^(−γ) / ( xmin^(1−γ) − xmax^(1−γ) ) dx   (4.31)

We used this data set to illustrate the confidence regions in the MLEM and to compare with the estimation method introduced by eq. 4.10, by taking the limit xhigh → ∞ that depends on the population of the data sample [Baró and Vives, 2012].


For the proposed probability density (eq. 4.31), the transformation function that converts uniform random numbers z into the desired ones by the inversion method is given by:

x = F⁻¹(z) = [ xmax^(1−γ) − z ( xmax^(1−γ) − xmin^(1−γ) ) ]^(1/(1−γ))   (4.32)
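A hedged sketch of eq. 4.32 (Python's Mersenne Twister standing in for RANECU; the function name is ours):

```python
import random

def sample_bounded_powerlaw(n, gamma, xmin, xmax, seed=0):
    """Inversion sampling of eq. 4.31:
    x = [xmax^(1-g) - z*(xmax^(1-g) - xmin^(1-g))]^(1/(1-g)), z ~ U[0, 1)."""
    rng = random.Random(seed)
    a, b = xmin ** (1.0 - gamma), xmax ** (1.0 - gamma)
    return [(b - rng.random() * (b - a)) ** (1.0 / (1.0 - gamma))
            for _ in range(n)]

# demo with the parameters used below: gamma = 1.55, xmin = 1, xmax = 1e8
xs = sample_bounded_powerlaw(10000, 1.55, 1.0, 1e8, seed=7)
```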

We performed several generations of i.i.d. data points according to eq. 4.31 with γ = 1.55, xmin = 1 and xmax = 10^8. Fig. 4.3 shows the resulting maps corresponding to four samples of increasing sizes N as indicated by the legends. The first observation is that the size N of the sample is crucial in order to obtain a clean plateau corresponding to the correct exponent. Only when N > 10^4 does the plateau extend over several 'square' decades. We have indicated the position of the maximum value Xmax obtained in the sample by a vertical black line. The variations of the fitted exponent observed from this line to the right are simply a consequence of changing the imposed upper cut-off xhigh in a region without available data due to the lack of statistics. This lack of data between Xmax and xhigh is relevant for the fitting method and, a priori, should be taken into account. Furthermore, the imposed cut-off xhigh can be moved above the theoretical limit xmax or set to ∞, recovering eq. 4.10. This is then equivalent to fitting the synthetic data (generated with a theoretical upper limit) with a model without such a limit. In the maps shown in Fig. 4.3 we allowed the imposed cut-off xhigh to grow above Xmax up to the theoretical cut-off xmax, but for real data this theoretical cut-off will be unknown. In order to illustrate the bias in the error estimation introduced by eq. 4.11, we marked the threshold where the standard deviation of the ML estimate is lower than ±0.05 by using both eq. 4.27 and eq. 4.11. The low-error regions correspond to the areas below the black and red curved lines, respectively. Both estimations of the error differ, even in the region of small events. The estimation that takes into account the existence of both a low and a high cut-off, obtained from eq. 4.27, gives a better separation between the regions with meandering contour levels and the smooth, flat plateau. The p-values of the estimated exponents are represented in Fig. 4.4.
Red lines mark the acceptance (pe > 0.5) and rejection (pe < 0.05) thresholds as stated in section 4.3.3. The black line representing the error σγ̂ > 0.1 is kept to illustrate the range of acceptance of the power-law hypothesis in all cases. The p-values stay above pe > 0.5 for most of the (xlow, xhigh) values, and only a few points fall to lower values, still much higher than the rejection threshold.

Figure 4.3: MLEM obtained from synthetic power-laws. The data sets correspond to a theoretical probability density function with exponent γ = 1.55 and bounds xmin = 1 and xmax = 10^8. The label N indicates the size of the sample set. The two curved continuous lines indicate the limits above which the standard deviation becomes greater than 0.05. The red (gray) dashed curved line corresponds to the estimation using eq. 4.11 and the black curved line to eq. 4.27. The vertical black straight line marks the highest value in the sample Xmax.

Fig. 4.5 shows the behaviour of the fitted exponent γ̂ as a function of the lower cut-off xlow for the same synthetic data samples as in Fig. 4.3. We have compared three different ML estimations corresponding to three choices of xhigh:

1. The green circles correspond to the ML estimation of the exponent obtained by fixing xhigh = xmax = 10^8, the theoretical upper limit. This is nothing more than the profile of the ML exponent map along the vertical right border at 10^8 in Fig. 4.3. This would be the correct way to perform the ML estimation if one could have a priori knowledge of the true upper cut-off.

2. The blue triangles (higher symbols) correspond to the ML estimation obtained by neglecting the existence of an upper limit in the power-law distribution, i.e. fixing xhigh = ∞. This precisely corresponds to the method proposed in Ref. [Aaron Clauset, 2009].

3. The red squares (lower symbols) correspond to the ML estimation obtained by fixing the higher cut-off of the data to the maximum value found in the sample, xhigh = Xmax. It corresponds to the profile of the map along the vertical black lines in Fig. 4.3.

Figure 4.4: Maps showing the p-values obtained by the KS test explained in section 4.3.3 over the same data sets shown in Fig. 4.3. Red lines correspond to the thresholds 0.05 and 0.5 used in this work for full rejection and full acceptance, respectively. Black lines show the fluctuation threshold.

For small N, Method 3 underestimates the exponent because it neglects the fact that no data has been observed between Xmax and xmax. However, for N > 10^4, one can see that Method 3 renders exponents that are very similar to the correct ones (Method 1). For N = 10^6 Method 3 is clearly better than Method 2, which neglects the existence of an upper boundary to the distribution: it can be seen that the two coinciding estimation methods (red squares and green circles) exhibit a larger plateau (by more than 1 decade) than the blue triangles. Therefore, it does not make much sense to increase the higher cut-off xhigh above the maximum value in the data sample Xmax unless we have independent information on the theoretical limit xmax. From now on, in the maps presented in this thesis we will scan the higher and lower cut-offs only within the population limits Xmin and Xmax. Thus, the vertical right border of the maps will coincide with the vertical black line plotted in Fig. 4.3. In addition, we will use the error estimation proposed by eq. 4.27 and plot, on the maps, the black thick line separating the region with error bars greater

than ±0.05.

Figure 4.5: Comparison of the behaviour of the exponent fitted to synthetic data as a function of the lower cut-off xlow when xhigh is fixed. Red squares correspond to xhigh = xmax = 10^8 (the theoretical limit of the synthetic data), green circles to xhigh = Xmax (the maximum value found in the sample) and blue triangles to xhigh = ∞.

4.4.2 Synthetic Power-Law with Exponential Cut-off

Empirical data and numerical results of systems close to a critical state often display a power-law distribution with an upper cut-off that can be well fitted by an exponential decay. We can safely state that this kind of distribution is the first candidate to depict any distribution of scale-free avalanches. The exponential decay imposes a characteristic scale to the distribution, controlled by a cut-off value xc. For x ≪ xc the distribution follows a power-law, whereas for x ≫ xc: g(x) ∼ exp(−x/xc). The joint pdf reads:

g(x|xmin, xc) dx = (γ − 1) ( x^(−γ) / xmin^(1−γ) ) exp(−x/xc) dx   (4.33)

We can test the typical response of the MLEM to this kind of data by sampling a set of synthetic power-laws with exponential cut-off. In order to sample the random variables we used a combination of inverse sampling and the rejection method: we sample power-law variables {Xi} according to eq. 4.32 and then accept each of them with probability p = exp(−Xi/xc). The resulting distribution, shown in Fig. 4.6.a, fits within the expected precision the analytical expression in eq. 4.33.
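A sketch of this acceptance-rejection scheme (names ours; for simplicity this sketch thins pure power-law draws, whereas the thesis starts from the bounded law of eq. 4.32):

```python
import math
import random

def sample_pl_expcut(n, gamma, xmin, xc, seed=0):
    """Power law with exponential cut-off (eq. 4.33): draw pure power-law
    deviates by inversion and accept each with probability exp(-x/xc)."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        x = xmin * (1.0 - rng.random()) ** (-1.0 / (gamma - 1.0))  # pure power law
        if rng.random() < math.exp(-x / xc):                       # thinning factor
            out.append(x)
    return out

# demo with the parameters of Fig. 4.6: gamma = 1.55, xmin = 1, xc = 1e4
xs = sample_pl_expcut(5000, 1.55, 1.0, 1e4, seed=11)
```

The acceptance probability is at most exp(−xmin/xc) < 1, so the kept draws follow a density proportional to x^(−γ) exp(−x/xc), as required.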

Figure 4.6: Numerical results of MLEM applied over a synthetic set of variables distributed according to a power-law with an exponential cut-off (eq. 4.33) with γ = 1.55, xc = 10^4, xmin = 1 and N = 10^5 random numbers. (a) Analytical probability density function (red), numerical histogram of frequencies (green) and power-law approximating the function in the limit x ≪ xc (blue). (b) xlow sweep of the estimated γ̂(xlow) according to eq. 4.13 at the fixed xhigh corresponding to the best fit according to section 4.3.3, selected from the MLEM. (c) MLEM of the estimated γ̂(xlow, xhigh). (d) p-value map for the estimation of γ = γ̂.

Fig. 4.6.c shows the MLEM extracted from the synthetic data. The imposed power-law value stands for most of the evaluation range except for high values of xlow. However, the p-values represented in Fig. 4.6.d reject the power-law hypothesis as soon as the upper boundary xhigh ∼ 0.1 xc = 1000. The sweep of xlow for this upper cut-off is shown in Fig. 4.6.b, showing that,

indeed, the exponent value is stable around γ = 1.55 inside this range.

4.4.3 Synthetic Log-normal Distribution

Log-normal distributions are generated as a Gaussian distribution of a logarithmic variable, thus:

g(x|µ, σ) = ( 1 / (x √(2π) σ) ) exp( −(ln x − µ)² / (2σ²) )   (4.34)

The synthetic data has been generated from a Box-Muller algorithm returning Gaussian numbers. Afterwards, the inversion method is applied through the logarithmic relation X = exp(µ + σ·Gauss(0, 1)). Among the distributions analysed in this chapter, the log-normal distribution is the only one that never tends towards a power-law behaviour. However, it can easily be mistaken for one, especially when the values of σ and µ are similar and large compared with the characteristic ln x. Under these circumstances, it is possible to observe a false power law for many decades with an exponent close to one but slowly growing. Close to the mean value, in the limit |ln x − µ| ≪ √2 σ, the distribution tends towards g(x|µ, σ) → 1/(x √(2π) σ). Trying to fit a power-law to this data may return an exponent γ → 1.
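A sketch of this generation step (Box-Muller normals exponentiated into log-normal deviates; the function name is ours):

```python
import math
import random

def sample_lognormal(n, mu, sigma, seed=0):
    """X = exp(mu + sigma*Z), with Z standard normal obtained from the
    Box-Muller transform of two uniform deviates."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(-2.0 * math.log(1.0 - u1))   # 1 - u1 avoids log(0)
        out.append(math.exp(mu + sigma * r * math.cos(2.0 * math.pi * u2)))
        out.append(math.exp(mu + sigma * r * math.sin(2.0 * math.pi * u2)))
    return out[:n]

# demo: standard log-normal (mu = 0, sigma = 1)
xs = sample_lognormal(20000, 0.0, 1.0, seed=12)
```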

In the limit ln x − µ ≫ 2σ², instead of a power-law, the distribution can be approximated by g(x) ∼ ( 1 / (x √(2π) σ) ) x^(−ln x/(2σ²)), and the effective exponent will increase with the order of magnitude of x (with the geometric growth of x). This behaviour can be observed in the MLEM test performed over synthetic data represented in Fig. 4.7. The data generated corresponds to a log-normal with parameters µ = σ = 10. In all the domain the fitted exponent does not depart much from γ = 1, plotted as a black dotted line in Fig. 4.7.a, although one observes the growing trend described above, also perceivable in Fig. 4.7.c. Cuts sweeping xlow at different fixed xhigh show a good plateau denoting a good agreement with the power-law hypothesis, but the exponent of the plateau will increase linearly with the order of magnitude of xhigh. Notice that, no matter how slow the increase of the exponent is, a log-normal distribution is only able to fool the p-value for the power-law hypothesis within a certain range of the ratio xhigh/xlow. In the data analysed, for xhigh/xlow > 10³, the p-value is able to reject the power-law hypothesis. The log-normal distribution is one of the most ubiquitous in complex systems involving both linear and non-linear dynamics in geology, biology, medical and environmental sciences and even linguistics [Limpert et al., 2001]. As a straightforward example, the log-normal distribution serves as an attractor to any multiplicative process: Xj = Fj·Xj−1. Since it can be rewritten as ln Xj = ln X0 + Σ_{k=1}^{j} ln Fk, if the central limit theorem is satisfied for ln Fk, then Σ_{k=1}^{j} ln Fk tends to a normal distribution. However, the CLT is only valid within 1σ, and the deductions from multiplicative

Figure 4.7: Numerical results over a synthetic set of variables distributed according to a log-normal distribution (eq. 4.34). (a) Analytical probability density function (red), numerical histogram of frequencies (green) and power-law approximating the function in the limit of x ≪ xc. (b) xlow sweep of the estimated γ̂(xlow) according to eq. 4.13 at a fixed xhigh = 200, corresponding to the best fit according to section 4.3.3, selected from the MLEM. (c) MLEM of the estimated γ̂(xlow, xhigh). (d) p-value map for the estimation of γ = γ̂.

processes are sometimes too naive and lead to an underestimation of extreme events. A clear example of such an unfortunate assumption is the failure of the overused risk-management solution based on market returns developed by Black, Scholes and Merton [Black and Scholes, 1973, Mandelbrot, 1997, Mandelbrot, 2008].

4.4. TESTS OVER SYNTHETIC DATA

4.4.4 Synthetic Extreme Value Distributions

Extreme Value Distributions (EVD) are another family of fat-tailed distributions usually found in finance [Frechet, 1927, Bensalah, 2000, Finkenstadt and Rootzén, 2003], telecommunication engineering [Finkenstadt and Rootzén, 2003], material sciences [Weibull, 1939], reliability theory, and quality control [Eliazar, 2012]. Similarly to the normal and log-normal distributions, EVD have their own central limit theorem and are stable distributions under normalization, i.e. when a transformation of the kind x′ = α max(xi) + β is applied [Fisher and Tippett, 1928, Gnedenko, 1943]. The maximum of a sample of i.i.d. random variables can only converge to an extreme value distribution after performing the right change of variables. The general formula for the extreme value cumulative distribution reads

F(x|µ, σ, ξ) = exp{ −[1 + ξ(x−µ)/σ]^(−1/ξ) }    (4.35)

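Eq. 4.35 can be inverted in closed form, which gives a direct way to generate synthetic samples from any of the three families discussed below; a minimal sketch (the function name and parameter choices are ours):

```python
import math, random

def sample_gev(n, mu, sigma, xi, seed=None):
    """Inverse-transform sampling of the extreme value CDF (eq. 4.35):
    solving F(x) = u gives x = mu + (sigma/xi)*((-ln u)**(-xi) - 1) for xi != 0,
    and the Gumbel limit x = mu - sigma*ln(-ln u) as xi -> 0."""
    rnd = random.Random(seed)
    xs = []
    for _ in range(n):
        u = rnd.random()
        if abs(xi) < 1e-12:
            xs.append(mu - sigma * math.log(-math.log(u)))
        else:
            xs.append(mu + sigma / xi * ((-math.log(u)) ** (-xi) - 1.0))
    return xs

frechet = sample_gev(10000, mu=0.0, sigma=1.0, xi=0.5, seed=2)   # xi > 0
gumbel  = sample_gev(10000, mu=0.0, sigma=1.0, xi=0.0, seed=2)   # xi -> 0
```

For ξ = −1/α < 0 this produces Weibull-type samples (bounded above by µ + σ/|ξ|), for ξ = 1/α > 0 Fréchet-type samples (bounded below by µ − σ/ξ), and ξ → 0 recovers the Gumbel case.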
As an example, let's consider the distribution of the maximum value xc obtained from a power law with pdf g(x; γ) and cumulative distribution function F(x; γ):

pdf(xc; γ, N) = N g(xc; γ) [F(xc; γ)]^(N−1) = N (γ−1)/xmin (xc/xmin)^(−γ) [1 − (xc/xmin)^(1−γ)]^(N−1)    (4.36)

This expression is already very similar to a Fréchet distribution, but not exactly the same. Instead, the last factor is the first term of the expansion of an exponential function:

pdf(xc; N) = N (γ−1)/xmin (xc/xmin)^(−γ) exp{ −(N−1)[ (xc/xmin)^(1−γ) + O((xc/xmin)^(2−2γ)) ] }    (4.37)

The iterative maximization process will indeed converge towards a Fréchet distribution. All the possible combinations of parameters defining an extreme value distribution can be sampled with the inversion method. In this section the performance of the MLEM is tested for the three different families of extreme value distributions, corresponding to the cases ξ < 0, ξ > 0 and ξ → 0.

Synthetic Weibull Distribution

The Weibull distribution corresponds to the family of extreme value distributions with exponents ξ = −1/α < 0. The Weibull distribution [Weibull, 1939] appears naturally as the stable distribution of minimum values obtained in a set of i.i.d. variables from any distribution with well defined moments. Taking µ = 0 for the sake of simplicity, the pdf of a Weibull distribution reads:

g(x|σ, α) = (α/σ) (x/σ)^(α−1) e^(−(x/σ)^α)    (4.38)

for x > 0 and 0 otherwise.


The complementary cumulative distribution function can be used to obtain random numbers sampled according to a Weibull distribution via the inverse sampling method. The CCDF reads:

F(x|σ, α) = e^(−(x/σ)^α)    (4.39)

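A minimal sketch of this inversion (parameters chosen to match the test described below, α = 0.55 and σ = 1000): setting the CCDF of eq. 4.39 equal to a uniform number u and solving for x gives x = σ(−ln u)^(1/α).

```python
import numpy as np

def sample_weibull(n, sigma, alpha, seed=None):
    """Inverse sampling from the CCDF exp(-(x/sigma)**alpha) (eq. 4.39):
    exp(-(x/sigma)**alpha) = u  =>  x = sigma * (-ln u)**(1/alpha)."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    return sigma * (-np.log(u)) ** (1.0 / alpha)

x = sample_weibull(10000, sigma=1000.0, alpha=0.55, seed=0)
```

As a sanity check, the sample median should sit near σ(ln 2)^(1/α) ≈ 514 for these parameter values.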
In the case of the Weibull distribution, the power-law limit behaviour appears for low values of x as lim_{x→0} g(x|σ, α) = (α/σ)(x/σ)^(α−1)[1 + O((x/σ)^α)]. Under these conditions γ = 1 − α. Since α is positive defined, it is not possible to obtain a false plateau for low values of x with an exponent γ > 1. Fig. 4.8 shows the results of the power-law fitting over a Weibull distribution with α = 0.55, σ = 1000 and N = 10000 data points. An artificial cutoff is imposed at xmin = 10^(−3) for better visualization. The Weibull distribution can be interpreted as a specific kind of power law with an upper cut-off similar to the exponential one (eq. 4.33), where the decay is tied to the exponent and γ < 1. For α > 1 the distribution never behaves as a power-law decay in any region. For α < 1, this distribution fits the power-law hypothesis well within a given range, and may return clear power-law exponents up to xhigh = 0.1σ. Above this threshold the low p-value is able to reject the power law. Fig. 4.8.b shows a sweep over γ̂(xlow) at the maximum range xhigh = 0.1σ still compatible with the power-law hypothesis. The fitted exponent tends to γ̂ = 1 − α, represented in both Fig. 4.8.a and Fig. 4.8.b as a black dashed line.

Synthetic Fréchet Distribution

Given the case of an extreme value distribution with ξ = 1/α > 0, we obtain a Fréchet distribution. This is also a stable distribution, but in this case it serves as an attractor for maximum values: the maximum of a set of Fréchet-distributed variables follows again a Fréchet distribution. This distribution has been suggested as the most appropriate for fat-tailed financial data [Mandelbrot, 2008]. The Fréchet cumulative distribution function reads:

F(x|µ, σ, α) = e^(−((x−µ)/σ)^(−α))    (4.40)

and the pdf can be simplified as:

g(x|µ, σ, α) = (α/σ) ((x−µ)/σ)^(−1−α) e^(−((x−µ)/σ)^(−α))    (4.41)

Contrary to the Weibull distribution, the power-law behaviour appears as a limit for high values of x as lim_{x→∞} g(x|σ, α) = (α/σ)((x−µ)/σ)^(−1−α)[1 + O((σ/x)^α)]. Under these conditions γ = 1 + α and, since α > 0, we expect an exponent γ > 1 towards which the fit converges for high values of x. We choose α = 0.55 to compare with a power law with exponent γ = 1.55. Fig. 4.9 shows the resulting MLEM obtained from a Fréchet distribution with α = 0.55, σ = 10000 and N = 10000. The p-value accepts the power-law behaviour as a valid approximation between xlow = 0.1σ and xhigh = Xmax. Only for low values of x is the power-law behaviour distorted.

Figure 4.8: Numerical results over a synthetic set of variables distributed according to a Weibull distribution (eq. 4.38). (a) Analytical probability density function (red), numerical histogram of frequencies (green) and the power law approximating the function in the limit x ≪ σ. (b) xlow sweep of the estimated γ̂(xlow) according to eq. 4.13 at a fixed xhigh = 100, corresponding to the best fit selected from the MLEM according to section 4.3.3. (c) MLEM of the estimated γ̂(xlow, xhigh). (d) p-value map for the estimation of γ = γ̂.

Notice that this result resembles the effect of a smooth lower cut-off, characteristic of the device resolution in many experimental data. A criterion for considering the Fréchet distribution a valid model for the obtained data is to certify that this departure from the power law lies clearly above any possible effect caused by the resolution of the device. Most of our experimental data show this effect, but clearly in the region expected from the resolution. Thus, even though the MLEM resemble those of a Fréchet distribution, we cannot consider it a valid hypothesis.
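The Fréchet tail exponent can be checked numerically. A minimal sketch (not the full bounded-range MLEM of this chapter; we use instead the simple unbounded-range maximum-likelihood estimator, often called the Hill estimator): sampling eq. 4.40 by inversion and fitting the tail above xlow = 3σ returns an exponent slightly below γ = 1 + α, since the Fréchet is only asymptotically a power law.

```python
import math, random

# Fréchet sample by inversion of eq. 4.40: x = mu + sigma * (-ln u)**(-1/alpha).
random.seed(4)
mu, sigma, alpha = 0.0, 1.0, 0.55
data = [mu + sigma * (-math.log(random.random())) ** (-1.0 / alpha)
        for _ in range(50000)]

# Hill estimator over the tail x >= x_low: gamma_hat = 1 + n / sum(ln(x_i/x_low)).
x_low = 3.0 * sigma
tail = [x for x in data if x >= x_low]
gamma_hat = 1.0 + len(tail) / sum(math.log(x / x_low) for x in tail)
```

For these parameters the estimate comes out near 1.5 rather than at the asymptotic 1 + α = 1.55, illustrating the slow convergence of the effective exponent discussed in the text.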

Figure 4.9: Numerical results over a synthetic set of variables distributed according to a Fréchet distribution (eq. 4.41). (a) Analytical probability density function (red), numerical histogram of frequencies (green) and the power law approximating the function in the limit x ≫ σ. (b) xlow sweep of the estimated γ̂(xlow) according to eq. 4.13 at a fixed xhigh corresponding to the best fit selected from the MLEM according to section 4.3.3. (c) MLEM of the estimated γ̂(xlow, xhigh). (d) p-value map for the estimation of γ = γ̂.

Gumbel Distribution

The Gumbel distribution, also known as the log-Weibull distribution, appears as the special case of extreme value distribution where ξ → 0. Gumbel distributions can be obtained from the minimum of an underlying sample following a normal or an exponential distribution. Considering a generic µ > 0, the pdf of the Gumbel distribution reads:


g(x; µ, σ) = (1/σ) e^((µ−x)/σ) e^(−exp((µ−x)/σ))    (4.42)

and the cumulative distribution function:

F(x; µ, σ) = e^(−exp(−(x−µ)/σ))    (4.43)

For x ≫ µ this distribution decreases exponentially, as the Weibull distribution: g(x; µ, σ) → (1/σ) e^(−(x−µ)/σ), while for |x − µ| ≪ σ the distribution is indistinguishable from a Gaussian distribution: g(x; µ, σ) → (1/σ) e^(−1 − (1/2)((x−µ)/σ)²).

The Gumbel distribution is the distribution of the logarithm of a Weibull variable. Hence it never really tends to a power law. At best it can return a uniform distribution g(x) ∼ 1/(eσ) for values of |x − µ| ≪ 2σ. By taking a value of σ much greater than µ this behaviour can extend over a wide range where the p-values will accept a power-law behaviour, but the fitted exponent will be close to 0. Fig. 4.10 shows the results of a power-law estimation over a Gumbel distribution with µ = 1.5, σ = 3.0 and N = 10000. The parameters have been selected such that σ > µ, with σ and µ still small enough to allow many data points inside the exponential regime, up to one additional order of magnitude higher, resembling a fat-tailed distribution. Fig. 4.10.d shows how the estimation accepts the power-law hypothesis for x ≪ σ while the fitted value is close to 0, as represented in Fig. 4.10.b. The straight line in Fig. 4.10.a corresponds to an exponential decay. Given that most of the empirical exponents found in this work take values between 1 and 2, the only fat-tailed non-power-law distributions that may return plateaus compatible with the power-law hypothesis are the log-normal distribution, the Fréchet distribution and the power law with exponential cut-off.

4.5 Exponent Maps of Gutenberg-Richter Exponents

As an illustrative example of the performance of the MLEM over an experimental power-law distribution, we present the results of the tests applied to the Gutenberg-Richter exponent introduced in section 3.2.1, using the three seismological catalogues generated by three different driving forces (section 3.2). The distribution of energies for the three catalogues is shown in Fig. 4.11-Left-Top. An exploratory analysis suggests the existence of a power-law behaviour with an exponent close to the expected ε = 1.67. Since each process is affected by different limitations, the range of acceptance of the power-law behaviour differs between catalogues. Catalogue 1, corresponding to the subduction process in the Japan trench, contains stronger earthquakes, but its distribution is truncated for low values of energy.
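Before turning to the catalogues, the core of the fitting step can be sketched in a few lines. We read eq. 4.13 as the standard maximum-likelihood estimator of a power-law exponent restricted to a window [xlow, xhigh]; the crude grid search below is a simplified stand-in for the full MLEM machinery (the MLEM itself repeats this fit over a grid of windows):

```python
import math, random

def fit_gamma(data, xlow, xhigh):
    """MLE of gamma for p(x) proportional to x**(-gamma) on [xlow, xhigh]
    (our reading of eq. 4.13), maximized over a fine grid of exponents."""
    xs = [x for x in data if xlow <= x <= xhigh]
    n, S = len(xs), sum(math.log(x) for x in xs)

    def loglik(g):
        if abs(g - 1.0) < 1e-9:
            log_norm = math.log(math.log(xhigh / xlow))
        else:
            log_norm = math.log((xhigh ** (1 - g) - xlow ** (1 - g)) / (1 - g))
        return -n * log_norm - g * S

    grid = [1.0 + 0.001 * i for i in range(1, 2001)]   # gamma in (1.001, 3.0]
    return max(grid, key=loglik)

# Synthetic pure power law with gamma = 1.67 sampled by inversion:
random.seed(1)
gamma_true, xmin = 1.67, 1.0
data = [xmin * random.random() ** (-1.0 / (gamma_true - 1.0)) for _ in range(20000)]
gamma_hat = fit_gamma(data, xlow=xmin, xhigh=max(data))
```

On clean synthetic data the estimator recovers the imposed exponent to within the statistical error; the maps of this section show what happens when the same fit is scanned over (Elow, Ehigh) windows of real catalogues.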

Figure 4.10: Numerical results over a synthetic set of variables distributed according to a Gumbel distribution (eq. 4.42). (a) Analytical probability density function (red), numerical histogram of frequencies (green) and the power law approximating the function in the limit x ≫ σ (blue). (b) xlow sweep of the estimated γ̂(xlow) according to eq. 4.13 at a fixed xhigh corresponding to the best fit selected from the MLEM according to section 4.3.3. (c) MLEM of the estimated γ̂(xlow, xhigh). (d) p-value map for the estimation of γ = γ̂.

In Fig. 4.11-Right the GR law is represented as an MLEM, and in Fig. 4.11-Left-Bottom the estimation is restricted to Ehigh = Emax. We have kept the same energy scales for the three catalogues in order to clearly reveal the different sizes N of the statistical samples. Fig. 4.11-Left-Bottom shows the behaviour of the fitted exponent ε as a function of the lower cut-off Elow when the higher cut-off Ehigh is fixed to the maximum value in the sample set. The first observation is that there is an almost perfect plateau in the exponent fitted for catalogue 2 (middle diagram), with a value close to the expected theoretical


Figure 4.11: Left-Top: Distribution of earthquake energies P(E)dE from the catalogues of Japan (N = 14509), California-Nevada (N = 453372) and El Hierro (N = 12158). Right: MLEM for the power-law exponent ε according to the distribution in eq. 4.11. White contour lines are separated by 0.1 units. The region above the black line corresponds to estimated statistical error bars greater than ±0.05. Left-Bottom: Dependence of the exponent on the lower cut-off Elow for a fixed higher cut-off equal to the maximum value in the sample set. The σ-confidence interval is plotted as a broad lighter stripe. The horizontal dashed lines show the theoretically expected value ε = 1.66 with an error bar of ±0.10.

exponent ε = 1.67. Despite some deformation, plateaus are also observed for the other two sets: Japan (top diagram) and El Hierro (bottom diagram). A second important observation is the deformation of the plateau towards low exponent values for the Japan data in the region Elow < 10^10 J. A plausible explanation for this deformation is that the statistics for small earthquakes in the Japan catalogue are incomplete, or that the interplay of different seismogenic mechanisms distorts the scale-free behaviour in some magnitude range. The same tendency can be observed in California-Nevada, but for much lower minimum cut-offs (Elow < 10^8 J), almost coinciding with the lower border of the map in Fig. 4.11-Right (middle diagram). Surprisingly, the fitted exponent for the Japan data oscillates slightly, as can be seen in Fig. 4.11-Right as contour lines with a parabolic horizontal shape starting from the right border, as well as in Fig. 4.11-Left-Bottom between Elow ∼ 10^11 − 10^12 J. We are unable to provide an explanation for this behaviour, but it could be caused by the different methods used to estimate magnitudes and/or energies depending on the earthquake magnitude range.

4.6 Further Use

In the following chapters, MLEM are exhaustively used for the analysis of the distributions of sizes, durations (eq. 4.15) and energies (eq. 4.13) of avalanches in different systems, as well as of the amplitude of ultrasonic noise (eq. 4.19). Usually, an analysis of the xlow scan given xhigh = Xmax will be enough. However, in some cases the sweep of xhigh will be necessary to understand massive distortions in the region of high values, as in the case of numerical simulations of the 3D Gaussian Random Field Ising Model with periodic boundary conditions. As shown in Chapter 10, the MLEM have been successfully used to test a classification rule to isolate spanning massive avalanches that distort the scale-free behaviour, helping to identify the right critical exponents and the value of the critical disorder. Beyond this initial use, the technique has been extended to the evaluation of empirical scale-free exponents: from the Gutenberg-Richter law for earthquakes to its analogous power laws in acoustic emission, Barkhausen noise and calorimetric jerks in the experiments of compression of porous materials (Chapters 7 and 8), and in structural and magneto-structural phase transitions in shape memory alloys (Chapter 9). In all these cases, the range of validity of the power-law hypothesis is truncated by characteristic scales of the physical systems, or by limitations in the resolution of the recording device.


Chapter 5

Techniques to Evaluate Temporal Correlations in Avalanches

This chapter details the tools used in the current work to analyse the correlations present in some empirical (Chapters 7, 8 and 9) and numerical (Chapter 10) data, intended to be applicable to other stochastic processes as well. Some of these data correspond to avalanche processes interpreted as temporal marked point processes, where the available variables are the time of the event si, its duration Di and several marks such as the energy Ei, size Si or magnitude of magnetic or acoustic signals Ai. In order to interpret the results correctly, it is important to identify a valid set of tools able to analyse this kind of data. Thanks to these tools, together with the background provided in Chapter 2, one is able to avoid some of the common mistakes when dealing with stochastic point processes, such as the following:

• The evolutionary rate measured as ⟨dn/dt⟩(t) over cycling, or over series of repetitions of the same experiment, can include endogenous intensity terms apart from the temporal evolution. One must be sure that the process is a non-homogeneous Poisson process before stating that ⟨dn/dt⟩(t) = λ(t) depends only on t since, in general, it will depend implicitly on the history of the process.
• On the contrary, a raw analysis of aftershock sequences (section 5.4) may depict not only the productivity term in a self-exciting process, but also the variations of the exogenous intensity, and the two phenomena can easily be mistaken for one another.
• A non-exponential distribution of waiting times (section 5.3) can be introduced either by a renewal function, by non-Markovian correlations, or by the modulation of the rate in a non-homogeneous Poisson process. The existence of a power law in the distribution of waiting times does not guarantee the presence of clustering nor of other correlations, as will be shown in section 5.3.
• The Bi test (section 5.2) applied to reject renewal processes is only valid when the data are not affected by any artificial thinning. Otherwise, the Bi test is only valid for those hypothetical processes that are invariant under thinning (such as the Poisson process). On


the other hand, the data obtained from the thresholding of a temporal signal require considering a delayed-Poisson renewal process as the hypothesis of independence between signals.
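The third point above can be illustrated with a sketch (the decaying rate is a hypothetical choice of ours, not from the thesis): a non-homogeneous Poisson process with rate λ(t) = a/(t + c), sampled by standard thinning, produces waiting times spread over several decades even though successive events are completely uncorrelated.

```python
import random

# Thinning: draw candidate times from a homogeneous Poisson process at the
# global maximum rate lam_max, then accept each candidate at t with
# probability lambda(t) / lam_max.  Here lambda(t) = a / (t + c) is decreasing,
# so lam_max = a / c bounds it everywhere.
random.seed(7)
a, c, t_end = 50.0, 1.0, 1e4
lam_max = a / c
events, t = [], 0.0
while t < t_end:
    t += random.expovariate(lam_max)
    if random.random() < (a / (t + c)) / lam_max:
        events.append(t)

waits = [t2 - t1 for t1, t2 in zip(events, events[1:])]
```

The waiting times range from hundredths of a unit early on to hundreds of units near the end, giving a broad (fat-tailed-looking) waiting-time distribution with no clustering whatsoever.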

5.1 Maximum Likelihood of a Point Process

The first method to consider whenever one wants to classify a point process among the different types introduced in Chapter 2 is to apply the Akaike Information Criterion (AIC) [Akaike, 1974] over the likelihood of the process. The likelihood function can be obtained from the probability of a point process introduced in eq. 2.6. The logarithm of the likelihood function reads:

X si ti+1 |Ai , µ) = P(T |A)P(δ < T |µ)

(6.16)

As a first approximation, we will consider the strict validity of H1 and, thus, P(T|A) = δ(GT^γ − A). The second term in eq. 6.16 is the cumulative distribution function of waiting times δ given a rate µ. Considering that the distribution of waiting times can usually be approximated by a Poisson process (see section 2.2): P(δ < T|µ) = 1 − exp(−µT). We can calculate the expected duration ⟨D(A)⟩ registered for a train of superposed signals of a given A. To do so, we simplify the problem even further by considering that the duration registered for n superposed events is Dn = nD1. This condition,

CHAPTER 6. EXPERIMENTAL SETUP: MECHANICAL FAILURE UNDER COMPRESSION

together with the condition that all signals in the train have the same A, draws an extremely distorted behaviour in the (A, D) diagram that can be solved analytically as:

⟨D(A)⟩ = ∫ dT P(T|A) T Σ_{k=0}^∞ (k+1) [P(δ < T|µ)]^k = Σ_{k=0}^∞ (k+1) [1 − exp(−µ(A/G)^(1/γ))]^k (A/G)^(1/γ) = (A/G)^(1/γ) exp(2µ(A/G)^(1/γ))    (6.17)

While H3 would tend to decrease the steepness of D(A) → (A/C)^(1/γ) in the region of large events, the typical effect of H4 will be a rapid increase for D > 1/µ, departing from D(A) ∼ (A/C)^(1/γ). The fulfilment of each of these hypotheses is not necessarily strict nor exclusive in empirical data. But, as a general rule of thumb, we can identify each of the hypotheses from the empirical relation A(D):

• H1: whenever the relation A(D) follows a power law.
• H2: whenever A(D) follows an exponential increase with a characteristic time τ.
• H3: whenever the relation A(D) transits from an exponential to a power-law increase for durations D longer than a given characteristic time τ.
• H4: if the relation A(D) tends to an exponential also for large signals.

In the following sections I will analyse the acoustic emission data obtained from different experiments on different materials. Any of the hypotheses described above will help us to check whether the acoustic activity corresponds to crackling noise generated by fracture faulting, and to understand the nature of the scalar values defining the point process (A, D, E).
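The closed form in eq. 6.17 rests on the geometric-type sum Σ_{k=0}^∞ (k+1)p^k = (1−p)^(−2), which with p = 1 − e^(−µT) gives the factor e^(2µT); a quick numerical check of that step (T and µ are illustrative values):

```python
import math

def train_duration(T, mu, kmax=2000):
    """Partial sum of T * sum_{k>=0} (k+1) * p**k with p = 1 - exp(-mu*T),
    the superposition correction entering eq. 6.17."""
    p = 1.0 - math.exp(-mu * T)
    return T * sum((k + 1) * p ** k for k in range(kmax))

T, mu = 0.5, 1.2
closed_form = T * math.exp(2 * mu * T)   # T / (1-p)**2 with p = 1 - e^(-mu*T)
assert abs(train_duration(T, mu) - closed_form) < 1e-9 * closed_form
```

The truncated series agrees with the closed form to machine precision, confirming the exponential distortion e^(2µT) of the registered durations under H4.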


6.5. INTERPRETATION OF RECORDED MAGNITUDES


Chapter 7

Failure of Vycor (SiO2) Under Compression

This chapter presents the results obtained from uniaxial compression tests of samples of the silica (SiO2) mesoporous glass known under the commercial name Vycor®. The manufacturing process of Vycor (section 7.1) forms large coherent structures instead of granular 'disconnected' structures. The simultaneous study of the mechanical evolution and the AE reveals the presence of crackling noise associated with microcracks (section 7.3); a scale-free distribution of AE energies extending over up to 6 decades (section 7.5); and the presence of strong temporal correlations associated with self-excitation mechanisms (section 7.6). The scale-free behaviour of Vycor is suggested to arise from a critical process (section 7.7) that can be understood within the paradigm of mean-field models. The resulting statistical laws found in the AE show profound analogies with the established laws of earthquakes (Chapter 3), as summarised in section 7.7.

7.1 Synthesis of Vycor

Corning's Vycor 7930® (registered trademark of Corning Incorporated, Corning, New York 14831) is a thirsty, chemically inert, mesoporous reconstructed silica glass. It has a reported Young's modulus E = 2.5 Mpsi (17.2 GPa) at 25 ºC [Corning, 2001]. The manufacture of Vycor begins with a melted mixture of a soft alkali-borosilicate glass, constituted of 65% silica (SiO2), 26% boric oxide and 9.0% Na2O, that is cooled down and worked to the desired shape. The sample is maintained above the annealing point, where the mixture is kept in a phase-separable state with two immiscible glassy phases: one phase is rich in SiO2 and the other is rich in boric oxide and Na2O and highly soluble in acid [Elmer, 1992]. Acid leaching is performed by immersing the glass in a hot acid solution containing H+ or H3O+ ions that dissolve the soluble phase. Vycor is the resulting skeleton, made of 98% pure silica with traces of boric oxide, after heating and cleaning. The structure can be described as a network of randomly

oriented and interconnected pore segments with a pore diameter d ≈ 7.5 ± 0.5 nm and an average length l ≈ 30 nm, resulting in a mean pore diameter-to-length ratio d/l ≈ 0.23 [Koppensteiner et al., 2008]. While the manufacturer states that the remaining porosity is around 30%, the samples analysed in our lab present a bulk porosity of around 40%. According to the data supplied by the manufacturer, Vycor-7930 presents a linear coefficient of thermal expansion (CTE) αL = 7.50 × 10^−7 /ºC evaluated between 276 and 576 K. In our lab experiments we do not expect temperature fluctuations to exceed 10 K. Thermal expansion affects the measurement with a factor < 0.01% of the total (∆h(T) ∼ 1 µm), below the resolution of our best device: the laser extensometer.

7.2 Samples and Experimental Conditions

The samples of Vycor analysed in this work were manufactured by Corning Inc. in cylindrical shapes of ∼5 mm height. Because of the limitations of the experimental setup, each of the samples was cut into different prismatic shapes, sanded, and cleaned with a 30% solution of H2O2 for 24 h in order to remove undesired dust and organic contamination [Salje et al., 2011b]. Samples were dried under vacuum at 400 K for another 24 h. The porosity was determined by weighing (accuracy 0.0001 g) and measuring the dimensions (accuracy 1 mm). A first set of experiments (V02, V03, V05) was performed in November 2010. The samples were cut into prismatic shapes with a triangular base of area ∼15 mm² [Salje et al., 2011b]. Several compression tests were performed to evaluate the dependence on the uniaxial compression driving rate. Acoustic emission signals were detected by a single micro-80 transducer. The signal was pre-amplified by 60 dB, resulting in an effective upper detection level of 80 dB. The detection threshold was set at 26 dB, giving 54 dB of resolution (a factor ×500 in voltage). The HDT (hit definition time) was set to 100 µs. The contact extensometer was used. For the statistical analysis we rejected all signals with a registered duration of D = 0, considered as electrical noise. A new set of experiments was performed between November 2013 and March 2014 with new samples of Vycor. Prismatic shapes were cut from a cylindrical sample, forming near-triangular bases (V31, V32) with an area ∼6 mm² and square bases (V61, V62, V58) with an area ∼25 mm², in order to evaluate the effect of the base shape on the AE results. Some of these samples were imbibed in distilled water (V31, V62) or petroleum jelly (Vaseline) (V58) during the experiment. In this second set of experiments we were more confident about the noise level and allowed a lower voltage threshold of 23 dB, with 60 dB of pre-amplification, resulting in 57 dB of resolution (a factor ×700 in voltage).
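The quoted voltage factors follow from the usual 20 dB-per-decade convention for amplitude ratios; a quick check of the two dynamic ranges mentioned above:

```python
import math

def db_to_voltage_factor(db):
    """Amplitude ratio corresponding to a level difference in decibels,
    using the amplitude convention 20*log10(V/Vref) = dB."""
    return 10 ** (db / 20.0)

# Dynamic ranges quoted in the text: 80 - 26 = 54 dB and 80 - 23 = 57 dB.
assert round(db_to_voltage_factor(54)) == 501   # quoted as a factor ~x500
assert round(db_to_voltage_factor(57)) == 708   # quoted as a factor ~x700
```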
Each of the samples was compressed at a different constant driving rate, ranging from 0.2 to 12.2 kPa/s. The specific characteristics of each sample, as well as its mechanical properties, are displayed in Table 7.1.

Sample | b-shape | h (mm) | A (mm2) | ρ (g/cm3) | setup | Th (dB) | PreAmp (dB) | R (kPa/s) | Pc (MPa)
V02 d | triangle | 5 (1) | 18.23 | 1.432 | TfCeB | 26 | 60 | 12.2 | 29.95
V03 d | triangle | 5 (1) | 16.99 | 1.202 | TfCeB | 26 | 60 | 1.6 | 28.13
V05 d | triangle | 5 (1) | 13.17 | 1.292 | TfCeB | 26 | 60 | 0.2 | 28.47
V31 w | triangle | 5.65 | 17.60 | 1.609 (2) | TfLeBT | 23 | 60 | 6.0 | 46.60
V32 d | triangle | 5.65 | 17.00 | 1.457 | TfLeBT | 23 | 60 | 6.2 | 43.39
V61 d | square | 9.60 | 21.31 | 1.474 | TfLeBT | 23 | 60 | 7.0 | 28.99
V62 w | square | 9.60 | 28.13 | 1.458 (3) | TfLeBT | 23 | 60 | 6.4 | 28.70
V58 pj | square | 4.50 | 20.40 | — | TfLeBT | 23 | 60 | 1.1 | 50.10

Table 7.1: Summary of the physical properties of the samples analysed and of the experimental setup. Samples were machined to a prismatic shape with base b-shape, resulting in dimensions h, A and measured density ρ. Specimens were tested in dry (d) conditions, imbibed in water (w), or imbibed in petroleum jelly (pj). The setup is given with the following abbreviations: Tf: Teflon slider; Ce: contact extensometer; Le: laser extensometer; B: transducer inside the bottom plate; T: transducer inside the top plate. The threshold Th and pre-amplification PreAmp are given in dB; the driving rate R = dP/dt is the rate at which the material is compressed, and the failure point Pc is the pressure at which the sample collapsed. ((1) No better resolution is provided; (2) weight measured in wet conditions; (3) weight measured in dry conditions.)

Most of the results shown in this chapter correspond to experiment V03 (with driving rate R = 1.6 kPa/s) from 2010 [Salje et al., 2011b, Baró and Vives, 2012, Baró et al., 2013b]. The properties of this experiment are common to the other dry samples of Vycor. However, the contact extensometer used in this experiment was not very accurate so, for more precise discussions of h(t), we will use the experiments V61 and V32, performed with the laser extensometer.

7.3 Time Sequences and Mechanical Properties

Fig. 7.1 shows the sequence of recorded events for experiment V03. From top to bottom it plots: the vertical distance h(t) between the plates, denoting the height of the sample, in mm; the energy of individual signals in aJ, on a logarithmic scale; the overall number N of signals recorded until time t; and the activity rate, in signals recorded per second, also on a logarithmic scale, evaluated in windows of δt = 100 s. Instead of the smooth compression expected from a linear elastic response, the sample evolves in the jerky way expected in brittle fracture [Salje et al., 2011b]. The height of the sample decreases in sudden jumps matching the presence of activity bursts and high-energy acoustic signals. However,

Figure 7.1: Time sequence of the mechanical evolution and properties of the recorded events for experiment V03. From top to bottom: the height h (mm) of the sample over time; the energy E of individual signals in aJ, on a logarithmic scale; the overall number N of signals recorded until time t; the activity rate r in signals recorded per second, also on a logarithmic scale.

low avalanche activity is also present during intervals without height drops. This underlying steady rate can be further resolved at finer (h, r, t) resolution (Fig. 7.2). The self-similar pattern found not only in the sudden jumps in h(t) but also in their corresponding activity bursts r(t) is a general hallmark of all the experiments performed over dried samples of Vycor. This fracture behaviour suggests that the background activity corresponds to brittle micro-faulting: small cracks triggered by the excess of pressure over a local failure point, which can eventually grow to higher scales. Otherwise, this underlying background could denote a slow deformation by thermal creep or fatigue. As a consequence, the presence of these signals would be related only to the absolute applied load, and thus this background activity would be independent of the driving rate. The thermal origin


Figure 7.2: Counting process N(t) and energy of individual signals Ei recorded during experiment V03. Inset: a close-up in time, number of signals and energy, seemingly self-similar to the original sequence.

of this kind of signals can be identified by comparing with the activity at driving rate R = 0. In our experiments we start the acoustic recording a few minutes before starting to increase the load. If we analyse the activity rate without filtering, we find a very low activity during that period: about 1/5 of the activity recorded just after the beginning of the load increase. By applying a simple filter to reject the electric noise (consisting of the removal of all signals with D < 1 µs), these signals drop to less than 1/100 of the activity recorded after the beginning of the loading. Therefore, the background rate can be considered null for R = 0 and, thus, the majority of the acoustic signals should correspond to micro-faulting instead of a thermal effect. By enhancing the height resolution in Fig. 7.1 we can identify three major drops in height accompanied by an increase in activity rate at lower pressure values (a drop of ∆h1 ∼ 100 µm at pc1 = 15.13 MPa, ∆h2 ∼ 30 µm at pc2 = 16.95 MPa and ∆h3 ∼ 40 µm at pc3 = 19.58 MPa) before a last macroscopic drop (the failure point) of ∆h4 ∼ 2250 µm at Pc = pc4 = 28.13 MPa, able to shatter the sample into small debris. As shown for further materials (Chapter 8), this pattern of small steps in the stress-strain diagram before a major collapse is typical of the collapse of porous materials under a constant uniaxial compression rate. However, the number, size and position of the precursory major drops seem to be a

7.4. RELATION BETWEEN AVALANCHE PROPERTIES stochastic variable, depending on the specific sample and experiment. One special pattern found in Vycor is that the typical duration of major drops is quite long (around 5s) compared to other materials (typically tenths of a second). This behaviour reminds of the properties of crackling noise (section 1.3.3), i.e. the growth of a critical crack. Also, the exact value of the last failure point Pc changes from one experiment to another and doesn’t seem to obey a clear dependence. All these variabilities suggest the involvement of extreme value statistics rending Weibull or Gumbel distribution of Pc (see section 4.4.4). We cannot evaluate this hypothesis because of the limited number of experimental repetitions that we can reasonably perform. Recent experiments (V31,V62) suggest that the presence of water inside Vycor smooths the jerkiness of the fracture process, reducing the number and magnitude of major drops before the last macroscopic drop.
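Although the few repetitions available do not allow a meaningful extreme-value fit of Pc, the procedure one would apply to a larger set of failure pressures can be sketched as a standard two-parameter Weibull maximum-likelihood fit. The sample values below are purely illustrative, not measured data:

```python
import numpy as np

def weibull_mle(x, k_lo=1e-3, k_hi=100.0, iters=80):
    """Two-parameter Weibull maximum-likelihood fit.

    Solves the shape equation  1/k = sum(x^k ln x)/sum(x^k) - mean(ln x)
    by bisection (the left- and right-hand sides cross at a single k),
    then computes the scale parameter in closed form.
    """
    x = np.asarray(x, dtype=float)
    logx = np.log(x)
    m = logx.mean()

    def f(k):  # f is monotone increasing; f(k) = 0 at the MLE shape
        xk = x ** k
        return (xk * logx).sum() / xk.sum() - m - 1.0 / k

    for _ in range(iters):
        k_mid = 0.5 * (k_lo + k_hi)
        if f(k_mid) < 0.0:
            k_lo = k_mid
        else:
            k_hi = k_mid
    k = 0.5 * (k_lo + k_hi)
    lam = np.mean(x ** k) ** (1.0 / k)  # closed-form scale given the shape
    return k, lam

# Hypothetical failure pressures (MPa) -- illustrative placeholders only
Pc = [28.13, 25.4, 30.1, 27.2, 26.8]
k_hat, lam_hat = weibull_mle(Pc)
```

With enough repetitions, comparing the likelihood of this Weibull fit against a Gumbel fit would discriminate between the two extreme-value scenarios mentioned above.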

7.4 Relation between Avalanche Properties

We can understand the nature of the acoustic emission events and the distortions introduced by the experimental setup (section 6.3) by comparing the typical relations between energy, amplitude and duration of the registered signals with the basic hypotheses stated in section 6.5. We must keep in mind the problem of missing data: probably most of the recorded signals originate close to the transducer. If we consider the origin of the acoustic events to be uniformly distributed in space, the attenuation of the signals will not modify the distribution of amplitudes or energies except for an exponential decay above the power-law regime. Fig. 7.3 shows the density of signals in the space of scalar variables ρ(AdB, D, E), represented as the marginal densities ρ(D, AdB), ρ(D, E) and ρ(AdB, E). As stated in section 6.5, the relations between AV, D, E follow an average monotonic interdependence between the magnitudes. We can test the validity of the scaling hypothesis proposed for crackling noise [Sethna et al., 2001]. We compare the relations with the behaviour expected from the hypotheses (H1, H2, H3, H4) presented in section 6.5 in order to test the suitability of each hypothesis and understand the meaning of the magnitudes AV, D, E. Some trivial anomalies are easily identified, such as the saturation level of the amplitude at AdB = Amax = 80 dB in both ρ(D, AdB) and ρ(AdB, E), and the presence of background noise below the line E < 1 aJ corresponding to signals with amplitude AdB = Th = 26 dB and arbitrary duration in ρ(D, E). We proceed to analyse the relations between AV, D, E beyond these anomalies. The relation between A and D in a density map (middle row in Fig. 7.3) shows a non-trivial dependence between the two magnitudes, especially clear for high values of amplitude. According to H1, this relation should be linear in this representation, since AV ∼ D^γ (eq. 6.5). However, this behaviour is distorted for low amplitude values.
Figure 7.3: Density maps of energies (top) and amplitudes (bottom) versus duration of the recorded signals, for driving rates R = 0.2, 1.6 and 12.2 kPa/s. White lines represent the fitted behaviour expected by H3 with the parameters γ = 1.4, log10(G) = 13.9, τ = 75 µs. The saturation level Amax = 80 dB, the threshold Ath = 26 dB and the reference line E = 1 aJ are indicated.

Furthermore, by plotting the density maps of the relation between AdB and E (bottom row in Fig. 7.3), we find a dependence log10 E = 0.1 (AdB − Th), corresponding to a power-law relation between energy and amplitude in volts, E ∝ AV^2, since AV ∼ 10^(AdB/20). But, according to eq. 6.5, E ∝ AV^(2+1/γ). Either the exponent 1/γ, which defines the A(T) relation, is smaller than the statistical fluctuations of A (γ ≫ 1), or the scaling hypothesis H1 is incomplete. Thus, a direct scaling hypothesis such as H1 is unable to describe the distribution of signals.


For low values of amplitude we observe a relation similar to AV ∼ exp(−T/τ), as expected by H2, identified in Fig. 7.3-middle as a linear relation AdB(D). The expected value of the characteristic time, τ ∼ 100(50) µs, can be obtained from the distribution of durations (see Fig. 7.4), considering that the extent of the flat plateau in the double-logarithmic plot gives a measurement of the characteristic τ. However, for values of T higher than τ, the distribution of durations decays as a power-law with exponent −2, manifesting the failure of H2.


Figure 7.4: Probability distribution of durations for the three initial experiments. There is a transition regime at D ≈ 100(50) µs from a uniform distribution to a power-law behaviour with exponent close to −2.

Furthermore, in the dependences AV(D) and E(D), instead of the complete power-law relation expected by H1, one observes a transition from an exponential relation around the characteristic time ∼100 µs, compatible with H2. The distribution tends towards a power-law relation (H1) for large avalanches, instead of the divergence in D(AV) expected by H4 or the exponential relation expected by H2. Therefore, the relation between amplitudes and durations manifests an interplay between the scaling regime T ∼ A^γ and a distortion caused by the response of the transducer for T ≲ 100 µs, as described by H3. For low values of amplitude, the durations T of the original avalanches are smaller than the typical decay time τ. Thus, the relation D(A) can be approximated by D/τ = (AdB − Th)/(20 log10(e)), i.e. AV(D) ≈ Vth exp(D/τ), as represented in eq. 6.12. Considering this relation, and a power-law distribution of amplitudes P(AV) ∼ AV^(−α), the distribution of durations (Fig. 7.4) should follow an exponential decay:

for D ≪ τ :   AV(D) ≈ Vth exp(D/τ)  ;  P(D) ∼ (Vth^(1−α)/τ) exp((1−α) D/τ)    (7.1)

Since this new characteristic time τ/(α − 1) is larger than τ for 1 < α < 2, the distribution of D for values lower than τ is uniform in the double-logarithmic scale. For D ≫ τ, the distribution of durations should follow a power-law [Pérez-Reche et al., 2004b]:

for D ≫ τ :   AV(D) ≈ D^γ/G  ;  P(D) ∼ D^((1−α)γ−1)    (7.2)
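The change of variables behind eq. 7.2 can be checked numerically: sampling amplitudes from a power law with exponent α and mapping them through the scaling relation AV ∼ D^γ should yield a duration distribution with exponent (α − 1)γ + 1, which is ≈ 2.09 for the values α = 1.78 and γ = 1.4 fitted below. The following self-contained Monte Carlo sketch (with G set to 1) verifies this:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, gamma = 1.78, 1.4   # amplitude exponent and A-D scaling exponent
n = 200_000

# Sample amplitudes from P(A) ~ A^(-alpha), A >= 1 (inverse-CDF method)
A = (1.0 - rng.random(n)) ** (-1.0 / (alpha - 1.0))

# Scaling regime D >> tau: A ~ D^gamma  =>  D = A^(1/gamma)
D = A ** (1.0 / gamma)

# Maximum-likelihood (Hill) estimator of the power-law exponent of P(D)
eps_D = 1.0 + n / np.log(D / D.min()).sum()
# Expected from eq. 7.2: (alpha - 1)*gamma + 1 ≈ 2.09
```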


Figure 7.5: Distribution of amplitudes (AdB) in simple logarithmic scale for the experiments V02 (12.2 kPa/s), V03 (1.6 kPa/s) and V05 (0.2 kPa/s). The straight line corresponds to P(AdB) ∼ 10^((1−τ)AdB/20) with a value τ = 1.78(2) fitted by Maximum Likelihood Estimation in logarithmic scale in the interval (AdB, 73 dB).

Fig. 7.5 shows the distribution of amplitudes of the recorded signals and confirms the results reported in Ref. [Salje et al., 2011b]. The distribution follows a power-law extending nearly 40 dB (two decades in volts). A fraction of the signals is saturated at AdB = 80 dB. The value of the power-law exponent, τ = 1.78(2), is compatible with the fat-tailed distribution of durations with a power-law exponent 2.0, as expected from eq. 7.2 if γ ≈ 1.4. Thus, for strong signals we recover the scaling hypothesis, as suggested by H3. To illustrate the good agreement between H3 and the experimental AE data, a numerical evaluation according to H3 is superimposed in Fig. 7.3 (white lines) with a parabolic [Papanikolaou et al., 2011] avalanche shape φ (eq. 6.5) and considering Th = 26. The fitted parameters are found by minimizing the square distance in the logarithmic metric space log E − log D − AdB. The fit to the V03 experiment gave the values γ = 1.4(1), log10(G) = 13.9, τ = 75(25) µs. Notice that this fitted curve seems to depart by an offset from the data obtained in experiments V02 and V05. This effect could be a consequence of differences in acoustic coupling changing the best value of G.
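In dB, a power law P(AV) ∼ AV^(−τ) becomes an exponential density P(AdB) ∼ 10^((1−τ)AdB/20), so the maximum-likelihood estimate of τ reduces to a closed form in the mean amplitude above the lower bound. A sketch of this estimator, neglecting for simplicity the upper truncation at 73 dB used in the text:

```python
import numpy as np

def fit_amplitude_exponent(A_dB, A_min):
    """MLE of tau in P(A_dB) ~ 10**((1 - tau) * A_dB / 20), i.e. an
    exponential density in dB above A_min; the rate of that exponential
    is (tau - 1) * ln(10) / 20, and its MLE is the inverse mean excess."""
    dA = np.asarray(A_dB, dtype=float) - A_min
    dA = dA[dA >= 0]  # keep only signals above the lower fitting bound
    return 1.0 + 20.0 / (np.log(10.0) * dA.mean())
```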

7.5 Scale Free Distribution of Energies

Power-law distributions have already been reported for amplitudes (Fig. 7.5) and durations (Fig. 7.4). Scale invariance may be expected for energies as well. This result has a special significance, since the AE energy is the scalar variable obtained from AE that may have the clearest correspondence with the physical mechanisms at the source. As stated in section 6.4, any scale-free distribution will be truncated by our experimental device. We will find an experimental minimum energy for short avalanches slightly surpassing the threshold (A = 27 dB and D = 1 µs), corresponding to Emin = Vmin^2 Dmin/R ≈ 0.05 aJ. On the other end, we expect to find an upper resolution limit, related to the saturation of the amplitude at Amax = 80 dB and to the poor estimation of the strongest signals, which will contain superpositions of avalanches. According to the elasticity models displaying criticality, such as the micromechanical mean-field model for slip avalanches [Lyakhovsky and Ben-Zion, 2008], the Fibre Bundle Model (FBM) [Hidalgo et al., 2009] or the Random Fuse Model (RFM) [De Arcangelis and Herrmann, 1989], if the signals detected are indeed crackling noise and the energy is well measured, we may expect a power-law distribution of energies:

P(E) ∝ E^(−ε)    (7.3)

with an exponent 1.0 < ε < 2.0 depending on the model. The distributions of energies for experiments V02, V03 and V05, shown in Fig. 7.6-left in a double-logarithmic scale, show a good agreement with a power-law hypothesis with an exponent close to ε = 1.39. The distributions also support the idea that the exponent is independent of the experimental realization and of the driving compression rate, verified in the range (0.2 − 12.2) kPa/s, and necessary to fulfil the point-process approach explained in Chapter 2.
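The quoted minimum detectable energy follows from E = V²D/R. The reference voltage of the dB scale and the circuit impedance R are not stated explicitly here; the values below (the conventional AE reference of 1 µV and R = 10 kΩ) are assumptions, chosen because they reproduce the 0.05 aJ figure:

```python
# Minimum detectable energy from the amplitude threshold and minimum duration
A_dB = 27.0     # minimum amplitude (dB)
D = 1e-6        # minimum duration (s)
V_ref = 1e-6    # assumed AE reference voltage (V), 0 dB = 1 µV
R = 10e3        # assumed circuit impedance (ohm)

V = V_ref * 10 ** (A_dB / 20.0)   # ~22.4 µV
E = V ** 2 * D / R                # energy in joules
E_aJ = E * 1e18                   # ~0.05 aJ
```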



Figure 7.6: Left: probability distribution function of the energy E for the three initial experiments V02, V03 and V05. Right: power-law exponents ε fitted by maximum likelihood for the distribution of energies, as a function of the lower cutoff.

Fig. 7.6-right shows the maximum likelihood estimation performed with Ehigh = ∞ [Salje et al., 2011b, Baró and Vives, 2012, Baró et al., 2013b]. The fitted exponent reaches a flat plateau starting at Elow = 1 aJ, standing at a value ε = 1.39(1) for at least 4 decades before statistical fluctuations and the saturation level of the acquisition device (section 6.3) distort the value. No exponential cutoff is identified for high energies. The exponent ε of the experiment at dP/dt = 12.2 kPa/s seems to fit the same value, but the plateau is less steady. Experiments at even higher driving rates would probably distort the distribution even further [Pérez-Reche et al., 2004b]: when the activity rate exceeds the inverse of the typical recorded duration of the acoustic signals, a significant fraction of the recorded signals will correspond to superpositions of events. However, this set of experiments determines a safe range of compression rates to work with (below 10 kPa/s) without worrying about distortions of the statistics. Further experiments were performed following this criterion. Fig. 7.7 shows the distribution of energies evaluated for experiment V03 by selecting different temporal intervals. The power-law distribution is stationary, apart from the different statistical significance of the intervals. The mean-field solution of micromechanical models of failure [Dahmen et al., 2009], as well as several experimental results [Ben-Zion et al., 2011], suggest that the power-law distribution must be exponentially truncated, with a characteristic scale diverging in the vicinity of the failure point (see eq. 6.1). However, compression tests display several major height drops that can be interpreted as failure points. A new theory including the multifragmentation observed in compression tests is currently being developed from the experimental data presented in this chapter. Thus, the stationary behaviour displayed in Fig. 7.7 could still be compatible with the ideas of mean-field models [Dahmen et al., 2009] through the involved multifragmentation or self-organization ideas.
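The exponent map of Fig. 7.6-right can be reproduced with the standard maximum-likelihood estimator for a continuous power law restricted to E ≥ Elow, scanned over a range of lower cutoffs; a flat plateau in ε(Elow) signals a genuine power-law regime. A sketch, assuming the catalogue of energies is available as an array E in aJ:

```python
import numpy as np

def epsilon_of_cutoff(E, cutoffs):
    """MLE exponent of P(E) ~ E^(-eps) restricted to E >= E_low,
    evaluated for each lower cutoff in `cutoffs` (Ehigh = infinity)."""
    E = np.asarray(E, dtype=float)
    out = []
    for e_low in cutoffs:
        tail = E[E >= e_low]
        out.append(1.0 + tail.size / np.log(tail / e_low).sum())
    return np.array(out)
```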
