Extended Source Models versus Zone-Free Methods in Probabilistic Seismic Hazard Assessment

Arun Menon
Dipartimento di Meccanica Strutturale, Università degli Studi di Pavia, Via A. Ferrata 1, 27100 Pavia (PV)

Mirko Corigliano, Carlo G. Lai
Centro Europeo di Formazione e Ricerca in Ingegneria Sismica (EUCENTRE), Via A. Ferrata 1, 27100 Pavia (PV)

Teraphan Ornthammarath
Centre for Post-Graduate Training and Research in Earthquake Engineering and Engineering Seismology (ROSE School), Istituto Universitario di Studi Superiori (IUSS), c/o EUCENTRE, Via A. Ferrata 1, 27100 Pavia (PV)

Keywords: PSHA, area sources, Cornell-McGuire method, zone-free method, kernel estimation method.

ABSTRACT
In probabilistic seismic hazard computations, the definition of seismogenic zones, particularly extended source models, is inherent to the classical Cornell-McGuire approach to PSHA. Despite the latter being by far the most extensively used method in seismic hazard analysis worldwide, seismogenic zoning is, among the various steps of a PSHA, perhaps the most affected by epistemic uncertainty. The most contentious issue of the Cornell-McGuire approach is the spatially uniform activity rates that are automatically assigned to area sources. Several alternative methods, developed to do away with source zones altogether and to utilise the earthquake catalogue as the principal source of information for hazard computations, are available in the literature. The kernel estimation method for seismic hazard area modelling [Woo, 1996], for example, intends to do away with the ambiguities of defining seismic zones as Euclidean zones of uniform seismological character, especially for regions where the association between seismicity and geology (e.g. seismogenic faults) is complex and uncertain. A magnitude-dependent, probabilistic smoothing procedure is applied directly to earthquake epicentres in the catalogue to construct a grid of point sources. The kernel-smoothed epicentres depict the spatial non-uniformity of seismicity, in contrast to the rigid zonation of the Cornell-McGuire approach. The form of the kernel is governed by concepts of fractal geometry and self-organised criticality. This paper compares and discusses differences in the results of PSHA at a specific site in a region of relatively low seismicity (Kancheepuram, Tamil Nadu, India) and at a grid of sites in the same region (the Tamil Nadu State), using the two methods within a logic-tree framework.

1 INTRODUCTION

1.1 Probabilistic Seismic Hazard Analysis

Probabilistic seismic hazard assessment (PSHA), which developed from C.A. Cornell's paper of 1968, is still, four decades later, one of the most-favoured approaches for seismic hazard analysis worldwide. The basic advantage of the procedure is that it integrates over all possible earthquake occurrences and ground motions to estimate a combined probability of exceedance that accounts for the relative frequencies of occurrence of different earthquakes (magnitudes and source-site distances) and ground-motion characteristics. Seismic hazard, moreover, can be deaggregated to show the contribution by magnitude, distance and ground-motion deviation [McGuire, 1995]. Though several refinements of the procedure have materialised in the past decades, the core of the PSHA approach and its basic assumptions remain unchanged. An important milestone in the development of the approach was the programming of the method in FORTRAN by McGuire [1976], as a result of which this PSHA approach is historically referred to as the classical Cornell-McGuire approach.

The classical Cornell-McGuire approach is characterised by two fundamental and rather controversial assumptions pertaining to the seismic activity rates and the earthquake recurrence model. Seismogenic zones are attributed spatially uniform activity rates. Delineation of seismogenic zones, per se, is a step affected by a large degree of subjectivity and hence uncertainty, but it is indispensable under this approach. Assuming seismicity rates to be spatially uniform in such seismic source zones, especially when they are extended source zones (area sources), is rather questionable. Zones may either be artificially expanded (diluting activity around the site) or artificially contracted (concentrating activity around the site). The second issue pertains to the adoption of the Poissonian earthquake recurrence model, which states that the occurrence of earthquakes follows a Poissonian stochastic process in which the sequences of seismic events are temporally independent. Evidently, foreshocks and aftershocks follow a probability distribution different from that of the sequence of main events, and consequently they have to be filtered from the earthquake catalogue.

Numerous alternative approaches are available in the literature aimed at circumventing these contentious assumptions of the Cornell-McGuire approach, but differences in the results obtained with these approaches are not merely academic, as pointed out by Bommer [2002]: as shown in Fig. 1, hazard maps for the upper-crustal seismicity of El Salvador produced by independent studies using the Cornell-McGuire approach, two zone-free methods and a kernel approach show very significant differences [Bommer et al., 1998].


Figure 1: Seismic hazard maps for El Salvador, showing 475-year return period accelerations (g) produced by independent seismic hazard studies [Bommer et al., 1998].

Different authors (e.g. Woo [1996], Bommer [2002], McGuire [2001]) have reiterated that the choice of a method must be based on the nature of the required output, the available time and resources (data, software) and, of course, regional characteristics. The approach for modelling seismic sources should be adapted to the region. In this context, the use of hybrid approaches, with a variety of zonation models alongside zone-free models within a logic-tree, is conceivable.

1.2 Kernel estimation method for PSHA

The kernel estimation method for seismic hazard area modelling proposed by Woo [1996] is one such method, which intends to do away with the ambiguities related to defining seismic zones as Euclidean zones of uniform seismological character, especially for parts of the world where the association between seismicity and individual geological structures (e.g. faults) is rather complex. A magnitude-dependent, probabilistic smoothing procedure is applied directly to earthquake epicentres in the catalogue to construct the area source model. Regions which could be favourable for seismicity smoothing are:
1. Areas with an extensive, reliable historical earthquake catalogue (European countries, especially Italy; a few Asian countries).
2. Areas where the delineation of active faults is unclear.
3. Areas where the association of observed seismicity with faults is very ambiguous.
Woo [1996] observes that the kernel estimation method, being intrinsically empirical, might be less applicable to regions with sparse historical and instrumental data. Nevertheless, Woo demonstrates, with the example of the seismic hazard assessment of Britain (modest intraplate seismicity, catalogue dating back ~900 years), that data sparseness need not be overly restrictive of the applicability of the method.

1.3 Illustrative examples

In the current paper, an attempt has been made to compare the results of probabilistic analyses executed using a zonation model (Cornell-McGuire approach) and a zone-free model (kernel estimation approach) at a single site and subsequently at a grid of sites. Probabilistic seismic hazard assessments of the archaeological site of Kancheepuram in the southern Indian State of Tamil Nadu [Corigliano et al., 2009] and of the entire State of Tamil Nadu [Menon et al., 2009] were carried out within the ambit of an Indo-Italian joint research project on the seismic risk assessment of historic centres in India, approved under the Programme of Cooperation in Science and Technology 2005-2007. Epistemic uncertainty in the hazard computations was addressed within a logic-tree framework with alternatives for the PSHA algorithm, catalogue completeness estimation analyses, maximum cutoff magnitudes, seismogenic zoning scenarios and ground motion attenuation relationships.

The composite earthquake catalogue compiled for Kancheepuram spans a period of 500 years (1507-2007 A.D.), incorporating 277 earthquakes of Mw ≥ 3.0. The catalogue compiled for the macrozonation of Tamil Nadu State consists of 451 events of Mw ≥ 3.5. This catalogue comprises only 2 seismic events prior to 1800 A.D. and 60 between 1800 and 1900 A.D. The earliest earthquake record in southern Peninsular India is a minor event (Modified Mercalli Intensity IV), dating back to August 1507 A.D. at Billankote [Iyengar et al., 1999]. The largest historical earthquake in the catalogue is the M 6.0 (Mw 5.7) event at Coimbatore in western Tamil Nadu in 1900, and the largest instrumental-era event is the Mw 5.5 Pondicherry earthquake of 2001. The aforementioned earthquake catalogues are rather sparse in the historical period (at least prior to 1800 A.D.). Not all historical events in the catalogue have been reviewed thoroughly for location and magnitude-measure corrections and hence entail a degree of uncertainty.

2 GENERAL DESCRIPTION AND TREATMENT OF ACTIVITY RATES

2.1 Cornell-McGuire approach

In the Cornell-McGuire approach, tectonic and geological information is combined with an earthquake catalogue to define seismogenic regions within which earthquakes occur at rates defined by simple recurrence relations and are assumed to have the same probability of occurring at any location. Incorporating geological and tectonic data is useful when the observed earthquake data are poor or where return periods are long compared to the length of the earthquake catalogue. This method summarizes a whole earthquake catalogue into three parameters, namely the activity rate, the b-value of the Gutenberg-Richter recurrence relationship and the assumed value of the maximum magnitude. These parameters are combined with a ground motion attenuation relationship using the total probability theorem to define a single ground motion descriptor, which corresponds to a specified probability of exceedance. The method is computationally efficient and is particularly useful when there is large uncertainty regarding the locations of future earthquakes. On the other hand, the method results in the smoothing of the hazard over the seismogenic areas, which may prevent significant spatial variations in hazard from being revealed. The general procedure for a Cornell-McGuire PSHA comprises four basic steps (see Fig. 2).

Step-1: Identification and delineation of potential sources of seismicity that may affect the site(s) of interest, represented as area sources, fault sources or, rarely, point sources, depending upon the geological nature of the sources and the data available.

Step-2: The temporal behaviour of earthquakes is assumed to follow a Poissonian (memory-less) stochastic process, and the seismicity of each source is characterised by establishing a magnitude-recurrence relationship over the range of magnitudes likely to be generated by that source. Conventionally, earthquake recurrence has been represented by the Gutenberg-Richter (G-R) law [1956] in Eq. 1, where λM is the mean annual rate of exceedance of earthquakes with magnitudes greater than M, 10^a is the average yearly number of earthquakes of magnitude greater than or equal to zero, and b describes the relative likelihood of large versus small earthquakes. However, other recurrence models (e.g. characteristic earthquakes), if appropriate, are certainly feasible.

$\log \lambda_M = a - b\,M$          (1)

This law of earthquake occurrence is a simple mathematical statement that larger events are less frequent than smaller events and that the difference, in relative terms, follows an exponential law. A b-value lower than about one indicates that the zone is characterized by the occurrence of a relatively large number of strong earthquakes, whereas a b-value greater than one implies that the number of large events is relatively small compared to those of smaller magnitudes.

Step-3: Ground motion prediction equations (GMPE) are subsequently used to establish the conditional probability of exceedance of a predetermined ground motion value at each site, given the occurrence of an earthquake of a particular magnitude at a particular location.
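To make Eq. 1 concrete, the short sketch below evaluates the G-R relation for hypothetical recurrence parameters (a = 3.0, b = 0.9, chosen purely for illustration and not taken from this study) and converts the resulting annual rates into 50-year Poisson probabilities of occurrence.

```python
import math

# Hypothetical Gutenberg-Richter parameters (placeholders, not values from this study)
a, b = 3.0, 0.9

def annual_rate(m):
    """Mean annual rate of earthquakes with magnitude >= m (Eq. 1)."""
    return 10.0 ** (a - b * m)

for m in (4.0, 5.0, 6.0):
    lam = annual_rate(m)
    # Under the Poisson assumption, probability of at least one such event in 50 years
    p50 = 1.0 - math.exp(-lam * 50.0)
    print(f"Mw >= {m}: rate = {lam:.4f}/yr,  P(at least one in 50 yr) = {p50:.3f}")
```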

Figure 2: Four steps of a Probabilistic Seismic Hazard Analysis [Kramer, 1996].

Step-4: The final step involves the integration over all possible earthquake magnitudes and locations to produce a function representing the probability of exceeding various levels of PGA or, alternatively, of the maximum elastic response of a single-degree-of-freedom oscillator to ground motion at a specific site. Uniform hazard response spectra are derived from the hazard curves by selecting oscillator response values for a specific exceedance frequency.

Prior to the second step, a couple of indispensable operations have to be performed to process the earthquake catalogue. Due to the Poissonian occurrence model intrinsic to the Cornell-McGuire approach, foreshocks and aftershocks, which follow a probability distribution unlike that of the sequence of main events, have to be filtered from the earthquake catalogue. This declustering operation can be executed with the algorithm of Gardner and Knopoff [1974], developed for southern Californian earthquakes and extensively used worldwide. The time- and distance-window parameters differ based on the main event's magnitude, and hence this approach is also known as the dynamic time-spatial windowing method. Secondly, historical earthquake records are usually more complete for larger earthquakes than for smaller ones: small earthquakes can go undetected for a variety of physical and demographical reasons, so time windows in which the catalogue is complete have to be defined. Catalogue incompleteness exists because, for historical earthquakes, the recorded seismicity differs from the "true" seismicity. Among the different approaches to evaluate catalogue completeness, the Visual Cumulative Method (CUVI) [Tinti and Mulargia, 1985] is a simple graphical procedure based on the observation that, if earthquakes of a given magnitude are assumed to follow a stationary occurrence process, then in a complete catalogue the average rate of occurrence of seismic events must be constant. Alternatively, an empirical, statistically simple method based on the stability of the magnitude recurrence rate [Stepp, 1973] may be used.
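As a concrete illustration of Steps 2-4, the following sketch assembles a single-source hazard curve by the total-probability summation over magnitude and distance, assuming a truncated G-R magnitude distribution and a generic lognormal attenuation model. All numerical values (activity rate, magnitude bounds, GMPE coefficients, distance discretisation) are hypothetical placeholders, and the sketch is not the EZ-FRISK or CRISIS2007 implementation used in this study.

```python
import math

# --- Hypothetical inputs (placeholders, not the models used in this study) ---
nu_mmin = 0.5            # annual rate of events with M >= mmin in the source zone
mmin, mmax = 4.0, 6.5    # magnitude bounds of the source
beta = 0.9 * math.log(10.0)               # b-value of 0.9 in natural-log form
c0, c1, c2, sigma = -3.0, 1.0, 1.2, 0.6   # toy GMPE: ln(PGA[g]) = c0 + c1*M - c2*ln(R)

# Discretised magnitudes and source-to-site distances (equal distance weights here,
# standing in for the integration over the actual source geometry)
dm = 0.1
n_bins = round((mmax - mmin) / dm)
mags = [mmin + dm * (i + 0.5) for i in range(n_bins)]
dists = [20.0, 40.0, 60.0, 80.0, 100.0]
w_r = 1.0 / len(dists)

def f_m(m):
    """Truncated-exponential (G-R) magnitude density on [mmin, mmax]."""
    k = beta / (1.0 - math.exp(-beta * (mmax - mmin)))
    return k * math.exp(-beta * (m - mmin))

def p_exceed(pga, m, r):
    """P[PGA > pga | m, r] from the lognormal toy GMPE."""
    ln_mean = c0 + c1 * m - c2 * math.log(r)
    z = (math.log(pga) - ln_mean) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

def annual_exceedance_rate(pga):
    """Total-probability summation over magnitude and distance (Steps 2-4)."""
    total = 0.0
    for m in mags:
        for r in dists:
            total += p_exceed(pga, m, r) * f_m(m) * dm * w_r
    return nu_mmin * total

for pga in (0.05, 0.1, 0.2):
    lam = annual_exceedance_rate(pga)
    print(f"PGA > {pga:.2f} g: annual rate = {lam:.2e}, Tr ~ {1.0 / lam:,.0f} yr")
```

In a full analysis the distance terms would be obtained by integrating over the geometry of each seismogenic zone, and the contributions of all sources would be summed.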

2.2 Kernel estimation method

In the kernel estimation method [Woo, 1996], seismic sources are used but they are not defined according to geological and tectonic criteria. Instead, a grid of point sources is defined around the site of interest, with the activity rate of each being determined from the earthquake catalogue. The contribution of each catalogued earthquake to the seismicity of the region is smeared over a distance which is magnitude dependent (see Fig. 3). Instead of defining the activity rate of each source using a recurrence relationship, such as the G-R law, individual rates are defined for each magnitude interval (e.g. for events of Mw 5.1-5.2). These rates are calculated from the density and proximity of catalogued events lying within that magnitude range; in other words, the activity rate is calculated directly from the catalogue. The earthquake occurrence model in the zone-free method is not Poissonian; hence, essentially, the full earthquake catalogue, replete with foreshocks and aftershocks and without discriminating main shocks from aftershocks/foreshocks, has to be used in the hazard computation [G. Woo, written communication, 2008].

Figure 3: Kernel-smoothed epicentres in the zone-free seismic hazard assessment methodology of Woo [1996], for a few sample earthquakes in the catalogue for Kancheepuram.

The primary routine in the kernel estimation method is the calculation of activity rate density fields covering a dense square grid of sites spanning the region around the site at which the hazard has to be computed (or, alternatively, around the grid of sites). These activity rate density fields are magnitude-dependent and are determined for all magnitudes ranging upwards from the threshold of engineering interest (Mw = 3.5). This allows an activity rate to be determined for each location and event magnitude. In the zone-free approach, the self-organized seismic activity rates from the earthquake catalogue then replace the standard G-R recurrence relationships. The magnitude-distance dependent relationship is given by the kernel function, K(M, x), a magnitude-dependent multivariate probability density function, which takes the form:

n 1   r  K 1    H 2   H 

2

  

(2)

where n is the exponent of the power law, H is a bandwidth for normalizing distances and r is the epicentral distance. The exponent n depends on the proximity between epicentres, increasing with proximity. Its value, typically between 1.5 and 2, has only a moderate influence on the computed results (a value of 1.5 has been used in the current study). The bandwidth is a function of magnitude; it represents the average minimum distance between epicentres of the same magnitude and takes the exponential form in Eq. 3:

$H = c\,e^{dM}$          (3)

where c and d are constants to be determined on the basis of the epicentres contained in the catalogue and M is the moment magnitude (refer Fig. 4). Using the epicentres within the study area, the kernel parameters are estimated as follows (a minimal sketch of this procedure is given after Fig. 4):
1. Events are classified into groups according to their magnitude.
2. The distance to the nearest epicentre within the same magnitude range is determined for each event.
3. The minimum distances calculated for each magnitude range are averaged.
4. A least-squares fit is conducted in order to obtain the two parameters c and d (0.6395 and 0.8498, respectively, in Fig. 4).

Figure 4: Magnitude-bandwidth relation used to estimate the parameters c and d [Woo, 1996]; the least-squares fit gives H = 0.6395 e^(0.8498 Mw) with R² = 0.8429.
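A minimal sketch of the bandwidth-fitting procedure (steps 1-4 above) is given below. It assumes a catalogue held as a list of dictionaries with mag/lat/lon keys, an illustrative 0.5-unit magnitude binning and a handful of fictitious epicentres; it is not the KERFRACT implementation.

```python
import math
import numpy as np

def epicentral_distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two epicentres (haversine)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2.0 * 6371.0 * math.asin(math.sqrt(a))

def fit_bandwidth(catalogue, bin_width=0.5):
    """Fit H = c * exp(d * M) to mean nearest-neighbour distances per magnitude bin."""
    mags = np.array([e["mag"] for e in catalogue])
    bins = np.floor(mags / bin_width) * bin_width
    bin_mags, mean_dists = [], []
    for b in np.unique(bins):
        group = [e for e, bb in zip(catalogue, bins) if bb == b]
        if len(group) < 2:
            continue
        # Step 2: nearest-neighbour epicentral distance within the magnitude bin
        nn = []
        for i, ei in enumerate(group):
            nn.append(min(epicentral_distance_km(ei["lat"], ei["lon"], ej["lat"], ej["lon"])
                          for j, ej in enumerate(group) if j != i))
        # Step 3: average the minimum distances for the bin
        bin_mags.append(b + bin_width / 2.0)
        mean_dists.append(np.mean(nn))
    # Step 4: least-squares fit of ln(H) = ln(c) + d * M
    d_coeff, ln_c = np.polyfit(bin_mags, np.log(mean_dists), 1)
    return math.exp(ln_c), d_coeff

# Toy usage with fictitious epicentres (not real catalogue entries)
toy_catalogue = [
    {"mag": 3.6, "lat": 12.1, "lon": 79.2}, {"mag": 3.7, "lat": 12.4, "lon": 79.6},
    {"mag": 4.1, "lat": 11.8, "lon": 78.9}, {"mag": 4.2, "lat": 13.0, "lon": 80.1},
    {"mag": 4.6, "lat": 12.7, "lon": 78.2}, {"mag": 4.7, "lat": 11.2, "lon": 79.9},
]
c, d = fit_bandwidth(toy_catalogue)
print(f"H(M) ~ {c:.3f} * exp({d:.3f} * M) km")
```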

By summing over all catalogued events, the cumulative activity rate density is computed for each magnitude class (Mw 4.0-4.1, etc.), ranging from the minimum magnitude to the highest value observed in the catalogue. The kernel function is used to permit different magnitudes to have different distance-dependencies in the equation which determines the contribution of each catalogued event to the activity rate of each point source. This accounts for the observation that smaller events exhibit a greater amount of spatial clustering than do larger events, suggesting that small events are more likely than large events to occur in places where they have occurred in the past.

Observational uncertainties in event magnitude and epicentral location in a region are also accounted for, in terms of the estimated scatter in the magnitude and distance measures for the historical and instrumental periods. With reference to the PSHA studies described in the following sections, in order to estimate the uncertainty in the magnitude measure, the period of occurrence of the earthquake is differentiated into historical and instrumental eras. For the instrumental era, the uncertainty of the earthquake magnitude estimation in the catalogue is estimated from the standard deviation of the specific magnitude measure (Ms, mb, Mw or ML) reported by different organizations. The uncertainty in the historical period is based on the standard deviation of a proposed intensity-magnitude relationship for Peninsular Indian earthquakes [Corigliano et al., 2009], which equals 0.3 (see Table 1).

Table 1: Standard deviations in the estimation of earthquake magnitude measures.
Magnitude measure      σM
Historical events      0.30
Ms                     0.20
mb                     0.23
Mw                     0.21
ML                     0.38

The lack of an adequate number of earthquake reports in a study area hinders the calculation of the uncertainty related to epicentre estimation, in the same way as for magnitude estimation. A few events in the catalogue were identified to calculate the standard deviation linked to epicentre estimation, which worked out to 12 km. For the historical period, a reasonable value of 30 km was assumed, given that the degree of uncertainty in determining the epicentres of historical events from felt reports is definitely larger. Once the activity rate of each magnitude increment is defined for each point source in the mesh, an attenuation equation is used and the hazard is calculated by summing over each point source, as in the Cornell-McGuire method. The kernel method was applied using the computer program KERFRACT [Woo, 1996], while the Cornell-McGuire method was performed using EZ-FRISK® 7.25 [Risk Engineering Inc.] and CRISIS2007 [Ordaz et al., 2007].
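A schematic sketch of the kernel activity-rate computation is given below, combining Eqs. 2 and 3 with n = 1.5 and the bandwidth constants of Fig. 4. The event-dictionary keys, the user-supplied distance function and the per-event obs_years weighting (intended to represent the effective observation period associated with each event) are illustrative assumptions; the actual computations of this study were carried out with KERFRACT.

```python
import math

# Kernel parameters following Eqs. 2 and 3; c and d are the fitted bandwidth
# constants quoted in Fig. 4, n is the power-law exponent used in this study.
N_EXP = 1.5
C_BW, D_BW = 0.6395, 0.8498

def bandwidth_km(m):
    """Magnitude-dependent smoothing bandwidth, Eq. 3."""
    return C_BW * math.exp(D_BW * m)

def kernel(m, r_km):
    """Kernel density of Eq. 2 evaluated at epicentral distance r (per km^2)."""
    h = bandwidth_km(m)
    return (N_EXP - 1.0) / (math.pi * h * h) * (1.0 + (r_km / h) ** 2) ** (-N_EXP)

def activity_rate_density(site, events, m_lo, m_hi, distance_fn):
    """
    Cumulative activity rate density (events per year per km^2) at `site` for the
    magnitude class [m_lo, m_hi), summing the kernel contributions of all catalogued
    events in that class, each divided by its effective observation period in years
    (an assumption standing in for the completeness treatment of the method).
    """
    rate = 0.0
    for ev in events:
        if m_lo <= ev["mag"] < m_hi:
            r = distance_fn(site, (ev["lat"], ev["lon"]))
            rate += kernel(ev["mag"], r) / ev["obs_years"]
    return rate
```

Multiplying the resulting density by the tributary area of each grid cell gives the activity rate assigned to that point source for the given magnitude class, which then enters the hazard summation in the same way as in the Cornell-McGuire method.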

3 DEMONSTRATIVE EXAMPLES

3.1 Single site

A new composite earthquake catalogue for Kancheepuram was compiled from different sources by including historical and instrumental events within a circular area of 250 km radius with Kancheepuram at its centre (12.83°N, 79.70°E), as shown in Fig. 5. The entire circular area was considered as one zone in the first seismogenic zoning scenario. A review of the literature on the regional seismotectonic and geological setting, besides the observed seismicity, led to the definition of a second seismogenic scenario comprising three different seismic zones: the Southern Granulite Terrain Craton (SZ1), the Transition Zone (SZ2), and the Cuddapah Basin (SZ3) (refer Fig. 6).


Figure 5: Seismicity around Kancheepuram from 1507 to 2008 A.D. (with foreshocks and aftershocks).

Epistemic uncertainties in the hazard computations were accounted for within a logic-tree framework, the controlling parameters being the PSHA algorithm (Cornell-McGuire and zone-free methods), the seismogenic zoning scenarios, the completeness period estimation methods, the GMPE and the maximum magnitude (see Fig. 7).

[Figure 7 shows the logic tree, with 35 branches in total and the following weights: PSHA algorithm - Cornell-McGuire (0.60), Woo (0.40); seismogenic zoning scenario - Scenario I (0.50), Scenario II (0.50); completeness analysis method - CUVI (0.60), Stepp (0.40); maximum magnitude - Mmax (0.50), Mmax +0.3 (0.50); GMPE - ADSS05, RKI07, CB08, AS97 (0.25 each) on the Cornell-McGuire branches and RKI07, CB08, AS97 (0.33 each) on the zone-free branches.]

Figure 7: Parameters and weighting factors adopted in the logic-tree for the horizontal ground motion component.

Seismic hazard curves and uniform hazard spectra obtained from the Cornell-McGuire and kernel estimation methods are compared subsequently. Fig. 8 compares the hazard curves computed by the zone-free method to those by the Cornell-McGuire approach using seismogenic zoning scenario 1 (circular area), while the Cornell-McGuire approach in Fig. 9 uses seismogenic zoning scenario 2 (three source zones). The GMPE developed from a seismological model of Peninsular India [Raghu Kanth and Iyengar, 2007] has been used in these comparisons. The hazard curve from the zone-free approach is clearly an upper bound for longer return periods (i.e. for lower probabilities of exceedance).

Figure 6: The second seismogenic zone scenario composed of three different seismic source zones.


Figure 8. Comparison between the hazard curves from the CM and the zone-free approaches using attenuation equation RKI07 (CM with seismogenic zoning scenario 1).

The expected PGA values at Kancheepuram from the zone-free approach, compared to those obtained by applying the Cornell-McGuire method for reference return periods of 95, 475, 975 and 2475 years (see Table 2), are consistently, though marginally, higher.


Figure 9. Comparison between the hazard curves from the CM and the zone-free approaches using attenuation equation RKI07 (CM with seismogenic zoning scenario 2).

Comparison of the branches of the logic tree for the two different approaches (weighted combination over all GMPE, all maximum magnitudes and completeness analysis methods) shows that the zone-free method provides consistently higher probabilities of exceedance for the same level of PGA (refer Fig. 10).
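The weighted combination over logic-tree branches referred to here can be sketched as follows; the branch labels, weights and exceedance rates are purely illustrative placeholders and do not reproduce the 35 branches of Fig. 7.

```python
import numpy as np

def weighted_mean_curve(branches):
    """
    Combine logic-tree branches into a mean hazard curve.
    `branches` maps a branch label to (weight, annual exceedance rates), all rates
    being defined at the same PGA levels; the weights must sum to 1.
    """
    weights = np.array([w for w, _ in branches.values()])
    curves = np.array([rates for _, rates in branches.values()])
    return weights @ curves

def pga_at_return_period(pga_levels, rates, tr_years):
    """Interpolate (in log-log space) the PGA whose annual exceedance rate is 1/Tr."""
    return float(np.exp(np.interp(np.log(1.0 / tr_years),
                                  np.log(rates[::-1]), np.log(pga_levels[::-1]))))

# Toy usage with two fictitious branches (weights and rates are placeholders)
pga_levels = np.array([0.02, 0.05, 0.1, 0.2, 0.4])
branches = {
    "CM":  (0.6, np.array([2.0e-2, 6.0e-3, 1.5e-3, 3.0e-4, 4.0e-5])),
    "Woo": (0.4, np.array([2.5e-2, 7.0e-3, 2.0e-3, 4.0e-4, 6.0e-5])),
}
mean_rates = weighted_mean_curve(branches)
print(f"475-yr PGA ~ {pga_at_return_period(pga_levels, mean_rates, 475):.3f} g")
```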

Figure 10. Comparison between the hazard curves from the CM and the zone-free (KERFRACT) approaches.

Figure 11. Comparison between the uniform hazard spectra from the CM and the zone-free approaches (475-year Tr).

A similar observation holds for the uniform hazard spectra (UHS) for the 475-year and 2475-year return periods (refer Fig. 11 and Fig. 12), and the spectral shapes are comparable.

Table 2. Comparison of weighted average PGA from the CM and ZF branches of the logic-tree for Kancheepuram.
Probability of exceedance in 50 yrs.    Mean PGA (g), Zone-free    Mean PGA (g), Cornell-McGuire
40% (Tr = 95 yrs.)                      0.047                      0.037
10% (Tr = 475 yrs.)                     0.083                      0.077
5% (Tr = 975 yrs.)                      0.113                      0.100
2% (Tr = 2475 yrs.)                     0.150                      0.136
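For reference, the return periods listed in Table 2 follow from the Poisson relation between a probability of exceedance p in an exposure time t and the return period Tr; for example, for p = 10% in t = 50 years:

$T_r = -\dfrac{t}{\ln(1 - p)} = -\dfrac{50}{\ln(1 - 0.10)} \approx 475 \ \text{years}$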


Figure 12. Comparison between the uniform hazard spectra from the CM and the zone-free approaches (2475-year Tr).

3.2 Grid of sites

In continuation of the above study, probabilistic seismic hazard maps were produced for the South Indian State of Tamil Nadu, including the Union Territory of Pondicherry. The PSHA computations were performed using the Cornell-McGuire and zone-free methods within the State boundary (8.0°-13.6°N; 76.2°-80.4°E) at a grid interval of 0.2°, approximately 20 km. Seismic hazard contour maps were produced for the horizontal component of ground motion for different structural periods (PGA, and spectral acceleration at 0.1, 0.5 and 1 s) and return periods of 475, 975 and 2475 years, for stiff site and level ground conditions. Based on the literature, eleven zones were identified as potential seismogenic zones for the study (see Fig. 13). A composite catalogue spanning 946 years, from 1063 to 2008 A.D., incorporating 2060 earthquakes with Mw ≥ 3.5 (refer Fig. 13), was compiled. Epicentres within a reduced geographical area delineated by the seismogenic zones were used for the hazard computations. The total number of epicentres lying within the 11 zones before the declustering process was 451, of which 388 remained after the removal of foreshocks and aftershocks (86% main events). The largest instrumental event near Tamil Nadu was the Mw 5.5 Pondicherry earthquake (2001) and the largest historical earthquake was a M 6.0 (Mw 5.7) event near Coimbatore in western Tamil Nadu (1900).
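As a minimal sketch of how such maps are assembled, the snippet below generates the 0.2° computational grid over the State boundary and loops a per-site hazard computation; hazard_475yr_pga is a hypothetical placeholder standing in for the full logic-tree PSHA run at each node.

```python
import numpy as np

# Grid of sites at 0.2 deg spacing (~20 km) over the study region described in the text
lats = np.arange(8.0, 13.6 + 1e-9, 0.2)
lons = np.arange(76.2, 80.4 + 1e-9, 0.2)

def hazard_475yr_pga(lat, lon):
    """Placeholder: return the 475-year PGA (g) at (lat, lon) from a full PSHA run."""
    return float("nan")  # substitute with the actual logic-tree hazard computation

grid_pga = np.full((lats.size, lons.size), np.nan)
for i, lat in enumerate(lats):
    for j, lon in enumerate(lons):
        grid_pga[i, j] = hazard_475yr_pga(lat, lon)  # one hazard value per grid node
# grid_pga can then be contoured to produce maps such as those in Figs. 15 and 16
```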

The logic-tree adopted for the PSHA computations, incorporating different PSHA algorithms, completeness analysis methods, maximum magnitudes and GMPE, is shown in Fig. 14. Figs. 15 and 16 show the probabilistic seismic hazard contour maps of Tamil Nadu for the 475-year return period, produced using the Cornell-McGuire and the zone-free approaches, respectively. The hazard map produced through the zoning approach, in Fig. 15, shows the influence of the delineated area sources on the computed hazard. In particular, the effect of the boundaries between different seismogenic sources, and of the associated uniform seismicity rates from the G-R frequency-magnitude recurrence relationships, is evident. Marginal changes in the boundaries could easily affect the seismicity rates by altering the area over which they are spread, thereby altering the computed hazard.

Figure 13: Seismicity in southern Peninsular India from 1063 to 2008 A.D. and the eleven seismogenic zones identified for the Cornell-McGuire approach to PSHA.

[Figure 14 shows the logic tree, with 18 branches in total and the following weights: PSHA algorithm - Cornell-McGuire (0.50), Woo (0.50); completeness analysis method - CUVI (0.60), Stepp (0.40); maximum magnitude - Mmax (0.50), Mmax +0.3 (0.50); GMPE - RKI07 (0.33), CB08 (0.33), AS97 (0.33).]

Figure 14: Parameters and weighting factors adopted in the logic-tree for the horizontal ground motion component.

Figure 15. Probabilistic seismic hazard contours for Tamil Nadu for the 475-year return period using the Cornell-McGuire zoning approach.

On the other hand, the seismic hazard contour map produced using the zone-free method apparently does justice to the observed seismicity in the study region. The regions of relatively high seismic hazard (see Fig. 16) are completely congruent with areas of high seismic activity (see the north-eastern and north-western parts of Tamil Nadu in Fig. 13). Similarly, the low seismic hazard levels in the southern parts of the State reflect the sparseness of observed seismicity in the area.


Figure 16. Probabilistic seismic hazard contours for Tamil Nadu for the 475-year return period using the zone-free approach.

Interestingly, the effect of seismogenic zone SZ6, lying west of Tamil Nadu State, on south-western Tamil Nadu (characterised by SZ7, of relatively low seismicity) in the hazard map produced using the Cornell-McGuire approach is revealing. The extensive SZ6 is characterised by relatively high seismicity (b = 0.67, a = 2.38), especially due to a concentration of moderate earthquakes in its north-eastern (M 6.0 Coimbatore earthquake, 1900) and central areas (four events of M > 4.0 around Palghat and Idukki). The southern area of SZ6 has witnessed only a few small earthquakes. However, the uniform seismicity rate attributed to SZ6 introduces a bias in the seismic hazard of the south-western regions of Tamil Nadu. In fact, the hazard level here is between 0.08 and 0.1 g (475-year return period) according to the Cornell-McGuire approach, whereas very few small earthquakes can be seen in this region (see Fig. 13). In contrast, the hazard map produced by the zone-free approach attributes a hazard level between 0.028 and 0.069 g, which is compatible with the seismic activity regime observed here. This example highlights the bias that can be introduced by the highly subjective issue of seismogenic zoning intrinsic to the Cornell-McGuire approach.

4 DISCUSSION AND CONCLUSIONS

Molina et al. [2001] compared the kernel estimation method with the Cornell-McGuire approach, using synthetic and real data for PSHA in Norway and Spain, demonstrating the weakness of the assumption of near-uniform activity as a representation of the true activity rate, inherent to the Cornell-McGuire method. They conclude that the kernel estimation method is directly linked to the historical record (hence, empirical), that there is a higher degree of transparency in the model and results, and that the use of the historical catalogue without background seismicity should provide lower-bound results, and they prescribe the Cornell-McGuire approach for regions with a poor or short historical record of earthquakes. Beauval et al. [2006] suggest that the epicentre smoothing method could be used as a lower-bound estimator for seismic hazard analysis, in particular for regions of low seismicity; combining the two models, together with deterministic approaches, is recommended. Bommer et al. [1998] show that the reliability of the results obtained using the zone-free method depends heavily on the quality of the seismic catalogue and point to the need for hybrid approaches (probabilistic and deterministic) to deal with specific situations of seismic hazard.

In the current study, where a comparison of the zoning and zone-free approaches has been carried out for a single site and a grid of sites, the PSHA results are clearly different, especially at the grid of sites. The composite earthquake catalogue compiled for the current research, though relatively extensive in time (~500 years), may not be completely reliable, as several events, especially in the historic period, have not been individually examined for magnitude measure, location, etc. Though Woo [1996] clearly states that a shortage of regional data is a detriment to the quality of seismic source modelling, he indicates that data sparseness may not be excessively restrictive on the applicability of the method (illustrated for Britain). Hence, in the current case, the zone-free approach is not used as the only option for the PSHA computations, but is integrated within a logic-tree framework along with the classical Cornell-McGuire approach, which seems appropriate for an area of low seismicity with limited geological and seismotectonic information. In the opinion of the authors, the use of the zone-free method within a logic-tree framework along with the Cornell-McGuire approach is indeed an effective way of tackling PSHA, by assigning due importance to the historical record of observed seismicity and, simultaneously, to any geological and tectonic information available to develop reasonable seismogenic zoning scenarios. Under this perspective, in a region where linking geological and tectonic data to observed seismicity is difficult (largely due to the brevity of the temporal window of observation of earthquake records), such a hybrid approach seems a natural choice.

Deaggregation of the PSHA results, in order to identify the controlling earthquake at the site of interest, is apparently an essential indicator of the divergences between the zoning and zone-free approaches, as reiterated by different studies (e.g. Molina et al., 2001; Beauval et al., 2006). Deaggregation analysis of the PSHA results obtained by the zone-free method has not been carried out in the current study. Recognising the potential of this tool to enhance the quality of the results of a PSHA, deaggregation analysis will be carried out in the future.

ACKNOWLEDGEMENTS
The seismic hazard assessments at the archaeological site of Kancheepuram and for the State of Tamil Nadu were part of the activities carried out within the framework of a two-year research project, funded by the Italian Ministry of Foreign Affairs and the Indian Department of Science and Technology, on the seismic risk assessment of the historical urban nucleus and monumental structures at Kancheepuram in Tamil Nadu in Southern India. The authors would like to express their gratitude to these Governmental Institutions for their support.

REFERENCES
Bommer, J.J., Queen, C.M., Salazar, W., Scott, S., and Woo, G., 1998. A case study of the spatial distribution of seismic hazard (El Salvador), Natural Hazards, 18, 145-166.
Bommer, J.J., 2002. Deterministic vs. probabilistic seismic hazard assessment: An exaggerated and obstructive dichotomy, Journal of Earthquake Engineering, 6, Sp. Issue 1, 43-73.
Beauval, C., Hainzl, S. and Scherbaum, F., 2006. Probabilistic seismic hazard estimation in low-seismicity regions considering non-Poissonian seismic occurrence, International Journal of Geophysics, 164, 543-550.
Corigliano, M., Lai, C.G., Menon, A. and Ornthammarath, T., 2009. Seismic hazard at the archaeological site of Kancheepuram in Southern India, submitted to the Journal of Earthquake Engineering (11/12/2008, under review).
Cornell, C.A., 1968. Engineering seismic risk analysis, Bulletin of the Seismological Society of America, 58, 1583-1606.
Gardner, J.K. and Knopoff, L., 1974. Is the sequence of earthquakes in southern California, with aftershocks removed, Poissonian?, Bulletin of the Seismological Society of America, 64(5), 1363-1367.
Gutenberg, B. and Richter, C.F., 1956. Earthquake magnitude, intensity, energy and acceleration, Bulletin of the Seismological Society of America, 46, 105-145.
Iyengar, R.N., Sharma, D. and Siddiqui, J.M., 1999. Earthquake history of India in medieval times, Indian Journal of History of Science, 34(3), 181-237.
Kramer, S.L., 1996. Geotechnical earthquake engineering, Prentice Hall, Upper Saddle River, N.J., 653 pp.
McGuire, R.K., 1976. FORTRAN computer program for seismic risk analysis, U.S.G.S. Open-File Report 76-67, Denver, 90 pp.
McGuire, R.K., 1978. FRISK: computer program for seismic risk analysis using faults as earthquake sources, U.S.G.S. Open-File Report 78-1007, Denver, 71 pp.
McGuire, R.K., 1995. Probabilistic seismic hazard analysis and design earthquakes: Closing the loop, Bulletin of the Seismological Society of America, 85(5), 1275-1284.
McGuire, R.K., 2001. Deterministic vs. probabilistic earthquake hazard and risks, Soil Dynamics and Earthquake Engineering, 21, 377-384.
Menon, A., Ornthammarath, T., Corigliano, M. and Lai, C.G., 2009. Probabilistic seismic hazard macrozonation of Tamil Nadu in Southern India, submitted to the Bulletin of the Seismological Society of America (13/03/2009, under review).
Molina, S., Lindholm, C.D. and Bungum, H., 2001. Probabilistic seismic hazard analysis: Zoning free versus zoning methodology, Bollettino di Geofisica Teorica ed Applicata, 42(1-2), 19-39.
Ordaz, M., Aguilar, A. and Arboleda, J., 2007. CRISIS2007, Version 1.1: Program for computing seismic hazard, Instituto de Ingenieria, UNAM, Mexico.
Raghu Kanth, S.T.G. and Iyengar, R.N., 2007. Estimation of seismic spectral acceleration in Peninsular India, Journal of Earth System Science (Indian Academy of Sciences), 116(3), 199-214.
Stepp, J.C., 1973. Analysis of completeness of the earthquake sample in the Puget Sound area, in "Seismic Zoning", ed. S.T. Harding, NOAA Tech. Report ERL 267-ESL30, Boulder, Colorado.
Tinti, S. and Mulargia, F., 1985. Completeness analysis of a seismic catalog, Annales Geophysicae, 3, 407-414.
Woo, G., 1996. Kernel estimation methods for seismic hazard area source modeling, Bulletin of the Seismological Society of America, 86(2), 353-362.