Overview of the Working Group for the Development of Regional Earthquake Likelihood Models (RELM)

Edward H. Field

Published January 1, 2007, SCEC Contribution #10907

Seismic hazard analysis (SHA) requires two different types of models: (1) an earthquake rupture forecast, which gives the probability of all possible earthquake ruptures of concern throughout the region over a given time span; and (2) a ground-motion model that provides an estimate of shaking at a site for each earthquake rupture. This special issue of Seismological Research Letters (SRL) presents a variety of the first type—earthquake rupture forecasts. But let's begin with some history.

The 30-year, time-dependent forecast published by the 1995 Working Group on California Earthquake Probabilities (WGCEP 1995), also known as the Phase-2 report of the Southern California Earthquake Center (SCEC), predicted twice as many magnitude 6.5 to 7.0 earthquakes as had been observed historically. This apparent earthquake deficit was not good news for an insurance industry still licking its wounds from the 1994 M 6.7 Northridge earthquake. In fact, the report presumably contributed to the “insurance availability crisis” where, faced with a legal requirement to offer an earthquake option with any homeowners policy, 93 percent of companies doing business in California decided to restrict or halt coverage altogether (http://www.earthquakeauthority.com/CEAFactSheet.htm).

In the lively debate that ensued over the earthquake deficit problem, it's interesting to note that the first shot was fired by the primary author of the report itself (Jackson 1996), and that another author of the report fired back with an opposing view (Schwartz 1996). This exchange served to highlight an even more fundamental problem—that there is no agreement on how to build a time-dependent earthquake forecast, and it is therefore impossible to define a single, consensus model.

Faced with this reality, SCEC decided to take a different approach by forming a southern California working group for the development of Regional Earthquake Likelihood Models (RELM). Rather than trying to build a single, consensus model, working group participants were encouraged to join with like-minded individuals and build whatever model they saw fit. The hope was that this free-market approach would spur healthy competition and avoid forcing consensus where none exists.

Developing multiple models when we lack agreement is also important for defining uncertainties in our hazard estimates. That is, basing a seismic hazard analysis on a single model is akin to estimating an unknown probability distribution from a single sample—you really don't know how reliable it is until you have other samples. Therefore, and as discussed at length by the Senior Seismic Hazard Analysis Committee (SSHAC 1997), proper seismic hazard analysis demands that all viable models be considered in the analysis (or, more practically, that a minimum number of models that span the range of viability and importance be included). Furthermore, applying multiple models will avoid dramatic changes in hazard estimates—something users understandably loathe. Specifically, if you start with one model and add another, your "best estimate" (say, the median hazard) will change more dramatically than if you start with many models and go through a gradual process of elimination. Of course, building multiple models is nontrivial, and doing so usually, and ironically, reveals more uncertainty than we originally thought we had. There is also the issue of our not knowing what we don't know (i.e., viable models that no one has yet identified). Nevertheless, the hope has been that the RELM approach would be an important step toward providing additional alternative models.
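The stability argument above can be sketched numerically. In this minimal illustration, all hazard values are hypothetical placeholders (they do not come from any actual RELM model): adding a new, outlying model to an ensemble of one shifts the median "best estimate" far more than adding it to an ensemble of many.

```python
# Hedged sketch: why a median hazard estimate is more stable when it is
# built from many models rather than one. All numbers are hypothetical
# hazard values (e.g., 30-year exceedance probabilities), not real model output.
from statistics import median

new_model = 0.40  # hypothetical hazard estimate from a newly added model

# Case 1: start with a single model, then add the new one.
single = [0.10]
shift_single = abs(median(single + [new_model]) - median(single))

# Case 2: start with many models spanning a similar range.
many = [0.10, 0.12, 0.15, 0.18, 0.20, 0.22, 0.25]
shift_many = abs(median(many + [new_model]) - median(many))

print(round(shift_single, 2))  # 0.15 -- a large jump in the "best estimate"
print(round(shift_many, 2))    # 0.01 -- barely moves
```

The single-model median jumps from 0.10 to 0.25, while the many-model median barely moves, which is the gradual-elimination behavior the text describes.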

Another hope has been that comparisons of RELM models would reveal what types of scientific studies are needed to resolve the differences (and thereby reduce hazard uncertainties) as well as identify what classes of models are exportable to regions where the options are fewer due to data limitations. Perhaps the most important aspect of the RELM effort has been to establish formal tests of the models against existing and future observations. Even if these tests do not provide conclusive results anytime soon, one must start sometime and somewhere, and doing so should at least reveal what it will take to make definitive judgments.

This special issue of SRL presents the first-generation RELM forecasts. Twelve papers here give a brief description of the 18 different models, with more details given in either references or cited URLs. There is also a paper outlining a general testing methodology (Schorlemmer et al. 2007, this issue) and another paper on the test center where the RELM models have been submitted and locked down (Schorlemmer and Gerstenberger 2007, this issue).

The goal of this overview is not to answer all the questions posed in this introduction (such as the uncertainty of current hazard estimates), nor to pass judgment on each model (even though virtually every paper justifies assumptions using declarative statements that others would argue with). In fact, both RELM and SRL have been fairly liberal in accommodating models as long as they are formally submitted and adequately documented. Those interested in a review of time-dependent earthquake-forecasting methodologies will not find one here, as others have recently filled this niche (e.g., Steacy et al. 2005, and references therein). What this paper does provide is a brief overview of the project, the models submitted, and the plans for the formal tests. Further consideration of the relative merits and hazard implications of the various models will presumably be published in the years to come.

Citation
Field, E. H. (2007). Overview of the Working Group for the Development of Regional Earthquake Likelihood Models (RELM). Seismological Research Letters, 78(1), 7-16. doi: 10.1785/gssrl.78.1.7.