Editor: Jeanne Hardebeck
Authors: Stephan Husen & Jeanne Hardebeck
Abstract: Earthquake location catalogs are not an exact representation of the true earthquake locations. They contain random errors, for example from errors in the arrival time picks, as well as systematic biases. The most important source of systematic error in earthquake locations is the inherent coupling of earthquake locations to the seismic velocity structure of the Earth. Random errors may be accounted for in formal uncertainty estimates, but systematic biases are not, and must be considered based on knowledge of how the earthquakes were located. In this chapter we discuss earthquake location methods, methods for estimating formal uncertainties, and systematic biases in earthquake location catalogs, and we give readers guidance on how to identify good-quality earthquake locations.
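To make the coupling between locations and the assumed velocity structure concrete, the following is a minimal sketch of locating one event by grid search over candidate epicenters, minimizing the RMS of arrival-time residuals under an assumed constant P velocity. The station geometry, pick noise, and velocity value are hypothetical and not taken from the chapter.

```python
# Minimal grid-search epicenter location sketch (hypothetical stations, picks,
# and a constant-velocity model; a sketch, not the chapter's own method).
import numpy as np

rng = np.random.default_rng(1)
stations = np.array([[0.0, 0.0], [30.0, 5.0], [10.0, 40.0], [-20.0, 25.0]])  # km
v_p = 6.0                                   # assumed constant P velocity (km/s)
true_xy, true_t0 = np.array([12.0, 18.0]), 0.0

# Synthetic P picks: travel time from the true epicenter plus 0.05 s pick noise
arrivals = true_t0 + np.hypot(*(stations - true_xy).T) / v_p + rng.normal(0, 0.05, 4)

# Grid search over candidate epicenters; at each node solve for the origin time
# that best fits the picks and keep the node with the lowest RMS residual.
best = (None, np.inf)
for x in np.arange(-50.0, 51.0, 1.0):
    for y in np.arange(-50.0, 51.0, 1.0):
        travel = np.hypot(stations[:, 0] - x, stations[:, 1] - y) / v_p
        t0 = np.mean(arrivals - travel)
        rms = np.sqrt(np.mean((arrivals - (t0 + travel)) ** 2))
        if rms < best[1]:
            best = ((x, y, t0), rms)

print("best-fit (x, y, t0) and RMS:", best)
```

Rerunning the search with a perturbed v_p shifts the best-fit epicenter; that is the systematic, velocity-model-driven bias the abstract distinguishes from the random pick error reflected in the residual RMS.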
Authors: Arnaud Mignan and Jochen Woessner
Abstract: Assessing the magnitude of completeness Mc of instrumental earthquake catalogs is an essential and compulsory step for any seismicity analysis. Mc is defined as the lowest magnitude at which all earthquakes in a space-time volume are detected. A correct estimate of Mc is crucial: a value that is too high leads to under-sampling by discarding usable data, while a value that is too low leads to erroneous seismicity parameter values and thus to a biased analysis, because incomplete data are used. In this article, we describe peer-reviewed techniques to estimate and map Mc. We provide examples with real and synthetic earthquake catalogs to illustrate features of the various methods and give the pros and cons of each method. With this article at hand, the reader will get an overview of approaches to assess Mc, understand why Mc evaluation is an essential and non-trivial task, and hopefully be able to select the most appropriate Mc method to include in their seismicity studies.
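As an illustration of one of the simpler estimation techniques, the sketch below applies a maximum-curvature style estimate, taking Mc as the magnitude bin holding the most events in the frequency-magnitude distribution, to a synthetic Gutenberg-Richter catalog whose small events have been thinned to mimic incomplete detection. The synthetic parameters are arbitrary and purely illustrative.

```python
# Maximum-curvature estimate of Mc on a synthetic catalog (illustrative only;
# the article discusses several methods, of which this is among the simplest).
import numpy as np

rng = np.random.default_rng(0)
# Synthetic Gutenberg-Richter magnitudes (b = 1); detection probability ramps
# up between M 1.2 and M 1.5 to mimic incomplete reporting of small events.
mags = rng.exponential(scale=1.0 / np.log(10), size=20000)
detected = rng.random(mags.size) < np.clip((mags - 1.2) / 0.3, 0.02, 1.0)
mags = np.round(mags[detected], 1)

def mc_maxc(magnitudes, bin_width=0.1):
    """Maximum-curvature estimate: Mc = magnitude bin holding the most events."""
    bins = np.arange(magnitudes.min(), magnitudes.max() + bin_width, bin_width)
    counts, edges = np.histogram(magnitudes, bins=bins)
    return edges[np.argmax(counts)]

print("Estimated Mc:", round(mc_maxc(mags), 1))
```

Maximum curvature is known to underestimate Mc when the frequency-magnitude distribution rolls off gradually, which is one reason for comparing several methods and weighing their pros and cons as the article does.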
Authors: Laura Gulia, Stefan Wiemer, and Max Wyss
Abstract: Man-made contamination and heterogeneity of reporting are present in all earthquake catalogs. Often they are quite strong and introduce errors into statistical analyses of the seismicity. We discuss three types of artifacts in this chapter: the presence of reported events that are not earthquakes but explosions; heterogeneity in the resolution of small events as a function of space and time; and inadvertent changes of the magnitude scale. These problems must be identified, mapped, and excluded from the catalog before any meaningful statistical analysis can be performed. Explosions can be identified by comparing the rate of daytime to nighttime events, because quarries and road construction operate only during the day and often at specific hours.
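One simple way to turn this day-to-night comparison into a number is sketched below: count events during assumed working hours versus night hours and form the ratio of the hourly rates. The column name, the working-hour definition, and the interpretation are assumptions made for illustration.

```python
# Day-to-night event-rate ratio as a crude explosion-contamination check
# (hypothetical catalog column and working hours; a sketch, not the chapter's code).
import pandas as pd

def day_night_ratio(catalog, day_hours=range(8, 17)):
    """Ratio of the hourly event rate during working hours to the nighttime rate."""
    hours = pd.to_datetime(catalog["origin_time"]).dt.hour
    is_day = hours.isin(day_hours)
    n_day_hours = len(list(day_hours))
    day_rate = is_day.sum() / n_day_hours
    night_rate = (~is_day).sum() / (24 - n_day_hours)
    return day_rate / night_rate if night_rate > 0 else float("inf")

# Hypothetical mini-catalog; a ratio well above 1 suggests quarry blasts or
# other man-made events concentrated in working hours.
catalog = pd.DataFrame({"origin_time": ["2020-01-01 10:30:00", "2020-01-01 11:00:00",
                                        "2020-01-02 09:45:00", "2020-01-02 23:10:00"]})
print("day/night rate ratio:", round(day_night_ratio(catalog), 2))
```

In practice such a ratio would typically be evaluated per grid cell or per cluster so that quarry sites stand out spatially rather than being averaged over the whole catalog.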
Spatial heterogeneity in the reporting of small events arises because many stations record small earthquakes that occur near the center of a seismograph network, but only relatively large ones can be located outside the network, for example offshore. To deal with this problem, the minimum magnitude of complete reporting, Mc, has to be mapped. Based on the map of Mc, one needs to define the area and the corresponding Mc, a choice that leads to a homogeneous catalog. There are two strategies for selecting an Mc and its corresponding area of validity: if one wishes to work with the maximum number of earthquakes per area for statistical power of resolution, one needs to eliminate from consideration areas of inferior reporting and use a small Mc(inside), appropriate for the interior of the network. However, if one wishes to include areas outside the network, such as offshore areas, then one has to cull the catalog by deleting all small events from the core of the network and accept only earthquakes with magnitude larger than Mc(outside). In this case, one pays with a loss of statistical power for the advantage of covering a larger area.
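A minimal sketch of the Mc mapping step, assuming a maximum-curvature estimator and an arbitrary grid spacing and sample size per node, might look like the following; none of these choices come from the chapter itself.

```python
# Sketch of mapping Mc on a grid (placeholder grid spacing, sample size per
# node, and estimator; not the chapter's own implementation).
import numpy as np

def mc_maxc(mags, bw=0.1):
    """Maximum-curvature Mc: the magnitude bin holding the most events."""
    counts, edges = np.histogram(mags, bins=np.arange(mags.min(), mags.max() + bw, bw))
    return edges[np.argmax(counts)]

def map_mc(lons, lats, mags, grid_step=0.1, n_nearest=200):
    """Estimate Mc at each grid node from the n_nearest surrounding events."""
    nodes = []
    for glon in np.arange(lons.min(), lons.max() + grid_step, grid_step):
        for glat in np.arange(lats.min(), lats.max() + grid_step, grid_step):
            nearest = np.argsort(np.hypot(lons - glon, lats - glat))[:n_nearest]
            nodes.append((glon, glat, mc_maxc(mags[nearest])))
    return np.array(nodes)

# A homogeneous subset is then the set of events with M >= Mc chosen for the
# target area, e.g. Mc(inside) for the network interior or Mc(outside) offshore.
```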
As a function of time, changes in hardware, software, and reporting procedures bring about two types of changes in the catalog. (1) The reporting of small earthquakes improves with time because seismograph stations are added or detection procedures are improved. (2) The magnitude scale is inadvertently changed due to changes in hardware, software, or the analysis routine. The first problem is dealt with by calculating the mean Mc as a function of time in the area chosen for analysis. This will usually identify downward steps of Mc (better resolution with time) at fairly discrete times. Once these steps are identified, one is faced with choosing a homogeneous catalog that covers a long period, but with a relatively large Mc(long time). This way one gains coverage of time, but pays with a loss of statistical power because small events, which are completely reported during recent times, have to be eliminated. On the other hand, if one wishes to work with a small Mc(recent), then one must exclude the older parts of the catalog in which Mc(old) is high.
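The temporal check described above, a mean Mc computed in successive windows to reveal downward steps, can be sketched as follows; the window size and the estimator are again placeholder choices rather than the chapter's prescriptions.

```python
# Mc in consecutive time windows (placeholder window size and estimator) to
# reveal downward steps in completeness when the network or processing improves.
import numpy as np

def mc_maxc(mags, bw=0.1):
    """Maximum-curvature Mc: the magnitude bin holding the most events."""
    counts, edges = np.histogram(mags, bins=np.arange(mags.min(), mags.max() + bw, bw))
    return edges[np.argmax(counts)]

def mc_vs_time(times, mags, n_per_window=500):
    """Estimate Mc in consecutive windows of n_per_window events each."""
    order = np.argsort(times)
    times, mags = np.asarray(times)[order], np.asarray(mags)[order]
    steps = []
    for start in range(0, len(mags) - n_per_window + 1, n_per_window):
        sel = slice(start, start + n_per_window)
        steps.append((times[sel][0], mc_maxc(mags[sel])))
    return steps  # (window start time, Mc); drops in Mc mark reporting upgrades
```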
To define the magnitude scale in a local or regional area in such a way that it corresponds to an international standard is not trivial, nor is it trivial to keep the scale constant as a function of time, when hardware, software, and reporting procedures keep changing. Resulting changes are more prominent in societies characterized by high intellectual mobility, and may not be found in totalitarian societies, where observatory procedures are adhered to with military precision. There are two types of changes: simple magnitude shifts and stretches (or compressions) of the scale. Here, we show how to identify changes of the magnitude scale and how to correct for them, such that the catalog approaches better homogeneity, a necessity for statistical analysis.
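If an overlapping set of reference magnitudes is available, for example from a neighboring network, a shift and a stretch between the two scales can be quantified with a simple linear fit M_local = a * M_ref + b, where a departure of a from 1 indicates a stretch or compression and b indicates a shift. The numbers below are hypothetical, and this fit is only one possible way to implement the kind of correction the chapter describes.

```python
# Detecting a magnitude shift (b) and stretch (a) against reference magnitudes
# via a linear fit M_local = a * M_ref + b (hypothetical data for illustration).
import numpy as np

m_ref = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0])    # reference magnitudes
m_local = np.array([1.7, 2.3, 2.9, 3.5, 4.1, 4.7, 5.3])  # local catalog values

a, b = np.polyfit(m_ref, m_local, 1)
print(f"stretch a = {a:.2f}, shift b = {b:.2f}")

# Correcting the local magnitudes back onto the reference scale
m_corrected = (m_local - b) / a
```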
Authors: Jochen Woessner, Jeanne L. Hardebeck, and Egill Hauksson
Abstract: Seismicity catalogs are one of the basic products that an agency running a seismic network provides, and they are the starting point for most studies related to seismicity. A seismicity catalog is a parametric description of earthquakes, with each entry describing one earthquake, for example by its location, origin time, and magnitude. At first glance, this seems to be an easy data set to understand and use. In reality, each seismicity catalog is the product of complex procedures that start with the configuration of the seismic network, the selection of sensors and software to process data, and the selection of a location procedure and a magnitude scale. The human-selected computational tools and defined processing steps, combined with the spatial and temporal heterogeneity of the seismic network and the seismicity, make seismicity catalogs a heterogeneous data set with as many natural as human-induced obstacles. This article is intended to provide essential background on how instrumental seismicity catalogs are generated, and it focuses on providing insights into the high value as well as the limitations of such data sets.
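As a concrete picture of such a parametric description, a minimal in-memory representation of catalog entries and a loader for a hypothetical CSV export might look like this; the column names are assumptions, since every agency defines its own format.

```python
# Minimal representation of a seismicity catalog entry and a loader for a
# hypothetical CSV export (column names are assumed; agency formats differ).
from dataclasses import dataclass
import pandas as pd

@dataclass
class Event:
    origin_time: pd.Timestamp
    latitude: float
    longitude: float
    depth_km: float
    magnitude: float
    magnitude_type: str   # e.g. "ML", "Mw"; scales are not interchangeable

def load_catalog(path):
    df = pd.read_csv(path, parse_dates=["origin_time"])
    return [Event(r.origin_time, r.latitude, r.longitude,
                  r.depth_km, r.magnitude, r.magnitude_type)
            for r in df.itertuples(index=False)]
```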