=== Trends ===
Instead of watching for anomalous phenomena that might be precursory signs of an impending earthquake, other approaches to predicting earthquakes look for trends or patterns that lead up to an earthquake. As these trends may be complex and involve many variables, advanced statistical techniques are often needed to understand them; for this reason these approaches are sometimes called statistical methods. They also tend to be more probabilistic and to deal with longer time periods, and so merge into earthquake forecasting.{{Citation needed|date=March 2020|reason=for 'more probabilistic', for 'larger periods' and for 'earthquake forecasting' adaptation better than earthquake prediction}}

==== Nowcasting ====
{{Other uses|Nowcasting (disambiguation){{!}}Nowcasting}}
Earthquake ''nowcasting'', suggested in 2016,<ref name=":7">{{Harvnb|Rundle|Turcotte|Donnellan|Ludwig|2016}}</ref><ref>{{Harvnb|Rundle|Giguere|Turcotte|Crutchfield|2019}}</ref> is the estimate of the current dynamic state of a seismological system, based on [[natural time analysis|natural time]], introduced in 2001.<ref>{{Harvnb|Varotsos|Sarlis|Skordas|2001}}</ref> It differs from forecasting, which aims to estimate the probability of a future event,<ref name=":8">{{Harvnb|Rundle|Luginbuhl|Giguere|Turcotte|2018b}}</ref> but it is also considered a potential basis for forecasting.<ref name=":7"/><ref name=":9">{{Harvnb|Luginbuhl|Rundle|Turcotte|2019}}</ref> Nowcasting calculations produce the "earthquake potential score", an estimate of the current level of seismic progress.<ref>{{Harvnb|Pasari|2019}}</ref> Typical applications include great global earthquakes and tsunamis,<ref>{{Harvnb|Rundle|Luginbuhl|Khapikova|Turcotte|2020}}</ref> aftershocks and induced seismicity,<ref name=":9"/><ref>{{Harvnb|Luginbuhl|Rundle|Hawkins|Turcotte|2018}}</ref> induced seismicity at gas fields,<ref>{{Harvnb|Luginbuhl|Rundle|Turcotte|2018b}}</ref> seismic risk to global megacities,<ref name=":8"/> and the study of the clustering of large global earthquakes.<ref>{{Harvnb|Luginbuhl|Rundle|Turcotte|2018a}}</ref>
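The following is a minimal sketch of the nowcasting calculation, assuming the commonly used formulation in which natural time is simply the count of small earthquakes since the last large one, and the earthquake potential score is the fraction of past inter-event counts that are no larger than the current count; the catalogue numbers are illustrative placeholders, not real data.

<syntaxhighlight lang="python">
# Sketch of an earthquake potential score (EPS) under the assumption that
# "natural time" is the count of small shocks between successive large shocks.
# All numbers below are invented for illustration.

def earthquake_potential_score(past_counts, current_count):
    """Fraction of past small-quake counts (between large quakes) that are
    no larger than the current count: an empirical CDF value in [0, 1]."""
    if not past_counts:
        raise ValueError("need at least one completed large-earthquake cycle")
    return sum(1 for n in past_counts if n <= current_count) / len(past_counts)

# Hypothetical example: numbers of M>=3.5 shocks observed between past M>=6.5
# events in some region, and the count since the most recent M>=6.5 event.
past_cycle_counts = [120, 95, 140, 80, 160, 110]   # assumed values
count_since_last_big = 130

eps = earthquake_potential_score(past_cycle_counts, count_since_last_big)
print(f"Earthquake potential score: {eps:.2f}")    # 0.67, i.e. well along the cycle
</syntaxhighlight>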
==== Elastic rebound ====
Even the stiffest of rock is not perfectly rigid. Given a large force (such as between two immense tectonic plates moving past each other), the Earth's crust will bend or deform. According to the [[elastic rebound]] theory of {{Harvtxt|Reid|1910}}, eventually the deformation (strain) becomes great enough that something breaks, usually at an existing fault. Slippage along the break (an earthquake) allows the rock on each side to rebound to a less deformed state. In the process energy is released in various forms, including seismic waves.<ref>{{Harvnb|Reid|1910|p=22}}; {{Harvnb|ICEF|2011|p=329}}.</ref> The cycle of tectonic force being accumulated in elastic deformation and released in a sudden rebound is then repeated. As the displacement from a single earthquake ranges from less than a meter to around 10 meters (for an M 8 quake),<ref>{{Harvnb|Wells|Coppersmith|1994|loc=Fig. 11|p=993}}.</ref> the demonstrated existence of large [[strike-slip]] displacements of hundreds of miles shows the existence of a long-running earthquake cycle.<ref>{{Harvnb|Zoback|2006}} provides a clear explanation.</ref>{{efn|1={{Harvtxt|Evans|1997|loc=§2.2}} provides a description of the "self-organized criticality" (SOC) paradigm that is displacing the elastic rebound model.}}

==== Characteristic earthquakes ====
The most studied earthquake faults (such as the [[Nankai megathrust]], the [[Wasatch Fault]], and the [[San Andreas Fault]]) appear to have distinct segments. The ''characteristic earthquake'' model postulates that earthquakes are generally constrained within these segments.<ref>{{Harvnb|Castellaro|2003}}.</ref> As the lengths and other properties{{efn|1=These include the type of rock and fault geometry.}} of the segments are fixed, earthquakes that rupture the entire fault should have similar characteristics. These include the maximum magnitude (which is limited by the length of the rupture) and the amount of accumulated strain needed to rupture the fault segment. Since continuous plate motions cause the strain to accumulate steadily, seismic activity on a given segment should be dominated by earthquakes of similar characteristics that recur at somewhat regular intervals.<ref>{{Harvnb|Schwartz|Coppersmith|1984}}; {{Harvnb|Tiampo|Shcherbakov|2012|loc=§2.2|p=93}}.</ref> For a given fault segment, identifying these characteristic earthquakes and determining their recurrence rate (or its reciprocal, the [[return period]]) should therefore inform us about the next rupture; this is the approach generally used in forecasting seismic hazard. [[UCERF3]] is a notable example of such a forecast, prepared for the state of California.<ref>{{Harvnb|Field et al.|2008}}.</ref> Return periods are also used for forecasting other rare events, such as cyclones and floods, and assume that future frequency will be similar to observed frequency to date.
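As a simple illustration of how a return period translates into a forecast probability, the sketch below assumes a Poisson (memoryless) occurrence model; this is a common simplification rather than something implied by the sources above, and the numbers are purely illustrative.

<syntaxhighlight lang="python">
# Converting a return period into the probability of at least one event in a
# forecast window, under an assumed Poisson model (constant average rate).
import math

def poisson_exceedance(return_period_years, window_years):
    """P(at least one event in the window) for rate = 1/return_period per year."""
    return 1.0 - math.exp(-window_years / return_period_years)

# e.g. an event with a 150-year return period has roughly a 28% chance of
# occurring in any given 50-year window under this assumption.
print(round(poisson_exceedance(150, 50), 2))   # 0.28
</syntaxhighlight>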
The idea of characteristic earthquakes was the basis of the [[#Parkfield|Parkfield prediction]]: fairly similar earthquakes in 1857, 1881, 1901, 1922, 1934, and 1966 suggested a pattern of breaks every 21.9 years, with a standard deviation of ±3.1 years.<ref>{{Harvnb|Bakun|Lindh|1985|p=619}}.</ref>{{efn|1=Of course these were not the only earthquakes in this period. The attentive reader will recall that, in seismically active areas, earthquakes of some magnitude happen fairly constantly. The "Parkfield earthquakes" are either the ones noted in the historical record, or were selected from the instrumental record on the basis of location and magnitude. {{Harvtxt|Jackson|Kagan|2006|p=S399}} and {{Harvtxt|Kagan|1997|pp=211–212, 213}} argue that the selection parameters can bias the statistics, and that sequences of four or six quakes, with different recurrence intervals, are also plausible.}} Extrapolation from the 1966 event led to a prediction of an earthquake around 1988, or before 1993 at the latest (at the 95% confidence level).<ref>{{Harvnb|Bakun|Lindh|1985|p=621}}.</ref> The appeal of such a method is that the prediction is derived entirely from the ''trend'', which supposedly accounts for the unknown and possibly unknowable earthquake physics and fault parameters. However, in the Parkfield case the predicted earthquake did not occur until 2004, a decade late. This seriously undercuts the claim that earthquakes at Parkfield are quasi-periodic, and suggests the individual events differ sufficiently in other respects to question whether they have distinct characteristics in common.<ref>{{Harvnb|Jackson|Kagan|2006|p=S408}} say the claim of quasi-periodicity is "baseless".</ref>

The failure of the [[#Parkfield|Parkfield prediction]] has raised doubt as to the validity of the characteristic earthquake model itself.<ref name=":10">{{Harvnb|Jackson|Kagan|2006}}.</ref> Some studies have questioned its various assumptions, including the key one that earthquakes are constrained within segments, and suggested that the "characteristic earthquakes" may be an artifact of selection bias and the shortness of seismological records (relative to earthquake cycles).<ref>{{Harvnb|Kagan|Jackson|1991|pp=21, 420}}; {{Harvnb|Stein|Friedrich|Newman|2005}}; {{Harvnb|Jackson|Kagan|2006}}; {{Harvnb|Tiampo|Shcherbakov|2012|loc=§2.2}}, and references there; {{Harvnb|Kagan|Jackson|Geller|2012}}; {{Harvnb|Main|1999}}.</ref> Other studies have considered whether additional factors need to be taken into account, such as the age of the fault.{{efn|1=Young faults are expected to have complex, irregular surfaces, which impede slippage. In time these rough spots are ground off, changing the mechanical characteristics of the fault.<ref>{{Harvnb|Cowan|Nicol|Tonkin|1996}}; {{Harvnb|Stein|Newman|2004|p=185}}.</ref>}} Whether earthquake ruptures are more generally constrained within a segment (as is often seen), or break past segment boundaries (also seen), has a direct bearing on the degree of earthquake hazard: earthquakes are larger where multiple segments break, but in relieving more strain they will happen less often.<ref>{{Harvnb|Stein|Newman|2004}}.</ref>
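The arithmetic behind this kind of extrapolation can be reproduced in a few lines. The naive calculation below uses the listed Parkfield years directly, so it does not recover the published 21.9 ± 3.1-year figure, which comes from Bakun and Lindh's more detailed analysis; it only shows how a mean recurrence interval leads to a prediction of "around 1988".

<syntaxhighlight lang="python">
# Back-of-the-envelope recurrence-interval extrapolation for Parkfield.
# This naive treatment of the raw dates illustrates the method only; the cited
# 21.9 +/- 3.1 year figure comes from a more refined analysis.
import statistics

parkfield_years = [1857, 1881, 1901, 1922, 1934, 1966]
intervals = [b - a for a, b in zip(parkfield_years, parkfield_years[1:])]

mean_interval = statistics.mean(intervals)      # 21.8 years for these raw dates
stdev_interval = statistics.stdev(intervals)    # ~7.2 years, larger than 3.1

# Extrapolate from the last event; +/- 2 standard deviations as a rough 95% window.
expected = 1966 + mean_interval
print(f"intervals: {intervals}")
print(f"next event expected around {expected:.0f} "
      f"(roughly {expected - 2*stdev_interval:.0f} to {expected + 2*stdev_interval:.0f})")
</syntaxhighlight>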
==== Seismic gaps ====
At the contact where two tectonic plates slip past each other, every section must eventually slip, as (in the long term) none get left behind. But they do not all slip at the same time; different sections will be at different stages in the cycle of strain (deformation) accumulation and sudden rebound. In the seismic gap model the "next big quake" should be expected not in the segments where recent seismicity has relieved the strain, but in the intervening gaps where the unrelieved strain is the greatest.<ref>{{Harvnb|Scholz|2002|loc=§5.3.3|p=284}}; {{Harvnb|Kagan|Jackson|1991|pp=21, 419}}; {{Harvnb|Jackson|Kagan|2006|p=S404}}.</ref> This model has an intuitive appeal; it is used in long-term forecasting, and was the basis of a series of circum-Pacific ([[Pacific Rim]]) forecasts in 1979 and 1989–1991.<ref>{{Harvnb|Kagan|Jackson|1991|pp=21, 419}}; {{Harvnb|McCann|Nishenko|Sykes|Krause|1979}}; {{Harvnb|Rong|Jackson|Kagan|2003}}.</ref>

However, some underlying assumptions about seismic gaps are now known to be incorrect. A close examination suggests that "there may be no information in seismic gaps about the time of occurrence or the magnitude of the next large event in the region";<ref>{{Harvnb|Lomnitz|Nava|1983}}.</ref> statistical tests of the circum-Pacific forecasts show that the seismic gap model "did not forecast large earthquakes well".<ref>{{Harvnb|Rong|Jackson|Kagan|2003|p=23}}.</ref> Another study concluded that a long quiet period did not increase earthquake potential.<ref>{{Harvnb|Kagan|Jackson|1991|loc=Summary}}.</ref>

==== Seismicity patterns ====
{{anchor|M8}} {{anchor|PI}}
Various heuristically derived algorithms have been developed for predicting earthquakes. Probably the most widely known is the M8 family of algorithms (including the RTP method) developed under the leadership of [[Vladimir Keilis-Borok]]. M8 issues a "Time of Increased Probability" (TIP) alarm for a large earthquake of a specified magnitude upon observing certain patterns of smaller earthquakes. TIPs generally cover large areas (up to a thousand kilometers across) for up to five years.<ref>See details in {{Harvnb|Tiampo|Shcherbakov|2012|loc=§2.4}}.</ref> Such large parameters have made M8 controversial, as it is hard to determine whether any hits that happened were skillfully predicted, or only the result of chance. M8 gained considerable attention when the 2003 San Simeon and Hokkaido earthquakes occurred within a TIP.<ref name=":11">{{Harvnb|CEPEC|2004a}}.</ref> In 1999, Keilis-Borok's group published a claim to have achieved statistically significant intermediate-term results using their M8 and MSc models for large earthquakes worldwide.<ref>{{Harvnb|Kossobokov|Romashkova|Keilis-Borok|Healy|1999}}.</ref> However, Geller et al.<ref name=":12">{{Harvnb|Geller|Jackson|Kagan|Mulargia|1997}}.</ref> are skeptical of prediction claims over any period shorter than 30 years. A widely publicized TIP for an M 6.4 quake in Southern California in 2004 was not fulfilled, nor were two other lesser-known TIPs.<ref>{{Harvnb|Hough|2010b|pp=142–149}}.</ref> An in-depth study of the RTP method in 2008 found that out of some twenty alarms only two could be considered hits (and one of those had a 60% chance of happening anyway).<ref>{{Harvnb|Zechar|2008}}; {{Harvnb|Hough|2010b|p=145}}.</ref> It concluded that "RTP is not significantly different from a naïve method of guessing based on the historical rates [of] seismicity."<ref>{{Harvnb|Zechar|2008|p=7}}. See also p. 26.</ref>

{{anchor|AMR}}''Accelerating moment release'' (AMR; "moment" being a measure of seismic energy), also known as time-to-failure analysis or accelerating seismic moment release (ASMR), is based on observations that foreshock activity prior to a major earthquake not only increased, but increased at an exponential rate.<ref>{{Harvnb|Tiampo|Shcherbakov|2012|loc=§2.1}}. {{Harvnb|Hough|2010b|loc=chapter 12}}, provides a good description.</ref> In other words, a plot of the cumulative number of foreshocks gets steeper just before the main shock. Following formulation by {{Harvtxt|Bowman|Quillon|Sammis|Sornette|1998}} into a testable hypothesis,<ref>{{Harvnb|Hardebeck|Felzer|Michael|2008|loc=par. 6}}.</ref> and a number of positive reports, AMR seemed promising<ref>{{Harvnb|Hough|2010b|pp=154–155}}.</ref> despite several problems. Known issues included not being detected for all locations and events, and the difficulty of projecting an accurate occurrence time when the tail end of the curve gets steep.<ref>{{Harvnb|Tiampo|Shcherbakov|2012|loc=§2.1|p=93}}.</ref> But rigorous testing has shown that apparent AMR trends likely result from how data fitting is done,<ref>{{Harvnb|Hardebeck|Felzer|Michael|2008|loc=§4}} show how suitable selection of parameters shows "DMR": ''Decelerating'' Moment Release.</ref> and from failing to account for spatiotemporal clustering of earthquakes.<ref>{{Harvnb|Hardebeck|Felzer|Michael|2008|loc=par. 1, 73}}.</ref> The AMR trends are therefore statistically insignificant. Interest in AMR (as judged by the number of peer-reviewed papers) has fallen off since 2004.<ref>{{Harvnb|Mignan|2011|loc=Abstract}}.</ref>
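To make the time-to-failure idea concrete, the sketch below fits a commonly used power-law form, <math>C(t) = A + B\,(t_f - t)^m</math>, to a synthetic cumulative Benioff-strain curve. The data and starting parameters are invented, and, as the studies above note, the results of such fits are highly sensitive to how the data are selected.

<syntaxhighlight lang="python">
# Time-to-failure fit of the kind used in AMR analyses, on synthetic data only.
# C(t) = A + B*(tf - t)**m with B < 0 and 0 < m < 1 accelerates toward tf.
import numpy as np
from scipy.optimize import curve_fit

def amr_curve(t, A, B, tf, m):
    # clip keeps the base positive if the optimizer tries tf smaller than max(t)
    return A + B * np.clip(tf - t, 1e-6, None) ** m

rng = np.random.default_rng(0)

# Synthetic "observed" cumulative Benioff strain accelerating toward tf = 10 years.
t_obs = np.linspace(0.0, 9.5, 40)
c_obs = amr_curve(t_obs, 5.0, -2.0, 10.0, 0.3) + rng.normal(0, 0.05, t_obs.size)

popt, _ = curve_fit(amr_curve, t_obs, c_obs,
                    p0=[5.0, -1.0, 11.0, 0.5], maxfev=20000)
print(f"estimated failure time tf = {popt[2]:.2f} years")
</syntaxhighlight>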
==== Machine learning ====
Rouet-Leduc et al. (2019) reported having successfully trained a regression [[random forest]] on acoustic time series data capable of identifying a signal emitted from fault zones that forecasts fault failure. They suggested that the identified signal, previously assumed to be statistical noise, reflects the increasing emission of energy before its sudden release during a slip event, and they further postulated that their approach could bound fault failure times and lead to the identification of other unknown signals.<ref>{{Harvnb|Rouet-Leduc|Hulbert|Lubbers|Barros|2017}}.</ref> Due to the rarity of the most catastrophic earthquakes, acquiring representative data remains problematic. In response, Rouet-Leduc et al. have conjectured that their model would not need to train on data from catastrophic earthquakes, since further research has shown the seismic patterns of interest to be similar in smaller earthquakes.<ref>{{cite web|url=https://www.quantamagazine.org/artificial-intelligence-takes-on-earthquake-prediction-20190919/|title=Artificial Intelligence Takes on Earthquake Prediction|last1=Smart|first1=Ashley|website=Quanta Magazine|date=19 September 2019|access-date=2020-03-28}}</ref>

Deep learning has also been applied to earthquake prediction. Although [[Bath's Law|Bath's law]] and [[Omori's law]] describe the magnitude of earthquake aftershocks and their time-varying properties, the prediction of the "spatial distribution of aftershocks" remains an open research problem. Using the [[Theano (software)|Theano]] and [[TensorFlow]] software libraries, DeVries et al. (2018) trained a [[Artificial neural network|neural network]] that achieved higher accuracy in the prediction of spatial distributions of earthquake aftershocks than the previously established methodology of Coulomb failure stress change. Notably, DeVries et al. reported that their model made no "assumptions about receiver plane orientation or geometry" and heavily weighted the change in [[shear stress]], the "sum of the absolute values of the independent components of the stress-change tensor," and the von Mises yield criterion. DeVries et al. postulated that the reliance of their model on these physical quantities indicated that they might "control earthquake triggering during the most active part of the seismic cycle." For validation testing, DeVries et al. reserved 10% of positive training earthquake data samples and an equal quantity of randomly chosen negative samples.<ref>{{Harvnb|DeVries|Viégas|Wattenberg|Meade|2018}}.</ref>

Arnaud Mignan and Marco Broccardo have similarly analyzed the application of artificial neural networks to earthquake prediction. They found in a review of the literature that earthquake prediction research utilizing artificial neural networks has gravitated towards more sophisticated models amidst increased interest in the area. They also found that neural networks utilized in earthquake prediction with notable success rates were matched in performance by simpler models. They further addressed the issues of acquiring appropriate data for training neural networks to predict earthquakes, writing that the "structured, tabulated nature of earthquake catalogues" makes transparent machine learning models more desirable than artificial neural networks.<ref>{{Harvnb|Mignan|Broccardo|2019}}.</ref>
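As an illustration only, the sketch below sets up a regression random forest of the general kind described above, trained on statistical features of acoustic-signal windows to predict the time remaining before a laboratory slip event. The synthetic data, feature choices, and hyperparameters are assumptions and do not represent the actual data or pipeline of Rouet-Leduc et al.

<syntaxhighlight lang="python">
# Schematic regression random forest: statistical features of acoustic windows
# predicting time-to-failure. Everything here (signal, features, hyperparameters)
# is an invented stand-in for the real laboratory data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic stand-in: acoustic variance and kurtosis grow as failure approaches.
n_windows = 2000
time_to_failure = rng.uniform(0.0, 10.0, n_windows)            # seconds until slip
signal_variance = 1.0 / (time_to_failure + 0.5) + rng.normal(0, 0.05, n_windows)
signal_kurtosis = 3.0 + 2.0 / (time_to_failure + 0.5) + rng.normal(0, 0.2, n_windows)
X = np.column_stack([signal_variance, signal_kurtosis])
y = time_to_failure

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("R^2 on held-out windows:", round(model.score(X_test, y_test), 3))
</syntaxhighlight>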
==== EMP induced seismicity ====
High-energy [[electromagnetic pulse]]s can [[Induced seismicity#Electromagnetic pulses|induce earthquakes]] within 2–6 days after emission by EMP generators.<ref>{{Harvnb|Tarasov|Tarasova|2009}}</ref> It has been proposed that strong electromagnetic impacts could control seismicity, as the seismicity dynamics that follow them appear to be considerably more regular than usual.<ref>{{Harvnb|Novikov|Okunev|Klyuchkin|Liu|2017}}</ref><ref>{{Harvnb|Zeigarnik|Novikov|Avagimov|Tarasov|2007}}</ref>