Journal of Geophysical Research: Solid Earth (JGR)

Statistical tests on clustered global earthquake synthetic data sets



Abstract

We study the ability of statistical tests to identify nonrandom features of earthquake catalogs, with a focus on the global earthquake record since 1900. We construct four types of synthetic data sets containing varying strengths of clustering, with each data set containing on average 10,000 events over 100 years with magnitudes above M = 6. We apply a suite of statistical tests to each synthetic realization in order to evaluate the ability of each test to identify the sequences of events as nonrandom. Our results show that detection ability depends on the quantity of data, the type of clustering, and the specific signal used in the statistical test. For a given background rate, data sets that exhibit stronger variation in the seismicity rate are generally easier to identify as nonrandom. We also show that this problem can be addressed in a Bayesian framework, with the clustered data sets serving as prior distributions. Using this Bayesian approach, we can place quantitative bounds on the range of clustering strengths that are consistent with the global earthquake data. At M = 7, we estimate 99th percentile confidence bounds on the number of triggered events: an upper bound of 20% of the catalog for global aftershock sequences, and a stricter upper bound of 10% on the fraction of triggered events for long-term event clusters. At M = 8, the bounds are less strict because of the reduced number of events. However, our analysis shows that other types of clustering could be present in the data that we are unable to detect. Our results aid in the interpretation of statistical tests on earthquake catalogs, both worldwide and regionally.
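The abstract does not detail the paper's specific test battery or catalog construction. As a minimal sketch of the general approach, assuming my own rates, cluster parameters, and function names throughout, the code below generates both a purely random (Poisson) catalog and a clustered catalog of roughly 10,000 events over 100 years, then applies one classic nonrandomness test: the index-of-dispersion (variance-to-mean) test on yearly event counts.

```python
import math
import random

def poisson_sample(rng, lam):
    """Draw a Poisson-distributed integer (Knuth's multiplication method)."""
    threshold = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def synthetic_catalog(bg_rate, years=100.0, mean_aftershocks=0.0, seed=1):
    """Background events form a homogeneous Poisson process in time; each one
    optionally triggers a Poisson number of aftershocks a short time later."""
    rng = random.Random(seed)
    times, t = [], 0.0
    while True:
        t += rng.expovariate(bg_rate)          # exponential inter-event times
        if t > years:
            break
        times.append(t)
        for _ in range(poisson_sample(rng, mean_aftershocks)):
            ta = t + rng.expovariate(50.0)     # ~1-week mean aftershock delay
            if ta < years:
                times.append(ta)
    return sorted(times)

def dispersion_z(times, years=100.0, n_bins=100):
    """Index-of-dispersion test on binned counts: under a Poisson process,
    (n-1)*var/mean ~ chi-square(n-1); return its normal-approximation z-score."""
    counts = [0] * n_bins
    for t in times:
        counts[min(int(t / years * n_bins), n_bins - 1)] += 1
    mean = sum(counts) / n_bins
    var = sum((c - mean) ** 2 for c in counts) / (n_bins - 1)
    d = (n_bins - 1) * var / mean
    return (d - (n_bins - 1)) / math.sqrt(2 * (n_bins - 1))

# Two catalogs with the same expected total rate of ~100 events/year:
z_random = dispersion_z(synthetic_catalog(bg_rate=100.0, seed=7))
z_clustered = dispersion_z(synthetic_catalog(bg_rate=67.0,
                                             mean_aftershocks=0.5, seed=7))
print(f"random: z = {z_random:.2f}, clustered: z = {z_clustered:.2f}")
```

With this clustering strength the clustered catalog is strongly overdispersed, so its z-score should greatly exceed the random catalog's; shrinking `mean_aftershocks` or the catalog size narrows that gap, which is the detection-ability tradeoff the abstract describes.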
