
Using the Google N-Gram corpus to measure cultural complexity


Abstract

Empirical studies of broad-ranging aspects of culture, such as 'cultural complexity', are often extremely difficult. Following the model of Michel et al. (Michel, J.-B., Shen, Y. K., Aiden, A. P. et al. (2011). Quantitative analysis of culture using millions of digitized books. Science, 331(6014): 176-82), and using a set of techniques originally developed to measure the complexity of language, we propose a text-based analysis of a large corpus of topic-uncontrolled text to determine how cultural complexity varies over time within a single culture. Using the Google Books American 2Gram corpus, we are able to show that (as predicted from the cumulative nature of culture), US culture has been steadily increasing in complexity, even when (for economic reasons) the amount of actual discourse as measured by publication volume decreases. We discuss several implications of this novel analysis technique as well as its implications for discussions of the meaning of 'culture.'
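The record does not spell out the specific complexity measure the paper uses, so the sketch below is only one plausible way to derive a per-year signal from the Google Books 2-gram count files: it aggregates match counts by year and reports the Shannon entropy of each year's bigram distribution. The assumed file layout (tab-separated ngram, year, match_count, volume_count) matches the 2012 corpus release; the filename in the usage example and the entropy-based proxy are illustrative assumptions, not the authors' actual method.

```python
import gzip
import math
from collections import defaultdict


def yearly_bigram_entropy(path):
    """Return {year: Shannon entropy in bits per bigram token} for one
    gzipped Google Books 2-gram count file.

    Assumed line format (2012 release):
        ngram<TAB>year<TAB>match_count<TAB>volume_count
    """
    counts = defaultdict(lambda: defaultdict(int))  # year -> bigram -> total count
    with gzip.open(path, mode="rt", encoding="utf-8") as fh:
        for line in fh:
            parts = line.rstrip("\n").split("\t")
            if len(parts) != 4:
                continue  # skip malformed lines
            ngram, year, match_count, _volume_count = parts
            counts[int(year)][ngram] += int(match_count)

    entropy_by_year = {}
    for year, dist in counts.items():
        total = sum(dist.values())
        entropy_by_year[year] = -sum(
            (c / total) * math.log2(c / total) for c in dist.values()
        )
    return entropy_by_year


if __name__ == "__main__":
    # Hypothetical shard of the American English 2-gram corpus; a real analysis
    # would merge all shards before computing per-year statistics.
    for year, h in sorted(yearly_bigram_entropy("eng-us-2gram-sample.gz").items()):
        print(year, round(h, 3))
```

A rising entropy curve over years would be consistent with the abstract's claim of increasing complexity, though a serious replication would also have to separate genuine complexity growth from changes in corpus composition and publication volume.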

Bibliographic record

  • Source: Literary & Linguistic Computing, 2013, No. 4, pp. 668-675 (8 pages)
  • Author: Patrick Juola
  • Affiliation: Evaluating Variations in Language Laboratory, Duquesne University, 600 Forbes Avenue, Pittsburgh, PA 15282, USA
  • Indexing information
  • Original format: PDF
  • Language of text: English (eng)
  • CLC classification
  • Keywords

