International Conference on Advanced Cloud and Big Data

Allocation of Resources after Disaster Based on Big Data from SNS and Spatial Scan



Abstract

After a disaster such as an earthquake, debris flow, forest fire, or landslide, many people must leave their homes and gather in shelters. These refugees then suffer from shortages of essential resources because life infrastructure, such as roads and communication networks, is impaired. How much the damage can be reduced depends on the amount of food, water, daily necessities, and communication resources available to each shelter, so allocating resources effectively and efficiently, based on an accurate grasp of the disaster situation, is an important problem. We estimate the extent of the disaster by collecting and analyzing big data from social networking services (SNS), and we build a platform for allocating communication resources efficiently and effectively. To achieve this goal, we address the following issues.

A) Understanding situations (user requirements) after a disaster occurs. SNS stream large-scale semantic information about real-time conditions in society, especially during and after a disaster. Extracting the exact situational content, with reduced semantic uncertainty, from such a heterogeneous large data set is both a domain-specific and a computational challenge. Machine learning (ML) and natural language processing (NLP) toolkits are useful for semantic analysis, but they still require domain-specific implementation and computational improvement for situation understanding from SNS big data.

B) Understanding the distribution patterns of situations and users' requirements. Disaster-related situations are spatiotemporally correlated and vary dynamically in space and time. Estimating the spatiotemporal distribution patterns of disaster effects from SNS spatial big data is likewise a domain-specific and computational challenge. Scan statistics such as the spatial scan provide well-tested mathematical tools and software for spatial data mining.
However, new methodologies are needed, because the underlying statistical assumptions must change when these methods meet spatial big data from SNS, and the computational complexity of spatial big data is also a bottleneck for real-time processing.

C) Resolving the uncertainty of big crowd data. A major feature of big crowd data, such as SNS data, is the uncertainty behind the data. In a disaster scenario in particular, the collection period cannot be long enough to smooth the data automatically. Efficiently resolving this uncertainty in big crowd data during a disaster is therefore a new and major challenge for disaster management.
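To make issue A) concrete, the situational-extraction step can be sketched with simple keyword matching over SNS posts. This is only a minimal sketch: the lexicon, category names, and keyword-matching approach below are illustrative assumptions, and a real pipeline would use trained ML/NLP models as the abstract notes.

```python
import re
from collections import Counter

# Hypothetical lexicon mapping resource categories to trigger keywords.
# A production system would replace this with trained ML/NLP classifiers.
LEXICON = {
    "water":   {"water", "thirsty", "drink"},
    "food":    {"food", "hungry", "meal"},
    "medical": {"medicine", "injured", "doctor"},
}

def extract_needs(posts):
    """Aggregate per-category need counts from raw SNS post texts.

    Each post contributes at most one count per category, regardless of
    how many trigger keywords it contains.
    """
    needs = Counter()
    for post in posts:
        tokens = set(re.findall(r"[a-z]+", post.lower()))
        for category, keywords in LEXICON.items():
            if tokens & keywords:
                needs[category] += 1
    return needs
```

Aggregating such counts per shelter gives the raw demand signal that the later allocation and spatial-analysis steps consume.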
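For issue B), the spatial scan mentioned above can be sketched with Kulldorff's Poisson likelihood-ratio statistic evaluated over circular windows. The grid-cell representation, baseline field, and radius list below are illustrative assumptions, not the paper's setup; well-tested software such as SaTScan implements the full method, including significance testing by Monte Carlo simulation, which this sketch omits.

```python
import math

def poisson_llr(c, e, C):
    """Kulldorff's Poisson log-likelihood ratio for one candidate window.

    c: observed reports inside the window, e: expected count under the
    null (uniform risk), C: total reports. Returns 0 unless the window
    over-reports relative to its expectation.
    """
    if c <= e or e <= 0.0 or c >= C:
        return 0.0
    return c * math.log(c / e) + (C - c) * math.log((C - c) / (C - e))

def spatial_scan(cells, radii):
    """Scan circular windows centered on every cell at every radius.

    cells: list of (x, y, baseline, count) tuples, where baseline is a
    population-at-risk proxy. Returns (best_llr, center, radius) for the
    most anomalous window found.
    """
    C = sum(c for (_, _, _, c) in cells)
    B = sum(b for (_, _, b, _) in cells)
    best = (0.0, None, None)
    for (cx, cy, _, _) in cells:
        for r in radii:
            inside = [(b, c) for (x, y, b, c) in cells
                      if (x - cx) ** 2 + (y - cy) ** 2 <= r * r]
            b_in = sum(b for b, _ in inside)
            c_in = sum(c for _, c in inside)
            e_in = C * b_in / B  # expected reports under uniform risk
            llr = poisson_llr(c_in, e_in, C)
            if llr > best[0]:
                best = (llr, (cx, cy), r)
    return best
```

The exhaustive loop over centers and radii is exactly the computational bottleneck the abstract points to: the candidate-window count grows with the number of cells times the number of radii, which motivates new methodologies for real-time processing of SNS-scale data.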
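For issue C), one common way to cope with the short collection windows the abstract describes is to shrink each shelter's noisy observed report rate toward the global rate. This empirical-Bayes-style sketch is an assumed illustration, not the paper's method; the pseudo-count `k` is a hypothetical tuning parameter.

```python
def shrink_rates(counts, exposures, k=10.0):
    """Shrink per-shelter report rates toward the global rate.

    counts[i]: reports observed at shelter i during a short window;
    exposures[i]: shelter population (or observation effort);
    k: pseudo-count controlling shrinkage strength (assumed parameter).
    Shelters with little data are pulled strongly toward the global
    rate, while well-observed shelters keep roughly their raw rate.
    """
    global_rate = sum(counts) / sum(exposures)
    return [(c + k * global_rate) / (n + k)
            for c, n in zip(counts, exposures)]
```

The effect is that a shelter with 3 reports from 2 people is treated far more cautiously than one with 100 reports from 200 people, which is one way to keep short, uncertain crowd signals from driving the resource allocation directly.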
