RDF Graph Anonymization Robust to Data Linkage

Abstract

Privacy is a major concern when publishing new datasets in the context of Linked Open Data (LOD). A new dataset published in the LOD is exposed to privacy breaches through linkage with objects already present in other datasets of the LOD. In this paper, we focus on the problem of building safe anonymizations of an RDF graph, guaranteeing that linking the anonymized graph with any external RDF graph will not cause privacy breaches. Given a set of privacy queries as input, we study the data-independent safety problem and the sequence of anonymization operations necessary to enforce it. We provide sufficient conditions under which an anonymization instance is safe with respect to a given set of privacy queries. Additionally, we show that our algorithms for RDF data anonymization are robust in the presence of sameAs links, whether explicit or inferred from additional knowledge.
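To make the setting concrete, below is a minimal sketch in Python with rdflib. It illustrates the general idea only, not the algorithm from the paper: the example graph, the privacy query, and the anonymize helper are all hypothetical. A privacy query describes answers that must not be derivable from the published graph; the sketched operation replaces each occurrence of an exposed IRI with a fresh blank node so that the query returns no answers over the anonymized graph.

```python
# Illustrative sketch only: toy data, toy privacy query, and a naive
# anonymization operation; not the safety conditions or algorithms of the paper.
from rdflib import Graph, Namespace, BNode

EX = Namespace("http://example.org/")

g = Graph()
g.add((EX.alice, EX.hasDisease, EX.flu))   # sensitive triple
g.add((EX.alice, EX.worksAt, EX.acme))     # quasi-identifying triple

# Privacy query: bindings that must not be derivable from the published graph.
privacy_query = """
PREFIX ex: <http://example.org/>
SELECT ?p ?d WHERE { ?p ex:hasDisease ?d . ?p ex:worksAt ?o . }
"""

def anonymize(graph: Graph, query: str) -> Graph:
    # Collect the entities exposed by the privacy query.
    exposed = {row.p for row in graph.query(query)}
    # Replace every occurrence of an exposed IRI with a *fresh* blank node,
    # so the join in the privacy query can no longer link its triples.
    anon = Graph()
    for s, p, o in graph:
        anon.add((BNode() if s in exposed else s,
                  p,
                  BNode() if o in exposed else o))
    return anon

safe = anonymize(g, privacy_query)
assert len(list(safe.query(privacy_query))) == 0  # no answers on the anonymized graph
```

A real safety guarantee in the sense studied in the paper must also hold after linking the anonymized graph with arbitrary external RDF graphs, including under explicit or inferred sameAs links; this toy operation does not address that stronger requirement.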
