IEEE Applied Imagery Pattern Recognition Workshop

Deepfakes for Histopathology Images: Myth or Reality?



Abstract

Deepfakes have become a major public concern on the Internet, as fake images and videos can be used to spread misleading information about a person or an organization. In this paper, we explore whether deepfakes can be generated for histopathology images using advances in deep learning. This question matters because the field of digital pathology has gained considerable momentum since the Food and Drug Administration (FDA) approved several digital pathology systems for primary diagnosis and consultation in the United States. Specifically, we investigate whether state-of-the-art generative adversarial networks (GANs) can produce fake histopathology images that can trick an expert pathologist. For our investigation, we used whole slide images (WSIs) hosted by The Cancer Genome Atlas (TCGA). We selected 3 WSIs of colon cancer patients and produced 100,000 patches of 256×256 pixels. We trained three popular GANs to generate fake patches of the same size. We then constructed a set of images containing 30 real and 30 fake patches. An expert pathologist reviewed these images and marked each as either real or fake. We observed that the pathologist marked 10 fake patches as real and correctly identified 34 patches (as fake or real). Thirteen real patches were incorrectly identified as fake, and the pathologist was unsure about 3 fake patches. Interestingly, the fake patches that the pathologist correctly identified exhibited missing morphological features, abrupt background changes, pleomorphism, and other telltale artifacts. Our investigation shows that while certain parts of a histopathology image can be mimicked by existing GANs, the intricacies of the stained tissue and cells cannot be fully captured by them. Unlike radiology, where it is relatively easy to manipulate an image using a GAN, we argue that generating an entirely fake WSI is a much harder challenge in digital pathology.
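The first step described above, tiling a WSI into fixed-size 256×256 patches, can be sketched as follows. This is a minimal illustration assuming simple non-overlapping tiling on an image array already loaded into memory; the paper does not publish its extraction code, so the tiling strategy and the `extract_patches` helper are assumptions for illustration only.

```python
import numpy as np

def extract_patches(wsi: np.ndarray, patch_size: int = 256) -> list:
    """Tile a whole-slide image array into non-overlapping square patches.

    Patches that would extend past the image border are discarded, matching
    the fixed 256x256 patch size used in the study. (Non-overlapping tiling
    is an assumption; the paper does not specify its sampling scheme.)
    """
    h, w = wsi.shape[:2]
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(wsi[y:y + patch_size, x:x + patch_size])
    return patches

# Example: a synthetic 1024x1024 RGB "slide" yields 16 patches of 256x256.
slide = np.zeros((1024, 1024, 3), dtype=np.uint8)
patches = extract_patches(slide)
print(len(patches), patches[0].shape)  # 16 (256, 256, 3)
```

In practice, gigapixel WSIs are read region-by-region with a library such as OpenSlide rather than loaded whole into memory, but the tiling arithmetic is the same.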
