
Endoscopic-CT: Learning-Based Photometric Reconstruction for Endoscopic Sinus Surgery



Abstract

In this work we present a method for dense reconstruction of anatomical structures from white light endoscopic imagery, based on a learning process that estimates a mapping between light reflectance and surface geometry. Our method is unique in that it makes few unrealistic assumptions (i.e., we assume neither a Lambertian reflectance model nor a point light source), and we learn a model on a per-patient basis, increasing accuracy and extensibility to different endoscopic sequences. The proposed method assumes accurate video-CT registration through a combination of Structure-from-Motion (SfM) and Trimmed-ICP, and then uses the registered 3D structure and motion to generate training data with which to learn a multivariate regression from observed pixel values to known 3D surface geometry. We demonstrate a non-linear regression technique using a neural network to estimate depth images and surface normal maps, resulting in high-resolution spatial 3D reconstructions with an average error of 0.53 mm (on the low side, when the anatomy matches the CT precisely) to 1.12 mm (on the high side, when the presence of liquids creates scene geometry that is not present in the CT used for evaluation). Our results are exhibited on patient data and validated against the associated CT scans. In total, we processed 206 endoscopic images from patient data, each yielding approximately 1 million reconstructed 3D points.
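The registration step described above combines SfM with Trimmed-ICP, which robustifies classical ICP by discarding the worst-matching fraction of correspondences at each iteration. Below is a minimal numpy sketch of Trimmed-ICP under stated assumptions: the toy point sets, brute-force nearest-neighbour search, and fixed trim fraction are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def kabsch(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

def trimmed_icp(src, dst, trim=0.8, iters=30):
    """Align src to dst, keeping only the best `trim` fraction of matches."""
    R_total, t_total = np.eye(3), np.zeros(3)
    cur = src.copy()
    for _ in range(iters):
        # Nearest-neighbour correspondences (brute force, for clarity).
        d2 = ((cur[:, None, :] - dst[None, :, :]) ** 2).sum(-1)
        nn = d2.argmin(axis=1)
        dists = d2[np.arange(len(cur)), nn]
        # Trim: keep the closest fraction, discarding likely outliers.
        keep = np.argsort(dists)[: int(trim * len(cur))]
        R, t = kabsch(cur[keep], dst[nn[keep]])
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, cur

# Toy check: recover a known small rotation/translation between a random
# "CT surface" point cloud and its transformed "SfM" counterpart.
ct = rng.uniform(-1.0, 1.0, size=(200, 3))
a = 0.15
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a),  np.cos(a), 0.0],
               [0.0,        0.0,       1.0]])
sfm = ct @ Rz.T + np.array([0.1, -0.05, 0.2])
_, _, aligned = trimmed_icp(sfm, ct)
rmse = np.sqrt(((aligned - ct) ** 2).sum(-1).mean())
print(f"post-ICP RMSE: {rmse:.4f}")
```

In the actual pipeline the source cloud would be the SfM reconstruction and the target the CT surface mesh; the trim fraction guards against SfM outliers and anatomy absent from the CT.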
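The learned photometric model is a multivariate regression from observed pixel values to surface geometry. The sketch below illustrates the idea with a one-hidden-layer network trained by gradient descent in numpy; the synthetic brightness-to-depth data, network size, and learning rate are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for registered training data: each sample pairs an
# observed RGB pixel value (features) with a known depth from the
# CT-registered surface (target). Real data would come from the
# SfM + Trimmed-ICP registration described in the abstract.
X = rng.uniform(0.0, 1.0, size=(2000, 3))        # RGB in [0, 1]
depth = 2.0 - 1.5 * X.mean(axis=1)               # toy model: brighter -> closer
y = depth + rng.normal(0.0, 0.01, size=2000)

# One-hidden-layer MLP trained with full-batch gradient descent.
n_hidden = 16
W1 = rng.normal(0.0, 0.5, size=(3, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.5, size=(n_hidden, 1))
b2 = np.zeros(1)
lr = 0.05

losses = []
for step in range(500):
    h = np.tanh(X @ W1 + b1)                     # hidden activations
    pred = (h @ W2 + b2).ravel()                 # predicted depth per pixel
    err = pred - y
    losses.append(float(np.mean(err ** 2)))
    # Backpropagate the mean-squared-error loss.
    g_out = (2.0 / len(y)) * err[:, None]        # (N, 1)
    gW2, gb2 = h.T @ g_out, g_out.sum(axis=0)
    g_h = g_out @ W2.T * (1.0 - h ** 2)          # tanh derivative
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print(f"MSE: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Run per patient on roughly a million registered pixels per image, such a regressor can then predict a dense depth map (and, with a second output head, surface normals) for new frames of the same patient.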
