Evaluating information assurance performance and the impact of data characteristics

2010 IEEE International Conference on Technologies for Homeland Security

Abstract

Research and development of new information assurance (IA) techniques and technologies is ongoing and varied. Each new proposal and technique arrives with great promise and anticipated success as research teams strive to develop new and innovative responses to emerging threats. Unfortunately, these techniques frequently fall short of expectations when deployed, owing to high false-alarm rates, trouble operating in a non-idealized or new domain, or flexibility-limiting assumptions that are valid only for specific input sets. We believe these failures are due to fundamental problems with the experimental method used to evaluate the effectiveness of new ideas and techniques. This work explores the effect of a poorly understood data synthesis process on the evaluation of IA devices. The point of an evaluation is to independently determine what a detector can and cannot detect, i.e., the metric of detection. This can be done only when the data contains carefully controlled ground truth. We broadly define the term “similarity class” to facilitate discussion of the different ways data (and, more specifically, test data) can be similar, and use these ideas to illustrate the prerequisites for correct evaluation of anomaly detectors. We focus on how anomaly detectors function, and how they should be evaluated, in two specific domains with disparate system architectures and data: a sensor and data transport network for airframe tracking and display, and a deep-space mission spacecraft command link. Finally, we present empirical evidence illustrating the effectiveness of our approach in these domains, and introduce the entropy of a time-series sensor as a critical measure of data similarity for test data in these domains.
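The abstract describes the entropy measure only at a high level. As a rough, hypothetical illustration of the idea (the histogram estimator, bin count, and example signals below are assumptions, not details taken from the paper), a Shannon-entropy score for a sensor time series might be computed along these lines:

```python
import numpy as np

def shannon_entropy(series, bins=32):
    """Histogram-based Shannon entropy (bits) of a 1-D time series."""
    counts, _ = np.histogram(series, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                       # ignore empty bins
    return float(-np.sum(p * np.log2(p)))

# Hypothetical comparison: a near-periodic sensor trace vs. white noise.
t = np.linspace(0.0, 10.0, 1000)
periodic = np.sin(2.0 * np.pi * t)
noisy = np.random.default_rng(0).normal(size=1000)
print(f"periodic: {shannon_entropy(periodic):.2f} bits")
print(f"noisy:    {shannon_entropy(noisy):.2f} bits")
```

Under this sketch, a nearly periodic tracking signal scores far lower than an unstructured one, giving a single scalar with which synthesized test data could be judged “similar enough” to operational data.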
