IEEE/ACM International Conference on Mining Software Repositories

On Improving Deep Learning Trace Analysis with System Call Arguments



Abstract

Kernel traces are sequences of low-level events, each comprising a name and multiple arguments, such as a timestamp, a process id, and a return value, depending on the event. Analysing them helps uncover intrusions, identify bugs, and find causes of latency. However, the effectiveness of such analyses is limited when the event arguments are omitted. To remedy this limitation, we introduce a general approach that learns a representation of the event names together with their arguments, using both embedding and encoding. The proposed method is readily applicable to most neural networks and is task-agnostic. Its benefit is quantified through an ablation study on three groups of arguments: call-related, process-related, and time-related. Experiments were conducted on a novel web request dataset and validated on a second dataset collected on pre-production servers by Ciena, our partnering company. By leveraging the additional information, we increased the performance of two widely used neural networks, an LSTM and a Transformer, by up to 11.3% on two unsupervised language modelling tasks. Such tasks may be used to detect anomalies, pre-train neural networks to improve their performance, and extract a contextual representation of the events.
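The core idea in the abstract is to represent each trace event by combining a learned embedding of its name with an encoding of its numeric arguments. The sketch below illustrates that combination in plain Python; the event vocabulary, the normalisation constants, and the specific argument encodings are illustrative assumptions, not the paper's actual design.

```python
import math
import random

random.seed(0)
EMBED_DIM = 8

# Hypothetical vocabulary of system call event names.
vocab = {"open": 0, "read": 1, "write": 2, "close": 3}

# Name embedding: a learnable lookup table, randomly initialised here.
name_embedding = [[random.gauss(0.0, 1.0) for _ in range(EMBED_DIM)]
                  for _ in vocab]

def encode_args(timestamp_ns, pid, ret):
    """Encode the numeric arguments of one event as a small vector.

    Log-scaling keeps values of very different magnitudes comparable.
    """
    return [
        math.log1p(timestamp_ns % 1_000_000_000),     # intra-second offset
        pid / 65536.0,                                # crudely normalised pid
        math.copysign(math.log1p(abs(ret)), ret),     # signed log return value
    ]

def event_vector(name, timestamp_ns, pid, ret):
    # Concatenate the learned name embedding with the argument encoding,
    # yielding one fixed-size vector per event for a downstream model.
    return name_embedding[vocab[name]] + encode_args(timestamp_ns, pid, ret)

v = event_vector("read", 1_700_000_123_456_789, 4242, 512)
print(len(v))  # EMBED_DIM + 3 = 11
```

A sequence of such vectors, one per event, is what a sequence model such as an LSTM or Transformer would consume for the unsupervised language modelling tasks the abstract describes.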
