Journal of Visual Communication & Image Representation

VI-NET: A hybrid deep convolutional neural network using VGG and inception V3 model for copy-move forgery classification



Abstract

Nowadays, various image editing tools are available for manipulating original images; among these manipulations, copy-move forgery is the most common. In a copy-move forgery, some part of the original image is copied and pasted into the same image at another location. Artificial Intelligence (AI) based approaches, however, can extract the manipulated features easily. In this study, a deep learning-based method is proposed to classify copy-move forged images. For classifying the forged images, a deep learning (DL) based hybrid model named VI-NET is presented, built by fusing two DL architectures, VGG16 and Inception V3. The outputs of the two models are concatenated and connected to two additional convolutional layers. Cross-validation protocols K10 (90% training, 10% testing), K5 (80% training, 20% testing), and K2 (50% training, 50% testing) are applied on the COMOFOD dataset. Moreover, the performance of VI-NET is compared with transfer learning and machine learning models using evaluation metrics such as accuracy, precision, recall, and F1 score. The proposed hybrid model performed better than the other approaches, with a classification accuracy of 99 ± 0.2% under the K10 protocol, compared with 95 ± 4% (Inception V3), 93 ± 5% (MobileNet), 59 ± 8% (VGG16), 60 ± 1% (decision tree), 87 ± 1% (KNN), 54 ± 1% (naive Bayes), and 65 ± 1% (random forest). Results are evaluated similarly under the K2 and K5 validation protocols. It is experimentally observed that the proposed model outperforms existing standard and customized deep learning architectures.
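The fusion step described in the abstract, concatenating the outputs of the two backbones before two extra convolutional layers, amounts to channel-wise late fusion. A minimal sketch with NumPy, using hypothetical feature-map shapes (the paper's exact layer sizes are not given here; Inception V3's output is assumed resized to match VGG16's 7×7 spatial grid):

```python
import numpy as np

# Hypothetical per-batch feature maps from the two backbones
# (batch, height, width, channels); shapes are illustrative only.
vgg_features = np.random.rand(4, 7, 7, 512)     # VGG16 final conv block
incep_features = np.random.rand(4, 7, 7, 2048)  # Inception V3, resized to 7x7

# Late fusion: concatenate along the channel axis; the fused tensor
# then feeds the two additional convolutional layers of VI-NET.
fused = np.concatenate([vgg_features, incep_features], axis=-1)
print(fused.shape)  # (4, 7, 7, 2560)
```

Concatenation (rather than, say, element-wise addition) preserves both feature sets intact and lets the subsequent convolutional layers learn how to weight them.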
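The three validation protocols follow the usual K-fold pattern: the data is split into K equal folds, one fold is held out for testing, and the rest are used for training, so K10 yields a 90%/10% split, K5 yields 80%/20%, and K2 yields 50%/50%. A plain-Python sketch (the function name and dataset size are illustrative):

```python
# Split n_samples indices into k folds; for each fold, yield the
# held-out test indices and the remaining training indices.
def kfold_splits(n_samples, k):
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

# K10 protocol on a hypothetical 100-sample dataset: 90 train / 10 test
train, test = next(kfold_splits(100, 10))
print(len(train), len(test))  # 90 10
```

Reported metrics such as 99 ± 0.2% are then the mean and spread of accuracy across the K folds.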

