International Conference on Inventive Computation Technologies

DeepGrip: Cricket Bowling Delivery Detection with Superior CNN Architectures



Abstract

A delivery in cricket is the act of bowling the ball towards the batsman. The outcome of the delivery hinges largely on the bowler's grip: whether the ball will turn sharply or go straight on with the arm depends entirely upon it. To the batsman, the bowler's grip is one of the biggest enigmas; without recognizing the grip, and thus having no clue about the likely behavior of the ball, a mis-hit is the most probable outcome, given the variety of bowling present in modern-day cricket. This paper proposes a novel strategy to identify the type of delivery from the bowler's finger grip at the moment of delivery. The main purpose of this research is to use a preliminary CNN architecture and transfer-learning models to classify bowlers' grips accurately. A new dataset of 5573 images, named the GRIP DATASET, was prepared for this research from real-time videos processed offline; it consists of grip images belonging to 13 different classes. The preliminary CNN model and the pre-trained transfer-learning models VGG16, VGG19, ResNet101, ResNet52, DenseNet, MobileNet, AlexNet, Inception V3, and NASNet were trained on the GRIP DATASET, and their grip-classification results were analyzed. The training and validation accuracies of the models are noteworthy, with the preliminary model reaching a maximum validation accuracy of 98.75%. This study is expected to be yet another stepping stone in the use of deep learning for the game of cricket.
