Journal: Ecology and Evolution

Improving the accessibility and transferability of machine learning algorithms for identification of animals in camera trap images: MLWIC2



Abstract

Motion‐activated wildlife cameras (or "camera traps") are frequently used to remotely and noninvasively observe animals. The vast number of images collected from camera trap projects has prompted some biologists to employ machine learning algorithms to automatically recognize species in these images, or at least filter out images that do not contain animals. These approaches are often limited by model transferability, as a model trained to recognize species from one location might not work as well for the same species in different locations. Furthermore, these methods often require advanced computational skills, making them inaccessible to many biologists. We used 3 million camera trap images from 18 studies in 10 states across the United States of America to train two deep neural networks, one that recognizes 58 species, the "species model," and one that determines if an image is empty or if it contains an animal, the "empty‐animal model." Our species model and empty‐animal model had accuracies of 96.8% and 97.3%, respectively. Furthermore, the models performed well on some out‐of‐sample datasets, as the species model had 91% accuracy on species from Canada (accuracy range 36%–91% across all out‐of‐sample datasets) and the empty‐animal model achieved an accuracy of 91%–94% on out‐of‐sample datasets from different continents. Our software addresses some of the limitations of using machine learning to classify images from camera traps. By including many species from several locations, our species model is potentially applicable to many camera trap studies in North America. We also found that our empty‐animal model can facilitate removal of images without animals globally.
We provide the trained models in an R package (MLWIC2: Machine Learning for Wildlife Image Classification in R), which contains Shiny Applications that allow scientists with minimal programming experience to use trained models and train new models in six neural network architectures with varying depths.
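The empty‐animal filtering workflow described in the abstract can be sketched generically: a binary classifier assigns each image a probability of being empty, and likely‐empty images are set aside so that biologists only review images that probably contain animals. The sketch below is an illustration of that idea only, not MLWIC2's actual R interface; the function name, file names, and threshold are hypothetical.

```python
# Illustrative sketch of empty-image filtering (hypothetical names, not
# MLWIC2's API). Assumes an empty-vs-animal classifier has already been
# run, yielding a per-image probability that the image is empty.

def filter_empty_images(predictions, threshold=0.95):
    """Split images into those to keep for review and those to discard.

    predictions: list of (filename, p_empty) pairs, where p_empty is the
        classifier's probability that the image contains no animal.
    threshold: discard an image only when the model is at least this
        confident it is empty (a conservative cutoff limits false removals).
    """
    keep, discard = [], []
    for filename, p_empty in predictions:
        if p_empty >= threshold:
            discard.append(filename)  # likely empty: skip manual review
        else:
            keep.append(filename)     # may contain an animal: review it
    return keep, discard

# Example with made-up scores:
preds = [("img001.jpg", 0.99), ("img002.jpg", 0.10), ("img003.jpg", 0.97)]
keep, discard = filter_empty_images(preds)
# keep -> ["img002.jpg"]; discard -> ["img001.jpg", "img003.jpg"]
```

Raising the threshold trades review effort for safety: fewer animal images are discarded by mistake, at the cost of manually inspecting more truly empty images.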
