International Conference on Artificial Intelligence and Soft Computing

Multi-class and Cluster Evaluation Measures Based on Renyi and Tsallis Entropies and Mutual Information



Abstract

The evaluation of cluster and classification models against ground-truth information or against other models remains an open objective in many applications. Frequently, it leads to controversial debates about informative content. This holds in particular for cluster evaluations; yet similar problems occur for imbalanced class cardinalities. One way to handle evaluation tasks more naturally is to frame comparisons in terms of shared or non-shared information. Information-theoretic quantities such as mutual information and divergence are designed to answer such questions. Besides formulations based on the most prominent Shannon entropy, alternative definitions based on relaxed entropy concepts are known, for example the Rényi and Tsallis entropies. The use of these entropy concepts obviously results in a readjustment of mutual information and of the respective evaluation measures derived from it. In the present paper we consider several information-theoretic evaluation measures based on different entropy concepts and compare them theoretically as well as with regard to their performance in applications.
