THE CO-INFORMATION LATTICE

Abstract

In 1955, McGill published a multivariate generalisation of Shannon's mutual information. Algorithms such as Independent Component Analysis use a different generalisation, the redundancy, or multi-information. McGill's concept expresses the information shared by all of K random variables, while the multi-information expresses the information shared by any two or more of them. Partly to avoid confusion with the multi-information, I call his concept here the co-information. Co-informations, oddly, can be negative. They form a partially ordered set, or lattice, as do the entropies. Entropies and co-informations are simply and symmetrically related by Moebius inversion. The co-information lattice sheds light on the problem of approximating a joint density with a set of marginal densities, though as usual we run into the partition function. Since the marginals correspond to higher-order edges in Bayesian hypergraphs, this approach motivates new algorithms such as Dependent Component Analysis, which we describe, and (loopy) Generalised Belief Propagation on hypergraphs, which we do not. Simulations of subspace-ICA (a tractable DCA) on natural images are presented on the web. In neural computation theory, we identify the co-information of a group of neurons (possibly in space/time-staggered patterns) with the 'degree of existence' of a corresponding cell assembly.
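
A minimal sketch of the definition in Python, assuming McGill's alternating-sum convention I(S) = -Σ_{T⊆S} (-1)^{|T|} H(T) with H(∅) = 0, under which the two-variable case reduces to Shannon's mutual information; the helper names are illustrative, not from the paper. Applied to the XOR triple, where X and Y are independent fair bits and Z = X xor Y, it yields -1 bit, the standard example of a negative co-information:

    from collections import Counter
    from itertools import combinations
    from math import log2

    def entropy(samples, idx):
        # Shannon entropy (in bits) of the marginal over the variables in idx.
        counts = Counter(tuple(s[i] for i in idx) for s in samples)
        n = len(samples)
        return -sum(c / n * log2(c / n) for c in counts.values())

    def co_information(samples, k):
        # I(S) = -sum_{T subset of S} (-1)^|T| H(T); the empty set contributes 0.
        return -sum((-1) ** r * entropy(samples, idx)
                    for r in range(1, k + 1)
                    for idx in combinations(range(k), r))

    # XOR triple: every pair is independent, yet any two variables determine the third.
    samples = [(x, y, x ^ y) for x in (0, 1) for y in (0, 1)]
    print(co_information(samples, 3))  # prints -1.0

Moebius inversion runs the same alternating sum in the other direction, recovering each joint entropy from the co-informations of its subsets: H(S) = -Σ_{T⊆S} (-1)^{|T|} I(T).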

