Mathematical Problems in Engineering: Theory, Methods and Applications

Rational Probabilistic Deciders - Part I: Individual Behavior

Abstract

This paper is intended to model a decision maker as a rational probabilistic decider (RPD) and to investigate its behavior in stationary and symmetric Markov switch environments. RPDs take their decisions based on penalty functions defined by the environment. The quality of decision making depends on a parameter referred to as the level of rationality. The dynamic behavior of RPDs is described by an ergodic Markov chain. Two classes of RPDs are considered: local and global. The former take their decisions based on the penalty in the current state, while the latter consider all states. It is shown that asymptotically (in time and in the level of rationality) both classes behave quite similarly. However, the second largest eigenvalue of the Markov transition matrices for global RPDs is smaller than that for local ones, indicating faster convergence to the optimal state. As an illustration, the behavior of a chief executive officer, modeled as a global RPD, is considered, and it is shown that the company performance may or may not be optimized, depending on the pay structure employed. While the current paper investigates individual RPDs, a companion paper will address collective behavior.
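
The abstract does not spell out the decision rule or the transition matrices, so the following is a minimal illustrative sketch only, assuming a softmax (Boltzmann) choice rule in which a parameter beta stands in for the level of rationality; the penalty values and the Metropolis-style acceptance rule used for the local decider are hypothetical, not taken from the paper. The sketch merely reproduces the qualitative claim above: the global decider's transition matrix has a smaller second-largest eigenvalue magnitude than the local decider's, i.e., it converges faster to the low-penalty state.

import numpy as np

# Illustrative assumptions only: the paper's exact transition rules are not
# given in this abstract. Penalties and beta ("level of rationality") are
# hypothetical values chosen for the demonstration.
penalties = np.array([0.1, 0.5, 0.9])   # penalty of each decision state
beta = 5.0                               # assumed level of rationality

def global_rpd_matrix(pen, beta):
    # Global decider: from any state, move to state j with probability
    # proportional to exp(-beta * penalty_j), i.e., all states are weighed.
    w = np.exp(-beta * pen)
    p = w / w.sum()
    return np.tile(p, (len(pen), 1))

def local_rpd_matrix(pen, beta):
    # Local decider: propose a uniformly random other state and accept it
    # with a probability driven by the penalty difference with the current
    # state (a Metropolis-like rule, used here purely for illustration).
    n = len(pen)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if j != i:
                accept = 1.0 / (1.0 + np.exp(beta * (pen[j] - pen[i])))
                P[i, j] = accept / (n - 1)
        P[i, i] = 1.0 - P[i].sum()   # stay put with the remaining probability
    return P

for name, P in [("local", local_rpd_matrix(penalties, beta)),
                ("global", global_rpd_matrix(penalties, beta))]:
    eigs = np.sort(np.abs(np.linalg.eigvals(P)))[::-1]
    print(f"{name} RPD: second-largest |eigenvalue| = {eigs[1]:.4f}")

In this toy model the global matrix has identical rows, so its second-largest eigenvalue is exactly zero, while the local proposal-and-accept chain mixes more slowly; this matches the ordering stated in the abstract, although the paper's actual construction may differ.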
