The Journal of Portfolio Management > Timing Small versus Large Stocks

Timing Small versus Large Stocks


Abstract

U.S. equity managers who adopt an alpha-generating strategy based on a small-size bet would, on average, earn a positive expected alpha in the long run, but they could also experience long periods of underperformance. The classic small-minus-big (SMB) strategy, which systematically favors small-caps, may well be too naive, and size timing, even if risky, can present an opportunity to add further value. We show that strategies based on three artificial intelligence approaches—recursive partitioning, neural networks, and genetic algorithms—could successfully time the U.S. size premium over the 1990-2004 period. Six individual timing strategies are examined: the three artificial intelligence approaches, each conditioned either on historical data (1975-1989) or on an expanding window of recent data (1975 through the month preceding the prediction). Five of the six outperform the SMB premium. None of the six timing strategies systematically outperforms the SMB strategy during the three five-year sub-periods examined. Yet a strategy based on the majority rule (that is, a strategy favored by at least two of the three artificial intelligence approaches) outperformed the SMB strategy in each sub-period. Not only does the consensus strategy benefit from stronger predictive signals, but it also reduces the number of bets and thus transaction costs. Five of the six timing strategies, as well as the consensus strategies, remain profitable even after transaction costs. Although all methods have their merits, recursive partitioning could be favored for its much greater transparency and ease of interpretation. In the case of neural networks and genetic algorithms, we deal mostly with black boxes. For investors who favor results over understanding, the black-box syndrome is not a serious issue; but when the model does fail (as each method does in one sub-period), the investor will find it quite difficult to see what went wrong, given the opaque nature of the model.
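The majority-rule consensus described above can be sketched in a few lines. This is an illustrative assumption, not the authors' implementation: each model is encoded as emitting +1 (small-cap tilt) or -1 (large-cap tilt), and the consensus follows whichever tilt at least two of the three models favor.

```python
def consensus_signal(tree: int, nn: int, ga: int) -> int:
    """Majority vote over three model signals.

    Each input is +1 (go long small-caps, short large-caps) or
    -1 (go long large-caps, short small-caps). With three binary
    voters there is always a majority of at least two.
    """
    votes = [tree, nn, ga]
    return 1 if votes.count(1) >= 2 else -1

# Recursive partitioning and the neural network both favor small-caps,
# so the consensus takes the small-cap tilt even though the genetic
# algorithm disagrees.
print(consensus_signal(1, 1, -1))  # → 1
```

Because the consensus only switches position when the majority flips, it trades less often than any single model, which is the source of the transaction-cost saving noted in the abstract.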
In this presentation, we consider only extreme bets: 100% long in small-caps and 100% short in large-caps, and vice versa. This follows Fox [1999], who stresses that managers with superior forecasting skills (a 60% hit rate or higher) should favor more extreme tilts because they improve the entire range of possible returns. Still, we can reasonably conceive of less extreme strategies that would allow for a neutral allocation when choices are less clear-cut. Therefore, considering three states of the world—small-cap tilt, large-cap tilt, and no tilt—could be more interesting, as it could yield stronger predictive signals and fewer switches (lower transaction costs).
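One way to realize the three-state variant suggested above is to tilt only when the models agree unanimously and to stay neutral otherwise. The unanimity threshold and the 0/±1 encoding are assumptions for illustration; the paper does not specify how a neutral state would be triggered.

```python
def three_state_signal(tree: int, nn: int, ga: int) -> int:
    """Three-state allocation signal (illustrative assumption).

    Each input is +1 (small-cap tilt) or -1 (large-cap tilt).
    Returns +1 or -1 only when all three models agree; otherwise
    returns 0, meaning a neutral (no-tilt) allocation.
    """
    votes = [tree, nn, ga]
    if all(v == 1 for v in votes):
        return 1    # unanimous small-cap tilt
    if all(v == -1 for v in votes):
        return -1   # unanimous large-cap tilt
    return 0        # mixed signals: stay neutral

print(three_state_signal(1, -1, 1))  # → 0 (no clear-cut choice)
```

Requiring unanimity makes the tilted positions rarer but backed by stronger agreement, matching the abstract's intuition that a neutral state could yield stronger signals and fewer switches.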
