
Artificial intelligence: looking through the Pygmalion Lens



Abstract

The AI debate brings to our notice, on the one hand, the danger of singularity, and on the other, the enchantment of the AI Futures of a data-driven world. Singularity is seen in terms of human beings 'enslaved by enormously intelligent computers', supported by the claim that 'humans are no more than biological machines'. The enchantment of AI Futures is stimulated by entrepreneurial opportunities offered by deep learning and machine learning tools in domains ranging from health and medicine to Industry 4.0 projects. Whilst these narratives continue to evolve, we feel the wrath of the 'god of algorithms' when, helpless, we are confronted by the customer relations sermon, "the Computer Says NO", bereft of any common-sense decisions. Tempting though the digital sermon of the intelligent machine may be to the tech prophets, the concern here is with how we would cope with gaps between the complexity and ambiguity of our living world and the unpredictable algorithmic miscalculation. In exploring this concern, our attention is drawn to a new universal narrative of "Dataism" (Harari 2015), propagated by the new high-tech 'Platonians of the Silicon Valley'. This narrative legitimizes the authority of a giant data flow system, defined by algorithms and inhabited by emails, blogs, Apps, Facebook, Twitter, Amazon and Google. It is as if the Pygmalion AI philosophers of today are enthralled by the universality of the Turing Machine, and are engaged in anthropomorphizing the robot 'Eliza' into a 'robotic duchess' of human society. This algorithmic manipulation is not only continuing the historical disconnect of language from its cultural bearings; it is leading us to subconsciously accept the imitation machine as an 'unpalatable truth', or to a 'willful blindness' limiting our ability to imagine the 'unthinkable'.
Phil Rosenzweig (https://www.mckinsey.com/business-functions/strategy-and-corporate-finance/our-insights/the-benefits-and-limits-of-decision-models) asks us to understand the limits of the predictability of data-driven decision models, technically dazzling as they are, for example in detecting fraudulent credit-card use and predicting rainfall. But these predictions can change the behaviour of neither card users nor farmers without wise counselling of the card users, and without the wisdom of the farmers' experiential knowledge to manage and improve crop yields. Data-driven decision models, in computing predictions over complex and large databases, 'may relieve the decision makers of some of the burden; but the danger is that these decision models are often so impressive that it's easy to be seduced by them', and to overlook the need to use them wisely. As Rosenzweig says, 'the challenge thus isn't to predict what will happen but to make it happen', and to control and avoid the adverse happenings. Whilst social media in the form of Facebook, Twitter and Google, powered by the intelligent machine, draws and captures our attention, we are in danger of becoming mere passive observers and losing sight of the new social, cultural, ethical, and political tensions created by the intelligent machine. These new tensions exacerbate the already current conditions of conflict, vulnerability, and instability arising from globalization. In the pursuit of a new paradigm of artificial intelligence for the common good, we need to reflect on the potential and limits of the dream of the exact language, and the limits of the digital discourse promoted by the proponents of the intelligent machine.

Bibliographic record

  • Source
    AI & Society | 2018, Issue 4 | pp. 459-465 | 7 pages
  • Author

    Karamjit S. Gill;

  • Affiliation

    University of Brighton, Brighton, UK;

  • Indexing information
  • Original format: PDF
  • Language: English (eng)
  • CLC classification
  • Keywords
