
Can Artificial Intelligence Be Safe?



Abstract

Many of the recommended practices for producing safety-related software involve computer tools that themselves qualify for the attribute "artificial intelligence", although this is not always evident. The conversion from free text to a formal notation is admittedly done by highly qualified humans, but most of the remaining work is done by machines. Depending on the outcome of each intermediate program, either the input must be modified and that program re-run, or the next program can be started immediately. Since all of the programs produce electronic output, it would be a simple task to automate the entire process, for example with suitable scripts. In addition, the formal proof of railway interlocking systems relies on sophisticated computer programs that perform complex logical tasks. So "logical artificial intelligence" is effectively already being used in safety-related systems; to answer the question in the title of this paper: yes, it already is!

That being said, the next step must be to develop methods for assessing the safety of artificially intelligent systems. Some of the technologies will certainly qualify as proven in use, but for the more sophisticated technologies it will take a long time to accumulate the amount of experience necessary for such a qualification. For safety-related software, the standards recognise that a quantitative demonstration of safety is not possible, so a qualitative approach is taken: by demonstrating that well-structured and controlled processes have been applied, one can judge the quality of the software and infer its suitability for a safety-related application. Currently, the standards do not regard artificial intelligence as a well-structured and controllable process, and therefore reject it.
This is somewhat contradictory, because an artificially intelligent system is itself a programmed system; if the accepted methods and tools are applied when the artificially intelligent system is created, it should be possible to qualify it for a safety-related application.
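The scripted automation described in the abstract can be sketched briefly. The stage names and pass/fail checks below are hypothetical placeholders for the real tools (translator, prover, code generator), not the authors' actual toolchain; the sketch only illustrates the gating logic of "re-run after modification, or proceed":

```python
# Minimal sketch of chaining the intermediate programs with a script.
# Each stage takes the previous stage's electronic output and returns
# (ok, output); on failure the pipeline stops so the input can be
# revised and the failing stage re-run.

def run_pipeline(stages, data):
    """Run stages in order; return (failed_stage_name_or_None, data)."""
    for name, stage in stages:
        ok, data = stage(data)
        if not ok:
            return name, data  # input must be modified, stage re-run
    return None, data          # all stages succeeded

# Dummy stages standing in for the real programs described in the text.
stages = [
    ("formalise", lambda text: (True, text.upper())),          # to formal notation
    ("verify",    lambda spec: ("INVALID" not in spec, spec)), # logical check
    ("generate",  lambda spec: (True, "code for " + spec)),    # code generation
]

failed, output = run_pipeline(stages, "valid spec")
print(failed)  # None: every stage accepted its input
```

A real toolchain would invoke the external programs (for example via `subprocess`) and gate on their exit codes, but the control flow is the same.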
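The "logical artificial intelligence" used in interlocking proofs can be illustrated with a toy example. The rules below are invented for illustration and vastly simpler than a real proof engine, but they show how a program can exhaustively establish a safety property:

```python
from itertools import product

# Toy interlocking: two conflicting routes A and B share a set of points.
# Inputs: route_a_set, route_b_set, points_normal (points position).
# Safety property: the two conflicting routes are never both cleared.

def route_a_clear(route_a_set, route_b_set, points_normal):
    # Route A may only be cleared if it is set, route B is not set,
    # and the points lie in the normal position.
    return route_a_set and not route_b_set and points_normal

def route_b_clear(route_a_set, route_b_set, points_normal):
    # Route B additionally requires the points in the reverse position.
    return route_b_set and not route_a_set and not points_normal

def safety_holds():
    # Exhaustively check all input combinations -- a brute-force
    # stand-in for the logic engines used in real interlocking proofs.
    for a, b, p in product([False, True], repeat=3):
        if route_a_clear(a, b, p) and route_b_clear(a, b, p):
            return False  # counterexample: both routes cleared at once
    return True

print(safety_holds())  # True: no input combination clears both routes
```

Real interlocking verification works on formal models with thousands of variables, which is precisely why it needs the sophisticated programs the abstract mentions.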
