Conference: Safety-critical systems symposium
Safety-Critical Software and Safety-Critical Artificial Intelligence: Integrating New Practices and New Safety Concerns for AI systems
Bayesian Networks (BNs) are commonly used in a range of Artificial Intelligence (AI) based systems. Example applications include aspects of navigation systems aboard autonomous vehicles and advanced disease and fault diagnosis systems. The nature of BNs introduces assurance concerns beyond those typical of conventional software systems. For example, BNs are often designed and built using large amounts of operational data in combination with subjective expert knowledge. A BN is an abstract probabilistic graphical model that represents a target domain or problem. The structure and parameterisation of this data-driven model influence the behaviour of any software system utilising it. A number of assurance considerations arise from the interactions between these data- and model-focussed system aspects. This paper introduces a set of system viewpoints that integrate the concerns and practices of AI practitioners with those of conventional software assurance practices. This approach captures concerns that are broadly applicable to other AI technologies and highlights several inadequacies in existing approaches to software assurance.
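To make the assurance concern concrete, the following minimal sketch shows a two-node BN (a hypothetical Fault → Alarm diagnosis model, not from the paper). The prior and conditional probabilities are invented for illustration; in a real system they would be derived from operational data and expert elicitation, which is exactly where the structure- and parameterisation-related assurance questions arise.

```python
# A two-node Bayesian Network: Fault -> Alarm.
# All probabilities below are illustrative assumptions; in practice
# they would come from operational data and/or expert judgement.

# Prior probability of a fault: P(Fault)
p_fault = 0.01

# Conditional probability table for the Alarm node: P(Alarm | Fault)
p_alarm_given_fault = 0.95      # detection rate (sensitivity)
p_alarm_given_no_fault = 0.02   # false-alarm rate

def posterior_fault_given_alarm():
    """Infer P(Fault | Alarm) via Bayes' theorem."""
    # Marginal probability of an alarm: P(Alarm)
    p_alarm = (p_alarm_given_fault * p_fault
               + p_alarm_given_no_fault * (1 - p_fault))
    return p_alarm_given_fault * p_fault / p_alarm

print(round(posterior_fault_given_alarm(), 3))  # -> 0.324
```

Even in this tiny model, the inferred behaviour is highly sensitive to the chosen parameters: halving the assumed false-alarm rate roughly doubles the posterior belief in a fault, illustrating how the data-driven parameterisation directly shapes the behaviour of the software that consumes the model.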