The all-pervasive nature of software calls into question our trust in many safety-critical software systems (SCSS), a term denoting systems in which a software failure (or, in some cases, even correct behaviour under unexpected circumstances) may pose a serious threat to people, property, or the environment. Traditional hardware RAMS analysis has developed quantitative and qualitative methods to estimate the Reliability, Availability, Maintainability and Safety of systems. As far as safety is concerned, two main approaches are used to assess it: 1. demonstrating that the probability of the top event is low enough, or 2. logically inferring that a hazardous event is impossible or, at least, that all mitigation measures have been taken should it occur. Our aim with respect to safety-critical software systems has been to investigate which state-of-the-art methods, belonging to these two parallel "paths", appear more effective in the assessment of safety. A historical analysis has been conducted on the basis of a series of past incidents and accidents in various fields of technology. The results comprise some considerations on the difficulty of historical analysis itself and indications about the most common causes of software failures, namely mistakes in requirements definition.