Current approaches in science, including most machine and deep learning methods, rely at their core on traditional statistics and information theory. These theories, however, are known to fail to capture fundamental properties of data and of the world related to recursive and computable phenomena, and they are ill-equipped to deal with high-level functions such as inference, abstraction, modelling and causation, which leaves them fragile and easily deceived. How is it, then, that some of these approaches have (apparently) been applied successfully? We survey recent attempts to adopt more powerful, albeit more difficult, methods based on the theories of computability and algorithmic probability, which may eventually capture these higher-level elements of human intelligence. We argue that a fundamental challenge for science is to shorten the cycle by which proven mathematical tools are adopted, leaving behind old practices in favour of new ones. Randomness is a case in point: science continues to rely on purely statistical tools to disentangle randomness from meaning, privileging regression and correlation, even though mathematics has made important advances in characterising randomness that have yet to be incorporated into scientific theory and practice.
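The claim that purely statistical measures cannot separate randomness from structure can be made concrete with a small sketch. The snippet below is illustrative only and is not the method advanced here; it uses lossless compression (zlib) as a crude upper bound on algorithmic complexity. A highly regular binary string and a pseudorandom one can look equally balanced to a simple frequency (chi-square) statistic, while the compression-based proxy separates them at once. The helper names and the choice of compressor are assumptions made for the example.

```python
import random
import zlib
from collections import Counter


def chi_square_uniformity(bits: str) -> float:
    """Frequency (chi-square) statistic for a binary string:
    small values mean the 0/1 counts look statistically balanced."""
    counts = Counter(bits)
    expected = len(bits) / 2
    return sum((counts.get(b, 0) - expected) ** 2 / expected for b in "01")


def compression_ratio(bits: str) -> float:
    """Compressed size over original size, a rough upper-bound proxy for
    algorithmic (Kolmogorov) complexity: regular strings compress well
    even when their symbol frequencies look perfectly random."""
    data = bits.encode()
    return len(zlib.compress(data, 9)) / len(data)


n = 4096
# A string produced by a very short program: alternating blocks of 0s and 1s.
structured = ("0" * 8 + "1" * 8) * (n // 16)
# A string from a pseudorandom generator (itself generated by a short program).
pseudo = "".join(random.choice("01") for _ in range(n))

for name, s in [("structured", structured), ("pseudorandom", pseudo)]:
    print(f"{name:>12}: chi2 = {chi_square_uniformity(s):6.2f}  "
          f"compression ratio = {compression_ratio(s):.3f}")
```

Both strings pass the frequency test (chi-square near zero), yet the structured string compresses to a small fraction of its length while the pseudorandom one does not, illustrating the distinction that statistical tools alone miss.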