Developments in AI are moving fast, but is top-down regulation the best approach? "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war." That was the terrifying-sounding statement signed by more than 350 business leaders and researchers at the end of May. Released by the non-profit Center for AI Safety, the statement's signatories included the astronomer Martin Rees, who is known for his deep thoughts about the future of humanity.