An interim AI safety report coming out of the Bletchley Declaration shows AI experts are not in agreement over some of the biggest risks
An interim artificial intelligence (AI) safety report has highlighted the lack of universal agreement among AI experts on a range of topics, including both the state of current AI capabilities and how these could evolve over time.
The International Scientific Report on the Safety of Advanced AI was among the key commitments to emerge from the Bletchley Park discussions as part of the landmark Bletchley Declaration.
The report explores differing opinions on the likelihood of extreme risks that could impact society, such as large-scale unemployment, AI-enabled terrorism and a loss of control over the technology.
The experts who took part in the report broadly agreed that society and policymakers in government need to prioritise improving their understanding of the impact of AI technology.
The report’s chair, Yoshua Bengio, said: “When used, developed and regulated responsibly, AI has incredible potential to be a force for positive transformative change in almost every aspect of our lives. However, because of the magnitude of impacts, the dual use and the uncertainty of future trajectories, it is incumbent on all of us to work together to mitigate the associated risks in order to be able to fully reap these benefits.
“Governments, academia and the wider society need to continue to advance the AI safety agenda to ensure we can all harness AI safely, responsibly and successfully.”
Initially launched as the State of the Science report last November, the report unites a diverse global team of AI experts, including an Expert Advisory Panel with representatives from 30 nations, as well as from the United Nations and the European Union.
Secretary of state for science, innovation and technology Michelle Donelan said: “Building on the momentum we created with our historic talks at Bletchley Park, this report will ensure we can capture AI’