Two codes of practice are now available to help developers boost the security of their AI applications
The UK government has published guidance to help artificial intelligence (AI) developers improve the cyber security of AI models.
The Department for Science, Innovation and Technology (DSIT) said the measures, aimed at developers, are intended to protect AI models from hacking and to form the basis of a future global standard. The government hopes the codes of practice will help businesses innovate safely and drive economic growth across the nation.
Rosamund Powell, research associate at The Alan Turing Institute, said: “AI systems come with a wide range of cyber security risks which often go unaddressed as developers race to deploy new capabilities. The code of practice released today provides much-needed practical support to developers on how to implement a secure-by-design approach as part of their AI design and development process.
“Plans for it to form the basis of a global standard are crucial given the central role international standards already play in addressing AI safety challenges through global consensus. Research highlights the need for inclusive and diverse working groups, accompanied by incentives and upskilling for those who need them, to ensure the success of global standards like this.”
DSIT said the codes set out requirements for developers to make their products resilient against tampering, hacking and sabotage, with the aim of boosting confidence in the use of AI models across industries.
“We have always been clear that to harness the enormous potential of the digital economy, we need to foster a safe environment for it to grow and develop,” said technology minister Saqib Bhatti. “This is precisely what we are doing with these new measures, which will help make AI models resilient