
Singapore will soon release instructions it says will offer “practical measures” to bolster the security of artificial intelligence (AI) tools and systems.
The Cyber Security Agency (CSA) is slated to publish its draft Technical Guidelines for Securing AI Systems for public consultation later this month, said Janil Puthucheary, Singapore's Senior Minister of State for the Ministry of Communications and Information.
The voluntary guidelines can be adopted alongside existing security processes that organizations implement to address potential risks in AI systems, said Puthucheary during his opening speech Wednesday at the Association of Information Security Professionals (AiSP) AI security summit.
Through the technical guidelines, CSA hopes to offer a useful reference for cybersecurity professionals looking to improve the security of their AI tools, the minister said.
He further urged the industry and community to do their part in ensuring AI tools and systems remain safe and secure against malicious threats, even as techniques continue to evolve.
“Over the past couple of years, AI has proliferated rapidly and been deployed in a wide variety of spaces,” he said. “This has significantly impacted the threat landscape. We know this rapid development and adoption of AI has exposed us to many new risks, [including] adversarial machine learning, which allows attackers to compromise the function of the model.”
He pointed to how researchers at security vendor McAfee were able to trick a Mobileye camera system by making small alterations to the speed limit signs the AI had been trained to recognize, causing it to misread the posted limit.
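The adversarial machine learning Puthucheary describes works by nudging an input just enough to flip a model's decision. The McAfee sign-tampering attack did this physically; the same idea can be shown numerically. Below is a minimal sketch in the style of the fast gradient sign method (FGSM) against a toy logistic-regression classifier. All weights, inputs, and the epsilon budget are illustrative assumptions, not taken from any real system.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights for a 4-feature binary classifier.
w = np.array([1.5, -2.0, 0.5, 1.0])
b = 0.1

x = np.array([0.2, -0.1, 0.4, 0.3])  # a "benign" input
p = sigmoid(w @ x + b)               # model's confidence for class 1

# For true label y = 1, the gradient of the logistic loss w.r.t. the
# input is (p - y) * w; FGSM steps in the sign of that gradient.
y = 1.0
grad_x = (p - y) * w
eps = 0.25                           # small per-feature perturbation budget
x_adv = x + eps * np.sign(grad_x)

p_adv = sigmoid(w @ x_adv + b)
print(round(p, 3), round(p_adv, 3))  # confidence drops after the perturbation
```

Even though each feature moves by at most 0.25, the perturbation is aligned with the loss gradient, so the model's confidence collapses and the predicted class flips, which is the essence of the attacks the guidelines aim to address.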
AI is fueling new security risks, and public and private sector organizations must work to understand this evolving threat landscape, Puthucheary said.
He noted that Singapore's government CIO, the Government Technology Agency (GovTech), is developing capabilities to simulate potential attacks on AI systems, so it can understand how such attacks affect the security of these platforms.
“By doing so, this will help us to put the right safeguards in place,” he said.
He added that efforts to better guard against existing threats must continue, as AI is vulnerable to “classic” cyber threats, such as those targeting data privacy. He noted that the growing adoption of AI will expand the attack surface through which data can be exposed, compromised, or leaked.
He said AI can be tapped to produce increasingly sophisticated malware; tools such as WormGPT, for instance, can generate malicious code and phishing campaigns that are difficult for existing security systems to detect.
At the same time, AI can be leveraged to improve cyber defense and arm security professionals with the ability to identify risks faster, at scale, and with better precision, the minister said. He said security tools powered by machine learning can help detect anomalies and launch autonomous action to mitigate potential threats.
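To make the defensive side concrete, here is an illustrative sketch (not any vendor's product) of the pattern the minister describes: an anomaly detector trained on normal activity that flags outliers and triggers an automated response. It uses scikit-learn's IsolationForest on invented login-event features; the feature choices, thresholds, and "quarantine" action are all assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulated "normal" login events: [hour_of_day, bytes_transferred_kb],
# clustered around office hours and modest transfer sizes.
normal = np.column_stack([
    rng.normal(13, 2, 500),    # around 1 pm
    rng.normal(200, 50, 500),  # roughly 200 KB
])

# Fit the detector on baseline traffic only.
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# Two new events: one routine, one suspicious (3 am, huge transfer).
events = np.array([[13.0, 210.0], [3.0, 5000.0]])
labels = model.predict(events)  # 1 = normal, -1 = anomaly

# Autonomous mitigation hook: act on flagged events without waiting
# for an analyst.
actions = ["quarantine session" if lab == -1 else "allow" for lab in labels]
print(list(zip(events.tolist(), actions)))
```

The design choice worth noting is that the model learns only what "normal" looks like, so it can surface novel attack patterns it was never shown, which is why this class of tool scales better than signature matching.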
According to Puthucheary, AiSP is setting up an AI special interest group in which its members can exchange insights on developments and capabilities. Established in 2008, AiSP describes itself as an industry group focused on advancing the technical competence and interests of Singapore's cybersecurity community.
In April, the US National Security Agency’s AI Security Center released an information sheet, Deploying AI Systems Securely, which it said offered best practices on deploying and operating AI systems.
Developed jointly with the US Cybersecurity and Infrastructure Security Agency (CISA), the guidelines aim to enhance the integrity and availability of AI systems and create mitigations for known vulnerabilities in AI systems. The document also outlines methodologies and controls to detect and respond to malicious activities against AI systems and related data.