It’s no secret that businesses around the world are implementing artificial intelligence to gain a competitive advantage in their industries. AI could have an outsize impact on corporate cybersecurity, as well, according to a new study of 2,486 information technology and security professionals, conducted by Google Cloud and the Cloud Security Alliance.
Fifty-five percent of companies around the globe plan to use AI to improve corporate cybersecurity in 2024, according to the State of AI and Security Survey Report. The survey also found that 21% of IT decision makers think AI can help them create security rules, while 19% say attack simulation and compliance violation detection could ultimately prove to be the most likely cybersecurity use cases in 2024.
“The advent of AI in cybersecurity marks a transformative era in the realm of digital defense, bringing a blend of promising breakthroughs and intricate challenges,” the researchers wrote in their survey. “AI has the potential to be a vital ally in bolstering security defenses, identifying emerging threats, and facilitating swift responses.”
Not everyone is so sure that AI will necessarily improve security in their organization: just 63% of respondents agreed with that sentiment, while 36% were either neutral or disagreed that AI would play an important role in improving their cybersecurity. The survey highlights a stubborn divide between those who believe AI will have an important, potentially transformative impact on companies and those who believe it could be more trouble than it's worth.
Indeed, a quarter of the respondents to the survey said they believed AI would ultimately benefit hackers and other malicious actors, and another 9% said they weren't sure which side AI would benefit most. Taken together, that's 34% who are skeptical or undecided; an identical 34% of respondents said AI would ultimately benefit cybersecurity professionals.
Either way, IT professionals don't necessarily see AI as a threat to their jobs. A plurality of respondents (30%) said AI will likely “enhance” their skill sets, and another 28% said it will likely support them in their cybersecurity roles.
“These findings underscore that while AI will bring significant changes to security teams, it’s primarily seen as a complementary tool rather than a complete replacement,” the researchers said. “It’s set to assist in bridging skills and knowledge gaps that have plagued the industry, but there are healthy concerns about becoming overly reliant on it.”
What about the C-suite?
The study also found that C-level executives, who are often not as engaged with or educated about the changing nature of technology, are fully invested in AI. Seventy-four percent of IT professionals said their executive leadership team is at least moderately aware of AI and how it can be used in the enterprise, while 82% said the corporate AI push is being led not by IT but by senior leadership. Interestingly, IT professionals also reported that their top-level executives seem to know more about AI than their direct reports do.
“C-levels demonstrate a notably higher self-reported familiarity with AI technologies than their staff,” the survey found. “For example, 52% of C-suite executives report being very familiar with generative AI, in stark contrast to only 11% of staff members.”
Looking ahead, the researchers see no sign of AI cybersecurity tools losing their luster in the corporate world. They caution, however, that implementing AI tools too quickly, and doing so without educating staff, could lead to missteps and unintended consequences.
“This complex picture underscores the need for a balanced, informed approach to AI integration in cybersecurity,” the researchers wrote, “combining strategic leadership with comprehensive staff involvement and training to navigate the evolving cyber threat landscape effectively.”