The 2024 Study on the State of AI in Cybersecurity

Sponsored by MixMode, the purpose of this research is to understand the value of artificial intelligence (AI) in strengthening an organization’s security posture and how organizations are integrating these technologies into their security infrastructure. A key challenge organizations face is overcoming AI’s tendency to increase the complexity of their security architecture.

Ponemon Institute surveyed 641 IT and security practitioners in organizations that are at some stage of AI adoption. All respondents are involved in detecting and responding to potentially malicious content or threats targeting their organization’s information systems or IT security infrastructure. They also have some level of responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.

AI improves IT security staff’s productivity and the ability to detect previously undetectable threats — 66 percent of respondents believe the deployment of AI-based security technologies will increase the productivity of IT security personnel. Given the oft-cited problem of a shortage of IT security expertise, this can be an important benefit. According to the research, an average of 34 security personnel are dedicated to the investigation and containment of cyber threats and exploits. About half of respondents (49 percent) say dedicated security personnel have specialized skills relating to the supervision of AI tools and technologies.

Sixty-six percent of respondents say their organizations are using AI to detect attacks across cloud, on-premises and hybrid environments. On average, organizations receive 22,111 security alerts per week; an average of 51 percent of these alerts can be handled by AI without human supervision, and an average of 35 percent are investigated. An average of 9,854 false positives are generated by security tools in a typical week, and an average of 12,009 unknown threats go undetected. Seventy percent of respondents say AI is highly effective in detecting previously undetectable threats.
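For a rough sense of scale, the arithmetic implied by these averages looks like the sketch below. It is illustrative only, since the study reports averages across respondents rather than a single organization’s week.

```python
# Rough arithmetic implied by the survey averages above (illustrative only).
weekly_alerts = 22_111
handled_by_ai = round(weekly_alerts * 0.51)  # ~11,277 alerts handled without human supervision
investigated = round(weekly_alerts * 0.35)   # ~7,739 alerts investigated by staff
print(handled_by_ai, investigated)
```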

“AI is a game-changer for cybersecurity, as it can automate and augment the detection and response capabilities of security teams, as well as reduce the noise and complexity of security operations,” said John Keister, CEO of MixMode. “However, AI also poses new challenges and risks, such as the threat of AI being used for adversarial attacks and the need for specialized operator skills. MixMode understands the complexity of AI and delivers automated capabilities to revolutionize the cybersecurity landscape through our patented self-learning algorithm that can detect threats and anomalies in real-time at high speed and scale. This helps enterprises rapidly recognize new malware and insider risks to help strained security teams automate mundane tasks and focus on higher-level defenses.”

Following are the findings that describe the value of AI and the challenges when leveraging AI to detect and respond to cyberattacks.

The value of AI

AI is valuable when used in threat intelligence and for threat detection. In threat intelligence, AI is mainly used to track indicators such as suspicious hostnames, IP addresses and file hashes (65 percent of respondents). In threat detection, it creates rules based on known patterns and indicators of cyber threats (67 percent of respondents). Other primary uses in threat intelligence are the results of cybercrime investigations and prosecutions (60 percent of respondents) and Tactics, Techniques and Procedures (TTP) reports. TTPs are used to analyze an APT’s operations or to profile a particular threat actor.
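To make the rules-and-indicators approach concrete, here is a minimal, hypothetical sketch of matching events against known-bad hostnames, IP addresses and file hashes. The indicator values, field names and event format are illustrative assumptions, not details from the study.

```python
# Minimal sketch of rule-based IOC matching: compare event fields against
# known-bad hostnames, IP addresses and file hashes (all values are placeholders).
KNOWN_BAD_HOSTNAMES = {"updates.badcdn.example", "login-veriffy.example"}
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}
KNOWN_BAD_HASHES = {"0123456789abcdef0123456789abcdef"}  # placeholder file hash

def match_iocs(event: dict) -> list:
    """Return the indicator types that this event matches."""
    hits = []
    if event.get("hostname") in KNOWN_BAD_HOSTNAMES:
        hits.append("hostname")
    if event.get("dest_ip") in KNOWN_BAD_IPS:
        hits.append("ip_address")
    if event.get("file_hash") in KNOWN_BAD_HASHES:
        hits.append("file_hash")
    return hits

# Usage: flag any event that touches a known indicator of compromise.
event = {"hostname": "updates.badcdn.example", "dest_ip": "192.0.2.10", "file_hash": None}
hits = match_iocs(event)
if hits:
    print("ALERT: event matched known IOCs:", hits)
```

A rules-based check like this is fast and precise for known threats, which is why the study contrasts it with AI approaches aimed at catching previously unseen behavior.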

The benefits of AI include an improved ability to prioritize threats and vulnerabilities and the identification of application security vulnerabilities. Fifty percent of respondents say their security posture improves because they are better able to prioritize threats and vulnerabilities, and 46 percent of respondents say AI identifies application security vulnerabilities.

Most AI adoption is at an early stage of maturity, but organizations are already reaping benefits. Fifty-three percent of respondents say their organizations’ use of AI is at the early stage of adoption, meaning the AI strategy is defined and investments are planned and partially deployed. Only 18 percent of respondents say AI in their cybersecurity activities is fully deployed: security risks are assessed, effectiveness is measured with KPIs, and C-level executives are regularly updated about how AI is preventing and reducing cyberattacks.

AI effectiveness is measured by the financial benefits these technologies deliver. Sixty-three percent of respondents say their organizations measure the decrease in the cost of cybersecurity operations, 55 percent of respondents say they measure increases in revenue and 52 percent of respondents say they measure increases in the SOC team’s productivity in detecting and responding to threats.

Defensive AI is critical to protecting organizations from cyber criminals using AI. Fifty-eight percent of respondents say their organizations are investing in AI to stop AI-driven cybercrimes. Sixty-nine percent of respondents say defensive AI is important to stopping cybercriminals’ ability to direct targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. The basic rules model refers to a traditional approach where predefined rules and signatures are used to analyze and detect threats.

The challenges of AI deployment

Difficulty in applying AI-based controls across the entire enterprise and interoperability issues are the two main barriers to effectiveness. Sixty-one percent of respondents say a deterrent to AI effectiveness is the inability to apply AI-based controls that span the entire enterprise, and 60 percent of respondents say there are interoperability issues among AI technologies.

AI adoption also complicates the integration of AI-based security technologies with legacy systems (65 percent of respondents) and requires simplifying the security architecture to obtain maximum value from AI-based security technologies (64 percent of respondents). Seventy-one percent of respondents say digitization is a prerequisite and critical enabler for deriving value from AI.

Organizations are struggling to identify areas where AI would create the most value. Only 44 percent of respondents say they can accurately pinpoint where to deploy AI. However, 62 percent of respondents say their organizations have a high ability to identify where machine learning would add the most value. A machine learning (ML) model in cybersecurity is a computational algorithm that uses statistical techniques to analyze and interpret data to make predictions or decisions related to security.
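As a concrete illustration of that definition, the sketch below shows a minimal, hypothetical ML model for security: an unsupervised anomaly detector trained on per-host activity features with scikit-learn. The features, data and thresholds are assumptions made for illustration, not details from the study.

```python
# Minimal sketch of an ML model in a security context: an unsupervised
# anomaly detector trained on per-host activity features.
# All features and data are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Illustrative feature columns: logins per hour, MB sent per hour, failed-auth count.
normal_activity = rng.normal(loc=[5.0, 50.0, 1.0], scale=[2.0, 15.0, 1.0], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=42)
model.fit(normal_activity)

# Score new observations: predict() returns -1 for behavior the model flags as anomalous.
new_events = np.array([
    [6.0, 55.0, 0.0],     # routine-looking activity
    [40.0, 900.0, 30.0],  # bursty logins, large transfer, many auth failures
])
print(model.predict(new_events))  # expected roughly: [ 1 -1 ]
```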

Eighty-one percent of respondents say generative AI is highly important. Generative AI refers to a category of AI algorithms that generates new outputs based on the large language models they have been trained on. Sixty-nine percent of respondents say it is highly important to integrate advanced analysis methods, including machine learning and/or behavioral analytics.

Outside expertise is needed to obtain value from AI. Fifty-four percent of respondents say their organizations need outside expertise to maximize the value of AI-based security technologies. Fifty percent of respondents say a reason to invest in AI is to make up for the shortage in cybersecurity expertise.

The lack of budget and internal expertise are barriers to getting value from AI. The top two reasons organizations are not leveraging AI are insufficient budget for AI-based technologies (56 percent of respondents) and the lack of internal expertise to validate vendors’ claims (53 percent of respondents). Currently, 60 percent of respondents say 25 percent or less of the average annual total IT security budget of $28 million is allocated to AI and ML investments. Forty-two percent of respondents say there is not enough time to integrate AI-based technologies into security workflows.

Many employees outside of IT and IT security distrust decisions made by AI. Fifty-six percent of respondents say it is very difficult to get rank-and-file employees to trust decisions made by AI. Slightly more than half (52 percent of respondents) say it is very difficult to safeguard confidential and personal data used in AI. Forty-six percent of respondents say it is very difficult to comply with privacy and security regulations and mandates. As the deployment of AI matures, organizations may become more confident in their ability to safeguard personal data and comply with privacy and security regulations and mandates.

Despite AI using personal data, few organizations have privacy policies applied to AI. Sixty-five percent of respondents say confidential consumer data is used by their organizations’ AI. However, only 38 percent of respondents say their organizations have privacy policies specifically for the use of AI. Of those that do have policies, 44 percent of respondents say they conduct regular privacy impact assessments and 41 percent of respondents say their organizations appoint a privacy officer to oversee the governance of AI privacy policies. Only 25 percent of respondents say they work with vendors to ensure “privacy by design” in the use of AI technologies.

Organizations are at risk because consumer data is often used by organizations’ AI. According to the research, organizations are having difficulties in safeguarding confidential data. However, 65 percent of respondents say consumer data is being used by their organizations’ AI. Without having the needed safeguards in place, consumer data is at risk of being breached. Seventy percent of respondents say analytics are used by AI.

Organizations should take a unified approach with an organizational task force to manage risk. According to the findings, few organizations have an enterprise-wide strategy for understanding where AI adds value. Less than half of respondents (49 percent) have an organizational task force to manage AI risk and only 37 percent of respondents have one unified approach to managing both AI and privacy security risks.

To read the rest of this study, visit MixMode.com

 
