INTRODUCTION: The overall threat from China and other adversaries has only grown over time, accelerated and exacerbated by technological innovation and by their access to AI. In a January 2025 article, Jen Easterly, former Director of the Cybersecurity and Infrastructure Security Agency (CISA), lays out some of the risks to US critical infrastructure. CISA defines critical infrastructure as encompassing 16 sectors, from utilities to government agencies to banks and the entire IT industry. Outages happen consistently across all sectors and vulnerabilities are everywhere, so the key for all cybersecurity programs is to keep improving early detection and early response.
After the CrowdStrike outage in 2024 that affected thousands of hospitals, airports and businesses worldwide, Easterly said: “We are building resilience into our networks and our systems so that we can withstand a significant disruption or at least drive down the recovery time to be able to provide services, which is why I thought the CrowdStrike incident — which was a terrible incident — was a useful exercise, like a dress rehearsal, for what China may want to do to us in some way and how we react if something like that happens. We have to be able to respond very rapidly and recover very rapidly in a world where [an issue] is not reversible.”
What will organizations do to combat persistent threats and cyberattacks from increasingly sophisticated adversaries? A goal of this MixMode-sponsored research is to show how industry can leverage AI in its cybersecurity plans to detect attacks earlier (be predictive) and to recover from attacks more quickly.
Organizations are in a race to adopt artificial intelligence (AI) technologies to strengthen their ability to stop constant threats from cybercriminals. This is the second annual study sponsored by MixMode on this topic. The purpose of this research is to understand how organizations are leveraging AI to effectively detect and respond to cyberattacks.
Ponemon Institute surveyed 685 US IT and IT security practitioners in organizations that have adopted AI in some form. These respondents are familiar with their organization’s use of AI for cybersecurity and have responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.
Since last year’s study, organizations have not made progress in their ability to integrate AI security technologies with legacy systems and streamline their security architecture to increase AI’s value. More respondents believe it is difficult to integrate AI-based security technologies with legacy systems, an increase from 65 percent to 70 percent of respondents. Sixty-seven percent of respondents, a slight increase from 64 percent of respondents, say their organizations need to simplify and streamline their security architecture to obtain maximum value from AI. Most organizations continue to use AI to detect attacks across the cloud, on-premises and hybrid environments.
The following research findings reveal the benefits and challenges of AI.

How organizations are using AI to improve their security posture
In just one year since the research was first conducted, organizations are reporting that their security posture has significantly improved because of AI. The biggest changes are improving the ability to prioritize threats and vulnerabilities (an increase from 50 percent to 56 percent of respondents), increasing the efficiency of the SOC team (from 43 percent to 51 percent) and increasing the speed of analyzing threats (from 36 percent to 43 percent).
Since 2024, the maturity of AI programs has increased. Fifty-three percent of organizations have achieved full adoption stage (31 percent of respondents) or mature stage (22 percent of respondents). This is an increase from 2024 when 47 percent of respondents said they had reached the full adoption stage (29 percent of respondents) or mature stage (18 percent of respondents).
AI-based security technologies increase productivity and job satisfaction. Seventy percent of respondents say AI increases the productivity of IT security personnel, an increase from 66 percent in 2024. Fifty-one percent of respondents say AI improves the efficiency of junior analysts so that senior analysts can focus on critical threats and strategic projects. Sixty-nine percent of respondents say since the adoption of AI, job satisfaction has improved because of the elimination of tedious tasks, an increase from 64 percent.
Forty-four percent of respondents are using AI-powered cybersecurity tools or solutions. By leveraging advanced algorithms and machine learning techniques, AI-powered systems analyze vast amounts of data, identify patterns and adapt their behavior to improve performance over time.
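To make the pattern-learning idea above concrete, the following is a minimal, illustrative sketch of anomaly detection on security telemetry: an unsupervised model is fit on historical activity and flags new activity that deviates from the learned baseline. The feature set, numbers and library choice (scikit-learn's IsolationForest) are assumptions made for illustration; they do not describe MixMode's product or the tools used by the surveyed organizations.

```python
# Illustrative sketch only: learn "normal" behavior from data, flag deviations.
# Features and values are hypothetical, not drawn from the report.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-connection features: bytes sent, bytes received,
# duration in seconds, failed logins in the past hour.
baseline = rng.normal(loc=[5_000, 20_000, 30, 0.2],
                      scale=[1_500, 6_000, 10, 0.5],
                      size=(5_000, 4))

# Fit on historical traffic so the model learns its patterns.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(baseline)

# Score new activity; -1 marks connections that deviate from learned behavior.
new_activity = np.array([
    [4_800, 19_500, 28, 0],    # resembles the baseline
    [900_000, 1_200, 2, 14],   # large upload plus failed logins
])
print(model.predict(new_activity))  # expected output along the lines of [ 1 -1 ]
```

In practice such a model would be retrained as behavior drifts, which is one way a system can adapt its behavior to improve performance over time.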
Forty-three percent of respondents are using pre-emptive security tools to stay ahead of cybercriminals. Pre-emptive security tools apply AI-based data analysis to cybersecurity so organizations can anticipate and prevent future attacks. The benefits include the ability to pre-emptively deter threats and minimize damages, prioritize tasks effectively and address the most important business risks first. Pre-emptive security data can guide response teams and offer insights into an attack's objectives, potential targets and more. The result is continuous improvement that ensures more accurate forecasts and reduces the costs associated with handling attacks.
Respondents say pre-emptive security is used to identify patterns that signal impending threats (60 percent), assess risks to identify emerging threats and their potential impact (57 percent) and harness vast amounts of online metadata from various sources as an input to predictive analytics (52 percent).
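As a rough illustration of predictive analytics over such metadata, the sketch below fits a simple classifier on hypothetical historical data and scores current assets by their predicted likelihood of being attacked, so the highest-risk items can be addressed first. Every feature, label and number here is an assumption for illustration, not data from the study or any particular pre-emptive security product.

```python
# Illustrative sketch only: score assets by predicted attack likelihood.
# Features, labels and weights are hypothetical, not taken from the report.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)

# Hypothetical metadata per asset: threat-intel mentions, unpatched known CVEs,
# and recent phishing emails targeting the asset's owners.
X = rng.poisson(lam=[2, 5, 1], size=(2_000, 3)).astype(float)

# Hypothetical historical outcome: True if the asset was attacked within 30 days.
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] + 0.5 * X[:, 2] + rng.normal(size=2_000)) > 3

clf = LogisticRegression().fit(X, y)

# Risk scores for two current assets; higher scores are handled first.
current = np.array([[1.0, 2.0, 0.0], [8.0, 12.0, 4.0]])
for asset, score in zip(["asset-a", "asset-b"], clf.predict_proba(current)[:, 1]):
    print(f"{asset}: predicted attack risk {score:.2f}")
```

The same kind of score can feed the prioritization and forecasting benefits described above.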
Pre-emptive security will decrease the ability of cybercriminals to direct targeted attacks. Fifty-two percent of respondents in organizations that use pre-emptive security say that without it cybercriminals will become more successful at directing targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. Forty-nine percent say investments are being made in pre-emptive AI to stop AI-driven cybercrimes.
Fifty-eight percent of respondents say their SOCs use AI technologies. The primary benefit of an AI-powered SOC is that alerts are resolved faster, according to 57 percent of respondents. In addition to faster resolution of alerts, 55 percent of respondents say it frees up analyst bandwidth to focus on urgent incidents and strategic projects. Fifty percent of respondents say it applies real-time intelligence to identify patterns and detect emerging threats.
An AI-powered SOC is effective in reducing threats. Human analysts are effective as the final line of defense in the AI-powered SOC. Fifty-seven percent of respondents say AI in the SOC is very or highly effective in reducing threats and 50 percent of respondents say their human analysts are very or highly effective as the final line of defense in the AI-powered SOC.
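The triage flow these respondents describe can be pictured with a small, hypothetical sketch: an AI layer supplies a per-alert score, the score is blended with business context to rank the queue, and a human analyst still makes the final call on the top of the queue. The field names and weights below are invented for illustration and are not from the report or any SOC platform.

```python
# Illustrative sketch only: rank alerts so analysts see the most urgent first.
# Scores, weights and fields are hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    model_score: float      # 0-1 likelihood of malicious activity from the AI layer
    asset_criticality: int  # 1 (low) through 5 (business-critical system)

def triage_priority(alert: Alert) -> float:
    """Blend the AI score with business context so urgent alerts surface first."""
    return 0.7 * alert.model_score + 0.3 * (alert.asset_criticality / 5)

alerts = [
    Alert("a-101", model_score=0.35, asset_criticality=2),
    Alert("a-102", model_score=0.91, asset_criticality=5),
    Alert("a-103", model_score=0.88, asset_criticality=1),
]

# The ranked queue goes to a human analyst for the final verdict.
for alert in sorted(alerts, key=triage_priority, reverse=True):
    print(alert.alert_id, round(triage_priority(alert), 2))
```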
More organizations are creating one unified approach to managing both AI security and privacy risks, an increase from 37 percent to 52 percent of respondents. In addition, 58 percent of respondents say their organizations identify vulnerabilities and what can be done to eliminate them.
The barriers and challenges to maximizing the value from AI
While an insufficient budget to invest in AI technologies continues to be the primary governance challenge, more organizations say an increase in internal expertise is needed to validate vendors’ claims. The lack of internal expertise to validate vendors’ claims increased significantly from 53 percent to 59 percent of respondents. One of the key takeaways from the research is that 63 percent of respondents say the decision to invest in AI technologies is based on the extensiveness of the vendors’ expertise.
As the number of cyberattacks increases, especially malicious insider incidents, organizations lack confidence in their ability to prevent risks and threats. Fifty-one percent of respondents say their organizations had at least one cyberattack in the past 12 months, an increase from 45 percent of respondents in 2024.
Only 42 percent say their organizations are very or highly effective in mitigating risks, vulnerabilities and attacks across the enterprise. The attacks that increased since 2024 are malicious insiders (53 percent vs. 45 percent), compromised/stolen devices (40 percent vs. 35 percent) and credential theft (53 percent vs. 49 percent). The primary types of attacks in 2024 and 2025 are phishing/social engineering and web-based attacks.
The effectiveness of AI technologies is diminished by interoperability issues and an increased reliance on legacy IT environments. The barriers to the effectiveness of AI-based security technologies are interoperability issues (63 percent, an increase from 60 percent of respondents), the inability to apply AI-based controls that span the entire enterprise (59 percent vs. 61 percent of respondents) and the inability to create a unified view of AI users across the enterprise (56 percent vs. 58 percent of respondents). The most significant trend is the heavier reliance on legacy IT environments, an increase from 36 percent to 45 percent of respondents.
Complexity challenges the preparedness of cybersecurity teams to work with AI-powered tools. Only 42 percent of respondents say their cybersecurity teams are highly prepared to work with AI-powered tools. Fifty-five percent of respondents say AI-powered solutions are highly complex.
AI continues to make it difficult to comply with privacy and security mandates and to safeguard confidential and personal data. Forty-eight percent of respondents say it is highly difficult to achieve compliance and 53 percent of respondents say it is highly difficult to safeguard confidential and personal data in AI.
To read key findings and the rest of this report, visit MixMode’s website.