Monthly Archives: May 2026

2026 Cost of Insider Risks: Global

Ponemon Institute is pleased to present the findings of the 2026 Cost of Insider Risks: Global study. Sponsored by DTEX, this is the seventh benchmark study conducted to understand the financial consequences of insider threats caused by careless or negligent employees or contractors, criminal or malicious insiders, and credential thieves.

As revealed in this research, organizations face increasing costs to respond to insider security incidents. Since the 2018 study, the number of organizations represented in the research has more than doubled, from 156 to 354 in 2025, and the average number of incidents discovered and analyzed increased from 3,269 to 7,490 in 2025. The average time to contain an incident decreased significantly in 2025, to 67 days from 81 days in 2024. However, only 13 percent of incidents were contained in less than 30 days.

This cost study is unique in addressing the core systems and business process-related activities that drive a range of expenditures associated with a company’s response to insider negligence and criminal behaviors. In this research, we define an insider-related incident as one that results in the diminishment of a company’s core data, networks or enterprise systems. It also includes attacks perpetrated by external actors who steal the credentials of legitimate employees/users (i.e., imposter risk).

The first study was conducted in 2016 and focused exclusively on companies in North America. Since then, the research has been expanded to include organizations in EMEA and Asia-Pacific, with global headcounts ranging from fewer than 500 to more than 75,000. In this year’s study, we interviewed 8,750 IT and IT security practitioners in 354 organizations that experienced one or more material events caused by an insider.

The most prevalent insider security incident continues to be caused by careless or negligent employees.

According to the findings, 53 percent of incidents experienced by organizations represented in this research were due to employee negligence, and the average annual cost to remediate these incidents was $10.3 million. Less frequent are incidents involving criminal or malicious insiders (27 percent of incidents) and credential theft (20 percent of incidents). The average cost per malicious or criminal incident is $4.7 million, and the average cost for credential theft is $4.5 million.

As shown in this research, the cost of insider risk varies significantly based on the type of incident. The activities that drive costs are monitoring & surveillance, investigation, escalation, incident response, containment, ex-post analysis and remediation.

The following are the most salient findings from this research.  

The negligent insider is the root cause of most incidents. The average number of negligent insider incidents is 13.8 in this year’s study, and the average cost of each incident is $747,107. Employees can put their organizations at risk in a variety of ways, including not ensuring their devices are secured, not following the organization’s policies for safeguarding sensitive and confidential information, and forgetting to patch and upgrade software to the latest version.

Malicious insiders accounted for an average of 6.3 incidents, at an average cost per incident of $742,125. In the context of this research, malicious insiders are employees or authorized individuals who use their data access for harmful, unethical or illegal activities. Because such insiders can have wider access to an organization’s sensitive and confidential data, the incidents they cause are harder to detect than those caused by external attackers or hackers.

Credential theft continues to be the costliest type of incident, averaging $842,462 per incident, up from $779,707 in 2024. The average number of credential theft incidents increased from 4.8 in 2024 to 5.3 in 2025. The credential thief’s intent is to steal users’ credentials that will grant access to critical data and information. These attackers commonly use phishing.
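As a quick sanity check, the per-incident figures above multiply out to the per-category annual remediation costs reported earlier: average incidents per year times average cost per incident matches each annual total to within rounding. The figures below are the study's; only the arithmetic cross-check is ours.

```python
# Cross-check: avg incidents/year x avg cost per incident should match
# the reported average annual remediation cost for each category.
categories = {
    # name: (avg incidents per year, avg cost per incident, reported annual cost)
    "negligent insider": (13.8, 747_107, 10.3e6),
    "malicious insider": (6.3, 742_125, 4.7e6),
    "credential theft":  (5.3, 842_462, 4.5e6),
}

for name, (incidents, per_incident, reported) in categories.items():
    annual = incidents * per_incident
    # agree with the reported totals to within rounding ($0.1 million)
    assert abs(annual - reported) < 0.1e6
    print(f"{name}: {incidents} x ${per_incident:,} = ${annual / 1e6:.1f}M")
```

For example, 13.8 negligent-insider incidents at $747,107 each works out to roughly $10.3 million per year, matching the annual figure cited above.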

Insider security incidents in 2025 cost more, and their frequency is increasing. According to the 2024 research, 57 percent of companies experienced between 21 and more than 40 incidents per year. This year, 68 percent of organizations did.

The research analyzed the impact security technologies and activities can have on reducing costs. Privileged access management (PAM) can save an average of $6.1 million, and user behavior analytics (UBA) can save $5.1 million.

Technology and disruption or downtime are the most significant financial consequences when dealing with insider incidents. The research presents the average percentage of insider cost for careless or negligent employees, criminal insiders and credential theft according to the following seven consequences: Disruption cost (downtime), direct & indirect labor, technology, cash outlays, process/workflow changes, revenue losses and overhead.

Technology cost (30 percent of the average cost of financial consequences) covers the technologies used to respond to the insider incident, including the amortized value of and licensing for the software and hardware deployed. Business disruption (19 percent of the average cost of financial consequences) includes diminished employee/user productivity.

Companies spend the most on containment of insider security incidents: an average of $247,587 is spent to contain the consequences of an incident. Escalation has the lowest average cost, at $39,728. The faster containment occurs, the lower the cost: if containment takes more than 90 days, the average cost is $21.9 million; if it takes less than 30 days, the average cost is $14.2 million.

North American companies are spending more than the average annualized cost of $19.5 million on activities that deal with insider threats. Companies in North America experienced the highest average total cost at $24 million. European companies had the next highest cost at $18.6 million.

Health and pharma have the highest average activity costs. The average activity cost for health and pharma is $28.8 million. Technology and software are the next highest at $24.2 million.

Organizational size affects the cost. Large organizations with a headcount of more than 75,000 spent an average of $28.4 million over the past year to resolve insider-related incidents, while smaller organizations with a headcount below 500 spent an average of $8.9 million to deal with the consequences of an insider incident.

Five signs that your organization is at risk

  • Employees are not trained to fully understand and apply laws, mandates, or regulatory requirements related to their work and that affect the organization’s security.
  • Employees are unaware of the steps they should take to ensure that the devices they use—both company-issued and BYOD—are secured at all times.
  • Employees are sending highly confidential data to an unsecured location in the cloud, exposing the organization to risk.
  • Employees break your organization’s security policies to simplify tasks.
  • Employees expose your organization to risk if they do not keep devices and services patched and upgraded to the latest versions at all times.

To read the full findings of this report, visit DTEX’s website.

Criminals impersonate doctor with deepfake ads, sell supplements. Could you tell?

Bob Sullivan

Dr. Maurice Sholas has a beautiful, challenging calling — he cares for very sick children.  He takes on the saddest of cases, and works with families so kids with spina bifida or traumatic injuries can still “win” at life. For some, that means gaining the ability to visit the bathroom independently.

But lately, Sholas has been put in a no-win situation by artificial intelligence.  His likeness was used to create a deepfake video hawking supplements — specifically targeting Black consumers.  Try as he might, he still hasn’t been able to remove all the various videos that have landed on places like TikTok and Twitter.

So instead of caring for very sick children, the Harvard-educated New Orleans doctor now spends time fighting AI and learning about intellectual property law.

“What’s frustrating is that it costs money, time, effort, and relationships to protect something that should be intrinsically mine,” he told me during our interview for The Perfect Scam podcast I host for AARP.

There’s been a lot of talk about the problem of deepfake videos and politics — how activists might change an election by, quite literally, putting words into a leader’s mouth. I believe consumers have become relatively sophisticated at spotting the more outlandish fakes — President Trump wearing Pope garments, for example. On the other hand, fake ads — especially those involving less famous figures — can be harder to discern. And they might ultimately cause more damage.

Sholas told me he knows of at least one person who bought the supplements based on the fake videos. After telling his story on local television, a victim reached out.

Sholas is not identified in the video; his appearance is altered slightly, and a fake voice is dubbed onto it. But his lab coat nametag is visible.

There is very little a victim can do to get fake content removed from the Internet.  Sholas first reached out to the account that posted the videos, which ultimately blocked him. The very tool used to abuse his identity was now being used to prevent him from defending himself. Initially, he says, social media companies ignored his complaints.  Later, after the local story aired, some services took action, but by then, copies of the video had spread across multiple services.  He consulted a lawyer and was redirected to a PR company.

“They said the best thing you could do is hire a PR firm basically to go out there and do a sweep of the internet and push positive content to counteract whatever misinformation is there,” he said. That kind of search engine optimization could cost up to $20,000, he was told. Instead, he has taken to posting a series of self-made content.

“When someone borrows, to use a kind word, or steals, to use a real word, it puts me at risk, it puts my medical license at risk, and it puts my livelihood at risk. And to protect all of that, there’s nothing I can do as a small guy but spend more money,” he said.

Fake video is far more pervasive on social media than most people realize, says Frank McKenna, chief fraud strategist of a company called Point Predictive. He’s also the author of the popular Frank on Fraud newsletter.

“I see these all over TikTok, all over Instagram, all over Facebook. They’re inundating people’s news feeds; the social media platforms I don’t think are doing enough to kind of control the problem,” he told me.

Dr. Maurice Sholas shows a reporter the deepfake videos he found. (WLTV.com)

NBC’s Al Roker was actually the victim of a similar deepfake attack about a year ago. You can watch his interview about it at this link.

“I think people probably don’t realize how many deep fakes they’re seeing as they scroll through social media. From my experience, it’s at least half the videos that you’re seeing … there’s some element of AI generation in those videos. And that’s only going to get worse,” he said. “The case will be that most of the content you’re looking at online is AI-assisted in some way … So people are going to have to get accustomed to the fact that they’re going to have to question pretty much everything. … These other celebrity deep fakes, I think, are going to surprise a lot of people, because they’re becoming more and more common.”

How hard is it to make fake videos like the ones that use Sholas’ likeness? Not hard at all, McKenna says.

“Using information off of YouTube videos, Instagram videos, or Facebook videos that you post, the criminals and scammers can take that content and put those into AI generating videos, and make you say anything that they want,” he said. “So just a few seconds of video can create these…they call them AI avatars, and they can basically make you sell vitamins or make you sell crypto investments and things like that. So it’s not hard at all, anybody can do it and a lot of scammers are.”

And, perhaps the most alarming part of this dark new trend — consumers are over-confident in their ability to spot fakes.

“The thing about AI deep fakes is 60 percent of the population thinks they can spot them, but in reality, I think a study … found that only .1% of people can actually identify those deep fakes,” he said.