The 2025 Study on Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care

Larry Ponemon

Healthcare organizations’ ability to protect confidential patient data and ensure the highest quality of medical care is increasingly at risk, underscoring the need for a more human-centric security approach, our Cyber Insecurity in Healthcare study has found.

This fourth annual report was conducted to determine the healthcare industry’s effectiveness in reducing human-targeted cybersecurity risks and disruptions to patient care. With sponsorship from Proofpoint, Ponemon Institute surveyed 677 IT and IT security practitioners in healthcare organizations who are responsible for participating in such cybersecurity strategies as setting IT cybersecurity priorities, managing budgets and selecting vendors and contractors.

Healthcare organizations remain frequent targets, with cyberattacks continuing to disrupt patient care. According to the research, 93 percent of organizations surveyed experienced at least one cyberattack in the past 12 months. For organizations in that group, the average number of cyberattacks was 43, up from 40 in 2024.

This research analyzed four types of cyberattacks that took place over a two-year period: cloud/account compromises, supply chain attacks, ransomware and business email compromise (BEC)/spoofing/impersonation. Among the organizations that experienced these four types of cyberattacks, an average of 72 percent report disruption to patient care, a 3-point jump from 69 percent in 2024.

While the cost of cyberattacks has declined, they remain a significant financial burden. We asked respondents to estimate the cost of the single most expensive cyberattack experienced in the past 12 months, from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost for the most expensive cyberattack was $3.9 million, down from $4.7 million in 2024 but still substantial. This includes all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

Operational disruptions stemming from system availability problems remain the most expensive consequence. The following is a breakdown of the five cybersecurity cost categories for the single most expensive cyberattack as well as their average cost:

  • Disruption to normal healthcare operations cost an average of $1,210,172, a decrease from $1,469,524 in 2024.
  • Users’ idle time and lost productivity dropped to $858,832 from $995,484 in 2024. These costs were due to downtime or system performance delays.
  • The cost of the time required to ensure the impact on patient care is corrected decreased to $702,680 from an average of $853,272 in 2024.
  • The damage or theft of IT assets and infrastructure averaged $624,605, down slightly from $711,060 in 2024.
  • Remediation and technical support activities, including forensic investigations, incident response activities, help desk and delivery of services to patients, saw the largest drop (28.6 percent), from $711,060 in 2024 to $507,491 in 2025.
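Taken together, the five category averages reconcile with the headline totals reported earlier ($3.9 million in 2025, down from $4.7 million in 2024). A minimal sanity check, using the figures as reported:

```python
# Reconcile the five cost-category averages with the report's headline
# totals for the single most expensive cyberattack.
costs_2025 = {
    "disruption_to_operations": 1_210_172,
    "idle_time_and_lost_productivity": 858_832,
    "correcting_patient_care_impact": 702_680,
    "it_asset_damage_or_theft": 624_605,
    "remediation_and_support": 507_491,
}
costs_2024 = {
    "disruption_to_operations": 1_469_524,
    "idle_time_and_lost_productivity": 995_484,
    "correcting_patient_care_impact": 853_272,
    "it_asset_damage_or_theft": 711_060,
    "remediation_and_support": 711_060,
}
total_2025 = sum(costs_2025.values())
total_2024 = sum(costs_2024.values())
print(f"2025: ${total_2025 / 1e6:.1f}M")  # → 2025: $3.9M
print(f"2024: ${total_2024 / 1e6:.1f}M")  # → 2024: $4.7M
```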

For the first time, this year’s study examined plans to secure clinical operations in the cloud. Thirty percent of respondents say their organizations have moved clinical applications to the cloud. Forty-five percent say their organizations will move clinical applications to the cloud in the next six months (9 percent), within the next year (8 percent), in the next one to two years (15 percent) or eventually (13 percent). This accelerating shift toward cloud-hosted clinical systems underscores the urgency of addressing cloud/account compromise risks, given the potential impact on patient care and service continuity.

The report analyzes four types of cyberattacks that occurred over the past two years and their impact on healthcare organizations, patient safety and patient care delivery:

Cloud/account compromise. A cloud/account compromise results from criminals obtaining access to credentials (e.g., user IDs and passwords). The consequence is typically an account takeover, where criminals use those validated credentials to commit fraud and transfer sensitive data to systems under their control.

For the fourth consecutive year, frequent attacks against the cloud make it the top cybersecurity threat. Nearly two-thirds of respondents (64 percent) say their organizations are vulnerable or highly vulnerable to a cloud/account compromise. Seventy-two percent say their organizations have experienced cloud/account compromises, an increase from 69 percent in 2024. These organizations had an average of 21 such compromises in the past two years.

Supply chain attacks. Supplier impersonation and compromise attacks occur when a malicious actor impersonates or successfully compromises an email account in the supply chain. The attacker then observes, mimics and uses historical information to craft scenarios to spoof employees in the supply chain.

Fewer organizations are experiencing supply chain attacks. Forty-four percent of respondents say their organizations experienced an attack against their supply chains, a significant decline from 68 percent in 2024. These organizations experienced an average of four supply chain attacks in the past two years. Fifty-seven percent say their organizations are very or highly vulnerable to supply chain attacks.

Ransomware. Ransomware is a sophisticated piece of malware that blocks the victim’s access to files. While there are many strains of ransomware, they generally fall into two categories. Crypto ransomware encrypts files on a computer or mobile device, making them unusable. It takes the files hostage, demanding a ransom in exchange for the decryption key needed to restore the files. Locker ransomware is a virus that blocks basic computer functions, essentially locking the victim out of their data and files located on the infected devices. Instead of targeting files with encryption, cybercriminals demand a ransom to unlock the device.

Fewer organizations are paying a ransom, but the amount paid has increased. The costliest ransom paid (extrapolated value) was $1.2 million, up from $1.1 million in 2024 and a staggering 56 percent increase from $771,905 in 2022, when we first began tracking this data. This continued rise underscores how threat actors are demanding and receiving larger payouts even as payment rates declined (33 percent in 2025 vs. 36 percent in 2024). Fifty-five percent of respondents believe their organizations are vulnerable or highly vulnerable to a ransomware attack. In the past two years, organizations that had ransomware attacks (61 percent) experienced an average of five such attacks. The combination of threat exposure and escalating ransom demands creates operational and financial risk for healthcare organizations.

Business email compromise (BEC)/spoofing/impersonation. BEC attacks are a form of cybercrime that uses email fraud to attack healthcare organizations to achieve a specific outcome. Examples include invoice scams, spear phishing that are designed to gather data for other criminal activities, attorney impersonations and CEO fraud.

Concerns about these attacks have decreased significantly since 2022, when 64 percent of respondents said their organizations were very or highly vulnerable. In the 2025 research, 53 percent say their organizations are vulnerable or highly vulnerable to a BEC/spoofing/impersonation incident, a slight increase from 52 percent in 2024. And 62 percent say their organizations experienced an average of four attacks in the past two years. In 2024, 57 percent said they had an average of four attacks in the past two years.

From breach to bedside: The persistent link between cyberattacks and patient safety

As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Among the organizations that experienced the four types of cyberattacks in the study, an average of 72 percent report disruption to patient care, a 3-point jump from 69 percent in 2024.

An average of 54 percent report poor patient outcomes due to increases in medical procedure complications, an average of 53 percent report an increase in length of stay and an average of 29 percent say patient mortality rates increased.

The following are additional trends in how cyberattacks have affected patient safety and patient care delivery.

  • Supply chain attacks continue to be the most likely to affect patient care. While fewer organizations in this year’s research had a supply chain attack (44 percent in 2025 vs. 68 percent in 2024), 87 percent of those respondents say it disrupted patient care, an increase from 82 percent in 2024. Patients were primarily impacted by delays in procedures and tests that resulted in poor outcomes (51 percent) and an increase in complications from medical procedures (49 percent). The percentage of respondents reporting increased mortality rates rose significantly, from 26 percent in 2024 to 32 percent in 2025.
  • BEC/spoofing/impersonation attacks cause delays in procedures and tests. Sixty-two percent of respondents say their organizations experienced a BEC/spoofing/impersonation incident and had an average of four attacks. Of these respondents, 70 percent say a BEC/spoofing/impersonation attack against their organizations disrupted patient care. Sixty-five percent say the attacks caused delays in procedures and tests that have resulted in poor outcomes and 55 percent say it increased complications from medical procedures.
  • Ransomware attacks cause delays in patient care. Sixty-one percent of respondents say their organizations experienced an average of five successful ransomware attacks. Sixty-seven percent say ransomware attacks had a negative impact on patient care. Of these respondents, 67 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients. Fifty-six percent say it resulted in delays in procedures and tests that disrupted patient care.
  • Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Seventy-two percent of respondents say their organizations experienced an average of 21 cloud/account compromises, a slight increase from 20 in 2024. In this year’s study, 61 percent say the cloud/account compromises resulted in disruption in patient care, an increase from 57 percent in 2024. Sixty-one percent say cloud/account compromises increased complications from medical procedures and 52 percent say it resulted in longer length of stay. The tools most often attacked are text messaging (59 percent), Zoom/Skype/video conferencing (54 percent) and email (45 percent).
  • Data loss or exfiltration disrupts patient care and can increase mortality rates. Ninety-six percent of organizations in this research had at least two data loss or exfiltration incidents involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 18 such incidents in the past two years and 55 percent of respondents say they impacted patient care. Of these respondents, 54 percent say it increased the mortality rate and 36 percent say it caused delays in procedures and tests that resulted in poor outcomes. 

The primary root causes of these incidents are employee negligence in failing to follow policies (35 percent of respondents), privileged access abuse (25 percent) and employees sending PII or PHI to an unintended recipient via email (25 percent).

For the fourth year in a row, the data reinforces a sobering reality: cyber threats aren’t just IT security issues; they’re clinical risks. When care is delayed, disrupted or compromised due to a cyberattack, patient outcomes are impacted and lives are potentially put at risk.

Other key trends in cyber insecurity

Concerns about insecure mobile apps (eHealth) remained the top issue for the second consecutive year, though respondents who cited this issue decreased from 59 percent in 2024 to 55 percent in 2025. Organizations are less worried about employee-owned mobile devices or BYOD (a decrease from 53 percent in 2024 to 49 percent in 2025) and cloud/account compromise (a decrease from 55 percent in 2024 to 49 percent in 2025), rounding out the top three spots. Thirty-eight percent of respondents identified generative AI tools as a cyber concern, a new category in this year’s survey.

The top two barriers to achieving a strong cybersecurity posture continue to be a lack of in-house expertise and clear leadership. Forty-three percent of respondents cite insufficient in-house expertise, while 40 percent point to a lack of clear leadership. Fewer organizations view limited budgets as a primary deterrent, with 37 percent citing it in 2025, down from 40 percent in 2024. The annual IT budget in 2025 is $66.2 million. Of that, 21 percent is allocated to information security.

Organizations continue to rely on security training and awareness programs to reduce risks caused by employees. But are they effective? Negligent employees pose a significant risk to healthcare organizations, and more organizations (76 percent in 2025 vs. 71 percent in 2024) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats. Sixty-three percent say they conduct regular training and awareness programs, 51 percent say their organizations monitor the actions of employees and 47 percent say their organizations use simulations of phishing attacks.

Multi-factor authentication (MFA) and secure email gateway are the top two technologies used to reduce email-based attacks. The use of MFA increased from 49 percent of respondents in 2024 to 54 percent in 2025. This is followed by secure email gateway (SEG) (52 percent in 2025 vs. 45 percent in 2024) and patch & vulnerability management (51 percent in 2025 vs. 52 percent in 2024).

Privileged access management (PAM) is the technology most often used to prevent identity risk and lateral movement in the network (59 percent of respondents). This is followed by identity and access management (53 percent of respondents) and alerts from SIEM to gain visibility (50 percent of respondents). 

Trends in AI and machine learning in healthcare

AI can increase the productivity of IT security personnel and reduce the time and cost of patient care and administrators’ work. For the second year, we include in the research the impact AI is having on security and patient care. Fifty-seven percent of respondents say their organizations have embedded AI in cybersecurity only (30 percent) or in both cybersecurity and patient care (27 percent). Fifty-five percent of these respondents say AI is very effective in improving organizations’ cybersecurity posture.

Fifty-five percent of respondents agree or strongly agree that AI-based security technologies will increase the productivity of their organizations’ IT security personnel. Fifty-six percent of respondents agree or strongly agree that AI simplifies patient care and administrators’ work by performing tasks that are typically done by humans but in less time and at a lower cost.

Only 40 percent of respondents use AI and machine learning to understand human behavior. Of these respondents, 55 percent say understanding human behavior to protect email is very important.

While AI offers benefits, there are issues that may deter widespread acceptance. Sixty percent of respondents say it is difficult or very difficult to safeguard confidential and sensitive data used in organizations’ AI.

AI technologies are maturing and stabilizing. While the No. 1 challenge to the effectiveness of AI-based security technologies is interoperability (34 percent of respondents), the challenge of a lack of mature and/or stable AI technologies decreased from 34 percent of respondents to 28 percent. The second most difficult challenge is errors and inaccuracies in data inputs ingested by AI technology (33 percent of respondents).

AI-based data loss prevention (DLP) is effective in preventing data loss incidents caused by employees and malicious insiders. AI-based DLP refers to using artificial intelligence and machine learning techniques to enhance DLP solutions, making them more effective at identifying and preventing sensitive data from being leaked or misused. This includes automatically classifying sensitive data, detecting anomalous user behavior and adapting to evolving threats.
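One of those capabilities, anomalous-behavior detection, can be illustrated with a deliberately simplified sketch. Real AI-based DLP products use far richer models; this toy example (hypothetical users and counts) just flags a user whose file-access volume deviates sharply from the group baseline:

```python
# Toy sketch of one AI-based DLP building block: flagging anomalous user
# behavior. Uses a modified z-score based on the median absolute deviation
# (MAD), which stays robust even when the outlier itself inflates the spread.
# All names and counts below are hypothetical.
from statistics import median

def flag_anomalies(access_counts, threshold=3.5):
    """Return users whose file-access count has a modified z-score
    above `threshold` relative to the group median."""
    counts = list(access_counts.values())
    med = median(counts)
    mad = median(abs(n - med) for n in counts)
    if mad == 0:  # no spread at all; nothing stands out
        return []
    return sorted(user for user, n in access_counts.items()
                  if 0.6745 * abs(n - med) / mad > threshold)

# Hypothetical daily file-access counts per user
daily = {"alice": 40, "bob": 35, "carol": 38, "dave": 42, "mallory": 400}
print(flag_anomalies(daily))  # → ['mallory']
```

The MAD-based score is preferable to a plain z-score here because a single large exfiltration spike would otherwise inflate the standard deviation enough to hide itself.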

Twenty-three percent of respondents say their organizations have adopted AI-based DLP and another 29 percent plan to adopt it in six months (14 percent) or in one year (15 percent). Fifty-six percent of respondents say AI-based DLP is very or highly effective in preventing employee data loss incidents and 50 percent say this technology is very or highly effective in preventing malicious insider data loss incidents.

To read the full report, visit Proofpoint’s website. 

Facebook plays role in one-third of all scams — and earns 10% of its revenue that way

Bob Sullivan

Only an egghead inside a Big Tech company would devise a plan to fight crime that involves charging criminals more to access victims.  Behavioral economics, right!

When you make 10 percent of your revenue from crime, how else would you try to stop it? Reuters recently reported these stunning facts, based on internal Meta documents, in a story you should really read immediately. 

If you’ve ever reported an ongoing crime to Facebook/Meta — say, your account has been hijacked by a crypto scammer — you know the firm largely ignores these active crime scenes.  Well, the documents Reuters examines show Facebook ignores 95% of those complaints. I’ve been writing about this problem for years. And years. Soldiers’ accounts are often stolen and used for romance scams, for example.  This is heartbreaking to the financial victim, but also utterly maddening and violating to the soldier whose picture and profile are used to defraud people. Not only does Meta not care, it makes bank off these crimes, Reuters says.

“Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show,” the story says. You might remember the reporter as the one who broke the Frances Haugen whistleblower story — she alleged that Instagram had piles of research showing it was harming kids, but did little to stop that.

Meta’s fraud filters are so promiscuous that they allow ads even if analysis shows 94% confidence the ad is a scam, the story says.

Even worse — Facebook grooms victims. Facebook’s algorithm pushes people into the arms of criminals. Users who click on scam ads get a healthy helping of more scam ads.

This allegation that Facebook profits from scams has been around for a long time.  I covered this lawsuit in 2021 which claimed that — not only does Meta cash in on scams — but it has actively recruited criminals and their posts, and has even held special training for them.

Somewhere along the line, you’ve heard the cynical phrase that facing government fines for breaking the law is “just the cost of doing business.”  Well, these documents put hard numbers on that concept. While Facebook has been hit by some of the largest regulatory fines in history, the company earns so much cash that it just isn’t afraid of fines. Again, from the story:

  • “Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta’s revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that ‘present higher legal risk,’ the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds ‘the cost of any regulatory settlement involving scam ads.’”

This was the theme in a podcast series called “Too Big to Sue” I hosted for Duke University. Big Tech is so powerful and rich now that it really isn’t subject to regulation by nation-states.  That’s why the push for platform accountability is so crucial.

Other bombshells in this story:

  • A May 2025 presentation by its safety staff estimated that the company’s platforms were involved in a third of all successful scams in the U.S.
  • Meta has also placed restrictions on how much revenue it is willing to lose from acting against suspect advertisers, the documents say. In the first half of 2025, a February document states, the team responsible for vetting questionable advertisers wasn’t allowed to take actions that could cost Meta more than 0.15% of the company’s total revenue. That works out to about $135 million out of the $90 billion Meta generated in the first half of 2025.
  • Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them.
  • Even when advertisers are caught red-handed, the rules can be lenient, the documents indicate. A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders – known as “High Value Accounts” – could accrue more than 500 strikes without Meta shutting them down, other documents say.
  • To advertise on Meta’s platforms, a business has to compete in an online auction. Before the bidding, the company’s automated systems calculate the odds that an advertiser is engaged in fraud. Under Meta’s new policy, likely scammers who fall below Meta’s threshold for removal would have to pay more to win an auction. Documents from last summer called such “penalty bids” a centerpiece of Meta’s efforts to reduce scams. Marketers suspected of committing fraud would have to pay Meta more to win ad auctions, thus impacting their profits and reducing the number of users exposed to their ads.
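A quick check shows the internal figures in these bullets are arithmetically consistent (all dollar amounts are the rounded values as reported):

```python
# Check the arithmetic behind the Reuters figures quoted above.
h1_2025_revenue = 90e9          # Meta revenue, first half of 2025
enforcement_cap = 0.0015        # 0.15% cap on revenue lost to enforcement
print(h1_2025_revenue * enforcement_cap)  # ≈ $135 million, as reported

high_risk_scam_rev_6mo = 3.5e9  # "higher legal risk" scam-ad revenue per six months
expected_fines = 1e9            # anticipated regulatory penalties
# Annualized, high-risk scam-ad revenue alone is ~7x the expected fines,
# which is why fines read as "just the cost of doing business."
print(high_risk_scam_rev_6mo * 2 / expected_fines)  # → 7.0
```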

Here’s a portion of the response Meta gave Reuters for this story: Meta spokesman Andy Stone said the documents seen by Reuters “present a selective view that distorts Meta’s approach to fraud and scams.” The company’s internal estimate that it would earn 10.1% of its 2024 revenue from scams and other prohibited ads was “rough and overly-inclusive,” Stone said. The company had later determined that the true number was lower, because the estimate included “many” legitimate ads as well, he said. He declined to provide an updated figure.

From homeless to helping North Korea’s weapons program; the vexing problem of laptop farms

Source: Department of Justice

Bob Sullivan

It’s a dark, cluttered room full of bookshelves, each shelf jam-packed with laptop computers. There are dozens of them humming away, lights flickering. And each one has a Post-It note attached with a single name on it. And there’s a pink purse just hanging off the side of one of those shelves. What is that purse? And what do those laptops have to do with funding North Korea’s weapons program? That purse belonged to a woman named Christina Chapman, and those laptops … well this is a rags to riches to rags story you might not believe.

Fortunately, the Wall Street Journal’s Bob McMillan recently spoke to me for an episode of The Perfect Scam to help explain all this.

“The North Koreans, if they have a superpower, it’s identifying people who will do almost anything in TaskRabbit style for them,” he told me. And that’s where Christina Chapman comes in.

When this story begins, Chapman is a down-on-her-luck 40-something woman — at times homeless, at times living in a building without working showers — who makes a Hail-Mary pass by enrolling in a computer coding school. That doesn’t work either, at first.  She chronicles her troubles in a series of TikTok videos where she shares her increasing frustration, even desperation.

“I need some help and I don’t know really how to do this. Um, I’m classified as homeless in Minnesota,” she says in one. “I live in a travel trailer. I don’t have running water. I don’t have a working bathroom. And now I don’t have heat. Um, I don’t know if anybody out there is willing to help…”

But then a company reaches out and offers her a job working as the “North American representative” for their international firm.  Her job is to manage a series of remote workers.  The opportunity seems like a godsend.  Soon, she’s able to move into a real home and eventually go on some dream vacations.   At one point, she goes to Drunken Shakespeare and gets to be Queen for a day. For a night, anyway.

But underneath it all, she knows something is wrong. The job requires her to receive laptop computers for “new hires” and set them up on her home network. That’s why there’s all those racks and all those Post-it notes.  The home office appears in some of her TikTok videos, and it looks a bit like something out of The Matrix. Every computer represents an employee. And many of them work at various U.S. companies… hundreds of companies.  And instead of logging directly into their networks, they log into Chapman’s network, and she relays their traffic to the companies they work for.

That’s not the only suspicious thing about Chapman’s job. Each new employee must be set up with a new identity. She files I-9 eligibility forms for each one and often accepts paychecks on their behalf.

Eventually, Chapman comes to understand that she’s being deceptive and breaking the law. Clearly, she’s helping people who are ineligible to work in the U.S. evade workplace checks. In a private email at the time, she frets about going to prison over these deceptions.

What she doesn’t seem to know is where these ineligible workers come from. They’re all from North Korea. And the hundreds of companies employing Chapman’s remote workers are ultimately sending money to the Hermit Kingdom.

“And that is, at this point, bringing in hundreds of millions of dollars to the regime according to the Feds,” McMillan told me. “And … they like to remind us that’s being used to fund their weapons program. Which is pretty scary.”

Chapman is running what’s come to be known as a laptop farm. And while the details about her situation, revealed in McMillan’s Wall Street Journal story, are incredible, laptop farms are not unusual. Fake remote workers are a rampant problem.

“It seems basically if you work for a Fortune 500 company, I would be shocked if you haven’t had a North Korean at least apply for a job there. And many of them have hired people,” he said.

Eventually, one of Chapman’s clients does something suspicious, and the company complains to the FBI. Their investigation reveals hundreds of laptop computers humming away in Chapman’s home, essentially downloading millions of dollars from U.S. companies and funneling it to North Korea, evading U.S. sanctions. She’s arrested, ultimately pleads guilty and is sentenced to eight years in prison.

“My impression is that when she initially started out, it was to receive a higher-paying job,” said FBI agent Joe Hooper. “She got wrapped up in actually getting paid for what she was doing, and she knew she was doing something wrong, but was looking the other way.”

Ultimately, prosecutors say Chapman helped get North Koreans paying jobs at 300 U.S. companies. They included a top-5 major television network, a Silicon Valley technology company, an aerospace manufacturer, an American car maker, a luxury retail store and a U.S. media and entertainment company. Collectively, they paid Chapman’s laptop farm workers $17 million. Over a three-year period, she made about $150,000. So, she wasn’t really living like that queen from Drunken Shakespeare.

“They target the vulnerable and she definitely was vulnerable,” McMillan said. “She was, I think, a well-intentioned person who was just, just desperate and you do feel sad for her watching the videos because she didn’t make a ton of money, she didn’t appear to be, have any animus toward the United States. There’s no evidence really that I’ve seen that she actually knew she was working for North Korea, but at a certain point, like it was clear, it was clearly, she clearly knew she was working on a scam.”

Clark Flynt-Barr, now government affairs director for AARP (owner and producer of The Perfect Scam), used to work for Chainalysis, which conducts cryptocurrency investigations. She told me that some North Korean remote workers hang onto their jobs for months, or even years. Some are good employees, even, and don’t know they are a pawn in their government’s effort to evade sanctions.

“They’re good at their job and they’re, in some cases, quite shocked to learn that they’re a criminal who has infiltrated the company,” she said.

It’s hard for me to imagine that companies can have remote workers they know so little about — don’t they ever ask how the spouse and kids are? — but McMillan said the arrangement works well for many software developers.

“I think there are a lot of companies where software development is not necessarily their core competency, but they have to have some software…and so they hire these people who are pretty used to offshoring coding to other countries,” he said. “Basically, all they care about is, ‘Just make the software work. Do the magic, spread, spread the magic, software pixie dust and just get this done.’ ”

The remote work scam grew out of long-running efforts by North Korean hackers to steal cryptocurrency, McMillan said. Many were working to get hired by crypto firms so they could pull inside jobs, and then realized there was money to be made in simply collecting paychecks.

The good news is laptop farms are now squarely in the focus of the FBI. A DOJ press release from June indicates that search warrants were executed on 29 different laptop farms all around the country, and there was actually a guilty plea in Massachusetts.

There’s a side note to the story that’s pretty amusing: cybersecurity researchers have come to learn that many North Korean workers go by the name “Kevin” because they are fans of the Despicable Me movie franchise. You can hear more about that, and much more from Christina Chapman’s TikTok account, if you listen to this episode of The Perfect Scam. But in case podcasts aren’t your thing, some crucial advice: Don’t tell the online world you are desperate; that makes you a target. If you are hiring, make sure you know who you are hiring and where they live. Ask about the family! And if you are looking for a job, know that there are many criminals out there who can make almost anything sound legitimate.

And one other note that’s hardly amusing: there’s another set of victims in this story, people whose identities are used to facilitate the remote worker deception. Some of these people don’t find out about it until they get a bill from the IRS for failure to pay taxes on income earned by the criminal. That’s why it’s important to check your credit and your Social Security earnings statement often.


New Study Reveals Insider Threats and AI Complexities Are Driving File Security Risks to Record Highs, Costing Companies Millions

Larry Ponemon

As threats continue to accelerate and increase in cost, cyber resilience has shifted from being a technical priority to being a strategic, fiscal imperative. Executives must take ownership by investing in technology that reduces risk and cost while enabling organizations to keep pace with an ever-evolving AI landscape.

The purpose of this research is to learn what organizations are doing to achieve an effective file security management program. Sponsored by OPSWAT, Ponemon Institute surveyed 612 IT and IT security practitioners in the United States who are knowledgeable about their organizations’ approach to file security.

“A multi-layered defense that combines zero-trust file handling with advanced prevention tools is no longer optional but is the standard for organizations looking to build resilient, scalable security in the AI era,” said George Prichici, VP of Products at OPSWAT. “Leveraging a unified platform approach allows file security architectures to adapt to new threats and defend modern workflows and complex file ecosystems inside and outside the perimeter.”

File security refers to the methods and techniques used to protect files and data from unauthorized access, theft, modification or deletion. It involves using various security measures to ensure that only authorized users can access sensitive files and to protect files from security threats. As shown in this research, the most serious risks to file security are data leakage caused by negligent and/or malicious insiders and the lack of visibility into, and control over, who is accessing files.
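The "only authorized users can access sensitive files" requirement can be sketched as a deny-by-default access check. This is a minimal illustration, not any vendor's implementation; the roles, users and file paths are invented.

```python
# A minimal sketch of file-level access control with a deny-by-default
# policy. The role names and file paths below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class FileACL:
    # Maps a file path to the set of roles allowed to read it.
    rules: dict = field(default_factory=dict)

    def grant(self, path: str, role: str) -> None:
        self.rules.setdefault(path, set()).add(role)

    def can_read(self, path: str, user_roles: set) -> bool:
        # Deny by default: files with no rules are inaccessible.
        allowed = self.rules.get(path, set())
        return bool(allowed & user_roles)

acl = FileACL()
acl.grant("/finance/q3-report.xlsx", "finance")

print(acl.can_read("/finance/q3-report.xlsx", {"finance"}))    # True
print(acl.can_read("/finance/q3-report.xlsx", {"marketing"}))  # False
```

The key design choice is the default-deny return for unknown paths, which addresses the visibility-and-control gap the survey respondents describe.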

Attacks on sensitive data in files are frequent and costly, underscoring the need to invest in technologies and practices that reduce the threat. Sixty-one percent of respondents say their organizations have had an average of eight data breaches or security incidents due to unauthorized access to sensitive and confidential data in files in the past two years.

Fifty-four percent of respondents say these breaches and incidents had financial consequences. The average cost of incidents over the past two years was $2.7 million, and 66 percent of respondents say the total cost of all incidents in that period ranged from $500,000 to more than $10 million.

Organizations’ bottom lines are hurt by the loss of customer data and diminished employee and workplace productivity, the most common consequences of these security incidents.

Insights into the state of file security

Insiders pose the greatest threat to file security. The most serious risk comes from malicious and negligent insiders who leak data (45 percent of respondents). Other top risks are lack of visibility into and control over file access (39 percent of respondents) and vendors providing malicious files and/or applications (33 percent of respondents). Only 40 percent of respondents say their organizations can detect and respond to file-based threats within a day (25 percent) or within a week (15 percent).

Files are most vulnerable when they are shared, uploaded and transferred. Only 39 percent of respondents are confident that files are secure when transferring files to and from third parties, and only 42 percent of respondents are confident that files are secure during the file upload stage. The Open Web Application Security Project (OWASP) has released principles on securing file uploads. According to 40 percent of respondents, the principle most often used, or planned for use, is storing files on a different server. Thirty-one percent of respondents say they allow only authorized users to upload files.
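Two of the controls mentioned above, restricting uploads to authorized users and storing files away from where they were received, can be sketched in a few lines. This is a hedged illustration under assumed conventions (the allow-list, the server-generated names and the directory parameter are all inventions for the example), not OWASP's reference code.

```python
# A sketch of two OWASP-style upload controls: authorization checks and
# storing uploads under server-generated names in a separate directory
# (standing in for a different server). Allow-list and paths are illustrative.
import os
import tempfile
import uuid

ALLOWED_EXTENSIONS = {".pdf", ".png", ".txt"}   # illustrative allow-list

def safe_store(filename: str, data: bytes, authorized: bool,
               upload_dir: str) -> str:
    """Validate and store an upload; return the server-side path."""
    if not authorized:
        raise PermissionError("only authorized users may upload files")
    ext = os.path.splitext(filename)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"file type {ext!r} is not allowed")
    # Never trust the client-supplied name; generate our own.
    dest = os.path.join(upload_dir, f"{uuid.uuid4().hex}{ext}")
    with open(dest, "wb") as f:
        f.write(data)
    return dest

uploads = tempfile.mkdtemp()   # stands in for storage off the web root
stored = safe_store("report.pdf", b"%PDF-1.7 ...", True, uploads)
print(stored.endswith(".pdf"))  # True
```

A production handler would also inspect file content (not just the extension) and scan it before use; those steps are omitted here for brevity.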

The file-based environment that poses the most risk is file storage such as on-premises, NAS and SharePoint, according to 42 percent of respondents. Forty percent of respondents say web file uploads such as public portals and web forms are a security risk.

Macro-based malware and zero-day or unknown malware are the types of malicious content of greatest concern to file security. Organizations have encountered both, and 44 percent and 43 percent of respondents, respectively, say they are most concerned about them.

The effectiveness of file management practices is primarily measured by how productive IT security employees are, according to 52 percent of respondents. Other metrics include the assessment of the security of sensitive and confidential data in files (49 percent of respondents) and fines due to missed compliance (46 percent of respondents). Only about half (51 percent of respondents) say their organizations are very or highly effective in complying with various industry and government regulations that require the protection of sensitive and confidential information.

Country-of-origin checks and DLP are the controls most likely to be in use, or planned, to improve file security management practices. Country of origin is mainly used to neutralize zero-day or unknown threats (51 percent of respondents). The main reasons to use DLP are to prevent leaks of sensitive data and to control file sharing and access (both 44 percent of respondents).

Most companies are also using or planning to use content disarm and reconstruction (66 percent of respondents), software bill of materials (65 percent of respondents), multiscanning (64 percent of respondents), sandboxing (62 percent of respondents), file vulnerability assessment (61 percent of respondents) and the use of threat intelligence (57 percent of respondents).

AI is being used to mitigate file security risks and reduce the costs to secure files. Thirty-three percent of respondents say their organizations have made AI part of their organizations’ file security strategy and 29 percent plan to add AI in 2026. To secure sensitive corporate files in AI workloads, organizations primarily use prompt security tools (41 percent of respondents) and mask sensitive information (38 percent of respondents).
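The "mask sensitive information" approach reported by 38 percent of respondents can be illustrated with a simple redaction pass applied before file contents reach an AI workload. The two patterns below (U.S. SSN and email) are examples only; real DLP engines use far richer detectors.

```python
# A minimal illustration of masking sensitive values before text from a
# file is sent to an AI workload. Patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the claim."
print(mask(prompt))
# Contact [EMAIL REDACTED], SSN [SSN REDACTED], about the claim.
```

Masking at the boundary like this keeps the raw identifiers out of prompts, logs and model context, which is the goal the respondents describe.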

Twenty-five percent of organizations have adopted a formal generative AI (GenAI) policy, 27 percent of respondents say their organizations take an ad hoc approach and 29 percent say GenAI is banned.

The security of data files is most vulnerable when transferring files to and from third parties. Only 39 percent of respondents say their organizations have high confidence in the security of files when transferring them to and from third parties.

Only 42 percent of respondents have high confidence in the security of files during the file upload stage (internal/external) and when sharing files via email or links. Forty-four percent of respondents say their organizations are highly confident in the security of files when downloading them from unknown sources. Organizations have more confidence when storing files in the cloud, on-premises or hybrid (54 percent of respondents) or in the security of backups (53 percent of respondents).

To read the key findings from this research, download the full report at OPSWAT.COM

 

The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI

Larry Ponemon

The Ponemon Institute and OpenText recently released a new global report, “The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI,” revealing that while enterprise IT leaders recognize the transformative potential of AI, a gap in information readiness is causing their organizations to struggle in securing, governing, and aligning AI initiatives across businesses.

The purpose of this research is to drive important insight into how IT and IT security leaders are ensuring the security of information without hindering business goals and innovation.

A key takeaway is that IT and IT security leaders are under pressure to ensure sensitive and confidential information is secure and compliant without making it difficult for organizations to innovate and pursue opportunities to grow the business.

“This research confirms what we’re hearing from CIOs every day. AI is mission-critical, but most organizations aren’t ready to support it,” said Shannon Bell, Chief Digital Officer, OpenText. “Without trusted, well-governed information, AI can’t deliver on its promise.”

The research also reveals what needs to be done to achieve AI readiness based on the experiences of the 50 percent of organizations that have invested in AI. These include preventing the exposure of sensitive information, strengthening encryption practices and reducing the risk of poor or misconfigured systems due to over-reliance on AI for cyber risk management. When deploying, organizations should develop an AI data security program, use tools to validate AI prompts and their responses, train teams to spot AI-generated behavior patterns or threat actors, use data cleansing and governance and identify and mitigate bias in AI models for safe and responsible use.

Metrics that demonstrate the value of the IT security program to the business are the top priority for the next 12 months. Some 47 percent of respondents plan to use metrics to show the value IT security brings to the organization. This is followed by acceleration of digital transformation and automation of business processes (both 44 percent of respondents). Forty percent of respondents say a top-three priority is the identification and prioritization of threats affecting business operations.

Organizations recognize the need to make AI part of their security strategy, but difficulties in adoption exist.

Fifty percent of respondents say their organizations are using AI as part of their security strategy, but 57 percent of respondents rate the adoption of AI as very or extremely difficult, and 53 percent of respondents say it is very or extremely difficult to reduce potential AI security and legal risks. Foundational to success is ensuring AI is secure, compliant and governed.

AI deployment has the support of senior leaders. Compared to other IT initiatives, 57 percent of respondents say AI initiatives have a high or very high priority. Fifty-five percent of respondents say their CEOs and Boards of Directors consider the use of AI as part of their IT and security programs as very or extremely important. A possible reason for such support is that 54 percent of respondents are confident or very confident in their organizations’ ability to demonstrate ROI from AI initiatives.

 CEOs, CIOs and CISOs are most likely to have authority for setting AI strategy. Fifteen percent of CEOs, 14 percent of CIOs and 12 percent of CISOs have final authority for such AI initiatives as technology investment decisions and the priorities and timelines for deployment.

 Despite leadership’s support for AI, IT/IT security and business goals may not be in alignment. Less than half (47 percent of respondents) say IT/IT security and business goals are in alignment with those who are responsible for AI initiatives. Fifty percent of respondents say their organizations have hired or are considering hiring a chief AI officer or a chief digital officer to lead AI strategy. Such an appointment of someone dedicated to managing the organization’s AI strategy may help bridge gaps between the goals and objectives of IT/IT security with those who have final authority over AI strategy.

Concerns about privacy can cause delays in AI adoption. The inadvertent infringement of privacy rights is considered the top risk caused by AI. Forty-four percent of respondents say their biggest concern is making sure risks to privacy are mitigated. Other concerns are weak or no encryption (42 percent of respondents) and poor or misconfigured systems due to over-reliance on AI for cyber risk management.

Developing a data security program and practice is considered the most important step to reduce risks from AI. Fifty-three percent of respondents say it is very difficult or extremely difficult to reduce potential AI security and legal risks. To address data security risks in AI, 46 percent of respondents say they are developing a data security program and practice. Other steps are using tools to validate AI prompts and their responses (39 percent of respondents), training teams to spot AI-generated behavior patterns or threat actors (39 percent of respondents), using data cleansing and governance (38 percent of respondents) and identifying and mitigating bias in AI models for safe and responsible use (38 percent of respondents).

Despite being a priority, the top governance challenge is insufficient budget for investments in AI technologies. Thirty-one percent of respondents say there is insufficient budget for AI-based technologies. This is followed by 29 percent of respondents who say there is not enough time to integrate AI-based technologies into security workflows, 28 percent of respondents who say IT and IT security functions are not aligned with the organization’s AI strategy and 28 percent of respondents say their organizations can’t recruit personnel experienced in AI-based technologies.

 The adoption of GenAI and Agentic AI

 GenAI is considered very or highly important to organizations’ IT and overall business strategy because it improves operational efficiency and worker productivity. Of the 50 percent of organizations that have adopted AI, 32 percent have adopted GenAI as part of their IT or overall business strategy and 26 percent will adopt GenAI in the next six months. Fifty-eight percent of these respondents say GenAI is important to highly important to their organizations’ IT and overall business strategy.

 GenAI supports security operations and employee productivity. The most important GenAI use cases are supporting security operations (e.g. analyzing alerts, generating playbooks) (39 percent of respondents), improving employee productivity (e.g. drafting documents, summarizing content) (36 percent of respondents), assisting with software development (e.g. code generation or debugging) (34 percent of respondents) and accelerating threat detection or incident response (34 percent of respondents).

 Copyright and other legal risks are the biggest challenges to an effective GenAI program. Respondents were asked to identify the biggest challenges to an effective GenAI program. Forty-three percent of respondents say copyright and other legal risks are the top challenge to an effective GenAI program. Thirty-seven percent of respondents say lack of in-house expertise and 36 percent of respondents say regulatory uncertainty and changes are barriers to an effective GenAI program.

 Organizations are slow to adopt Agentic AI as part of their overall IT and business strategy. While 32 percent of respondents who are using AI have adopted GenAI, only 19 percent have adopted Agentic AI. Only 31 percent of the organizations that have adopted Agentic AI say it is very or extremely important to their organizations’ IT and business strategy.

Organizations’ approaches to securing data and supporting business innovation

 Ensuring the high availability of IT services supports business innovation. Respondents were asked what is most critical to supporting business innovation. Forty-seven percent of respondents say it is ensuring high availability of IT services and 43 percent of respondents say it is recruiting and retaining qualified personnel. Another important step, according to 39 percent of respondents, is to reduce security complexity by integrating disparate security technologies.

 Business innovation is dependent upon IT’s agility in supporting frequent shifts in strategy. Fifty-three percent of respondents say it is very difficult to support business goals and transformation. To support innovation the most important digital assets to secure are source code (44 percent of respondents), custom data (44 percent of respondents), contracts and legal documents (42 percent of respondents) and intellectual property (42 percent of respondents).

The importance of proving the business value of technology investments

Only 43 percent of respondents say their organizations are very or highly confident in the ability to measure the ROI of investments related to securing and managing information assets. The biggest challenge in demonstrating ROI for information management and security technologies is the inability to track downstream business impacts (52 percent of respondents).

The ROI of downstream business impacts involves understanding the indirect benefits and costs that ripple outwards from an initiative, activity or technology investment. Examples to measure include reduced errors and rework, increased efficiency and productivity and reduced compliance risks. Other challenges are the difficulty in quantifying intangible benefits (51 percent of respondents) and competing priorities (47 percent of respondents).

 Organizations are eager to see the ROI from security technologies.  Calculating ROI is important to proving the business value of IT security investments. It is helpful in making informed decisions about IT security strategies and investments, evaluating performance and calculating profitability. ROI from investments is expected to be shown within six months to one year according to 55 percent of respondents. Forty-five percent of respondents say the timeline is one year to two years (21 percent) or no required timeframe (24 percent).
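The ROI calculation described above is straightforward arithmetic. Here is a worked sketch using invented numbers (the $400,000 program cost and $1.1 million in avoided losses are assumptions for illustration, not figures from the study).

```python
# A worked example of the ROI calculation for a security investment.
# All dollar figures below are invented for illustration.
def security_roi(benefit: float, cost: float) -> float:
    """Classic ROI: (benefit - cost) / cost, expressed as a percentage."""
    return (benefit - cost) / cost * 100

avoided_losses = 1_100_000   # estimated losses prevented (assumption)
program_cost   =   400_000   # cost of the control over the period (assumption)

print(f"ROI: {security_roi(avoided_losses, program_cost):.0f}%")  # ROI: 175%
```

The hard part in practice, per the findings above, is not the formula but estimating the benefit term: avoided losses, reduced rework and other downstream impacts.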

 Security strategies and technology investments should address the risks of ransomware and malicious insiders.  Fifty-three percent of respondents say their organizations had a data breach or cybersecurity incident in the past two years. The average number of incidents was three. During this time, only 28 percent of respondents say cybersecurity incidents have decreased (18 percent) or decreased significantly (10 percent). Ransomware and malicious insiders are the most likely cyberattacks, according to 40 percent and 37 percent of respondents, respectively. The data most vulnerable to insider risks are customer or client data (58 percent of respondents), financial records (46 percent of respondents) and source code (43 percent of respondents).

 Malicious insiders pose a significant risk to data security. Encryption for data in transit (39 percent of respondents), email data loss prevention (35 percent of respondents), and encryption for data at rest (35 percent of respondents) are primarily used to reduce the risk of negligent and malicious insiders.
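The top control cited, encryption for data in transit, is typically enforced at the transport layer. A stdlib-only sketch of a strict client-side TLS configuration (the helper name is mine; this is one reasonable policy, not a universal standard):

```python
# A stdlib-only sketch of enforcing encryption for data in transit: an
# ssl context that requires modern TLS and verifies the server's identity.
import ssl

def strict_client_context() -> ssl.SSLContext:
    ctx = ssl.create_default_context()            # verifies certs by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocols
    ctx.check_hostname = True                     # bind cert to the hostname
    ctx.verify_mode = ssl.CERT_REQUIRED           # reject unverified peers
    return ctx

ctx = strict_client_context()
print(ctx.minimum_version >= ssl.TLSVersion.TLSv1_2)  # True
```

A context like this would be passed to the socket or HTTP client that moves files between systems, so that data never crosses the wire unencrypted or to an unauthenticated peer.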

Organizations find it difficult to reduce negligent or malicious insider data loss incidents without jeopardizing trust. Fifty-one percent of respondents say their organizations are effective or very effective in their ability to monitor insider activity across hybrid and/or remote environments. Only 41 percent of respondents say their organizations are effective or very effective in creating trust while taking steps to reduce data loss incidents caused by negligent or malicious insiders.

 Reducing complexity in organizations’ IT security architecture is needed to have a strong security posture. Seventy-three percent of respondents say reducing complexity is essential (23 percent), very important (23 percent) and important (27 percent). Complexity increases because of new or emerging cyber threats (52 percent of respondents), the Internet of Things (46 percent of respondents) and the rapid growth of unstructured data (44 percent of respondents).

Accountability for reducing complexity is essential. To reduce complexity the most essential steps are to appoint one person to be accountable (59 percent of respondents), streamline security and data governance policies (56 percent of respondents) and reduce the number of overlapping tools and platforms (55 percent of respondents). On average, organizations have 15 separate cybersecurity technologies.

To read more key findings and download the entire report, click here. (PDF)

Yes, there is a 9-1-1 for scam victims. Get to know the guardian angels of the Internet — AARP’s Fraud Watch Network

Bob Sullivan

Many years ago, a very smart book editor I worked with (Jill Schwartzman at Dutton now) gently admonished me because I failed to include resources for consumers in my tirades about the mistreatment of consumers.  So was born concepts like “Red Tape Tips” I’d include at the end of my columns and an appendix in each book listing consumer advocacy organizations.  But that experience forced me to face a stark reality: Most of these organizations don’t really take phone calls. While there are plenty of well-meaning non-profit groups out there who try to fix broken policies that favor the Gotcha Capitalists and criminals — there are hardly any organizations set up to field calls from people who are hurting and need help right now.

There’s no 9-1-1 for a consumer who’s about to get ripped off.

Actually, there is. It’s AARP’s Fraud Watch Network helpline. And I’m proud to say that my work on AARP’s Perfect Scam podcast helps highlight the important work they do.

First, let me say I don’t fault the folks who created or work at various grassroots consumer organizations. They often toil away with skeleton staffs and meager funding, true Davids in a battle against billion-dollar Goliaths.  But it’s just not practical for them to take calls and offer customer support to individual victims or take on their cases.

And yes, if you are the victim of a crime, you can and should call 9-1-1 (or the non-emergency line) and report that to the police. Unfortunately, many in-progress scams are difficult to report — “what’s the crime?” — and local police aren’t always set up to offer on-the-spot advice or empathetic listening.

That’s why I’m happy to talk about the Fraud Watch Network helpline. It’s staffed Monday through Friday, 9 a.m. to 9 p.m., mainly by trained volunteers, who reach out to every caller within about 24 hours and offer both empathetic listening and practical advice. They’ve stopped millions of dollars in criminal transactions by giving people a place to turn when they’re in crisis.

Who are these guardian angels of this dangerous digital age? In this week’s Perfect Scam episode, I spotlight two volunteers who do this work. Like most helpline volunteers, Dee Johnson and Mike Alfred are former victims who once called the helpline themselves; now they are two of the 150 volunteers who give their time because they feel called to help others.

At this link, you can find a partial transcript of the episode, in case podcasts aren’t your thing. I do hope you’ll listen, however. You’ll really like Mike and Dee. I want readers to see their kindness and empathy in action — those are in short supply these days, I fear.  But more than anything, I want readers to know that there is a 9-1-1 for scams.  If you or someone you love is caught up in an Internet crime right now, I urge you to call the AARP Fraud Watch Network Helpline at 877-908-3360 or visit the website. You’ll get near-immediate help from experts who really care.

You can also email me, of course, at the address on my contact page. Or you can email The Perfect Scam team at theperfectscampodcast@aarp.org.

AARP’s Helpline is part of AARP’s Fraud Watch Network. In addition to volunteers helping victims, the network has roughly a thousand trained volunteers working in their communities and online to spread the message of fraud prevention. To learn more, visit

Optimizing What Matters Most: The State of Mission-Critical Work

In this study, mission-critical work refers to tasks, systems or processes within an organization that are essential to its operations and survival. If they fail or are disrupted, the organization’s entire operations could be significantly impacted or even brought to a complete halt. It is the most critical work that must be done without interruption to maintain functionality. The difference between mission-critical and business-critical systems is that mission-critical systems are vital to the core mission or primary functions of the organization, while business-critical systems are crucial to business operations and support the organization’s core processes.

In addition to the impact on the sustainability of organizations, mission-critical failures and disruptions can have a ripple effect with significant economic consequences for government and industry sectors. This is particularly the case when failures and disruptions involve critical infrastructure, which encompasses systems and assets essential for the functioning of a society and its economy. These failures can disrupt supply chains, impact business productivity, and lead to economic losses.

To reduce disruptions and failures, organizations need to assess their ability to manage the risks to tasks, systems and/or processes as well as to protect and secure sensitive and confidential data in mission critical workflows.  However, as shown in this research, confidence in understanding the risk, security and privacy vulnerabilities in mission-critical workflows is low.

Respondents were asked to rate their confidence in the privacy and security and their ability to understand the risk profile of their organization’s mission-critical workflows on a scale of 1 = no confidence to 10 = highly confident. Only 47 percent of respondents say they are very or highly confident in understanding the risk profile of mission-critical workflows. Slightly more than half of respondents (52 percent) are very or highly confident in the privacy and security of mission-critical workflows.

The importance of optimizing mission-critical workflows

In the past 12 months, 64 percent of organizations report experiencing an average of six disruptions or failures in executing mission-critical workflows. Respondents say cyberattacks are the number one reason mission-critical failures and disruptions occur. To prevent these incidents, 61 percent of organizations in this research believe a strong security posture is critical.

The disruption or failure of mission-critical workflows can result in the loss of high-value information assets. This is followed by data center downtime, which not only prevents mission-critical work from being completed but can have severe financial consequences. Sixty-three percent of respondents say the number one metric used to measure the cost of a disruption or failure is the cost of downtime of critical operations. According to a study conducted by Ponemon Institute in 2020, the average cost of a single data center downtime incident was approximately $1 million. Forty-six percent of respondents say their organizations’ survivability was affected because of a complete halt to operations.

A strong security posture and knowledgeable mission-critical staff are the most important factors to prevent mission-critical disruption and failures. Organizations need to secure mission-critical workflows to avoid disruptions or failures (61 percent of respondents) supported by a knowledgeable mission-critical staff (57 percent of respondents). Also important is an enterprise-wide incident response plan (51 percent of respondents).

Few organizations have risk mitigation strategies in place as part of their mission-critical collaboration tools. According to 47 percent of respondents, their organizations use mission-critical collaboration tools. However, only 39 percent of respondents have risk mitigation strategies in place. Of these respondents, 59 percent say they have backup procedures to prevent data loss and 54 percent say they have contingency plans to handle unexpected events.

Cyberattacks and system glitches were the primary causes of the disruption or failure. To reduce the likelihood of a disruption or failure, organizations need to ensure the security of their mission-critical workflows. Fifty percent of respondents cite cyberattacks as the cause of disruption or failure followed by 49 percent who say it was a system glitch. Sixty-one percent say a strong security posture is the most important step to preventing disruptions and failures.

Measuring the financial consequences of a disruption or failure can help organizations prioritize the resources needed to secure mission-critical workflows. Fifty-three percent of respondents say their organizations measured the cost of the disruption or failure in executing mission-critical workflows. The metrics most often used are the cost of downtime of critical operations (63 percent of respondents), which is the number two consequence of a disruption or failure. Other metrics are the cost to recover the organization’s reputation (51 percent of respondents) and the cost to detect, identify and remediate the incident (50 percent of respondents).

Organizations should consider increasing the role of IT and IT security functions in assessing cyber risks that threaten workflow’s reliability. Despite the threat of a cyberattack targeting mission-critical workflows, only 16 percent of respondents say the CISO and only 10 percent of respondents say the CIO are most responsible for executing mission-critical workflows securely. The function most responsible is the business unit leader, according to 26 percent of respondents.

A dedicated team supports the optimization of mission-critical workflows. Fifty-six percent of respondents say their organizations have a team dedicated to managing mission-critical workflows. The 44 percent of organizations without a dedicated team say it is very or highly difficult to accomplish the goals of mission-critical workflows. According to the research presented in this report, a dedicated team gives organizations the following advantages.

  • Increased effectiveness in prioritizing critical communications among team members
  • More likely to be able to prevent disruptions and failures in executing mission-critical workflows
  • More likely to measure the costs of a disruption or failure to improve the execution of mission-critical workflows
  • Improved efficiency of mission-critical workflow management and effectiveness in streamlining mission-critical workflows
  • More likely to use mission-critical collaboration tools

Mission-critical workflows require setting clear objectives, understanding the requirements, mapping workflows and managing risks. The two activities most often used to manage mission-critical workflows are analyzing current workflow processes (47 percent of respondents) and training mission-critical employees (44 percent of respondents). Only 34 percent of respondents say their organizations are very or highly effective in prioritizing critical communication among team members.

Mission-critical workflows can be overly complex and inefficient. Taking steps to automate repetitive tasks where possible and to regularly review and update workflows are only used by 38 percent and 36 percent of organizations, respectively. Only 46 percent of respondents say their organizations are very or highly effective in streamlining mission-critical workflows to improve their efficiency and very or highly efficient in managing mission-critical workflows.

Ineffective communication about the execution of mission-critical workflows can put organizations’ critical operations at risk. Sixty percent of respondents cite the lack of real-time information sharing, and 58 percent the lack of secure information sharing, as barriers to effectively executing mission-critical workflows.

Enterprise-wide incident response plans should be implemented to reduce the time to respond, contain and remediate security incidents that compromise mission-critical workflows. Fifty-one percent of respondents say an enterprise-wide incident response plan is critical to the prevention of disruption and failures. Fifty-nine percent of organizations measure effectiveness based on how quickly compromises to mission-critical workflows are addressed. Organizations also measure their ability to prevent and detect cyberattacks against mission-critical workflows.
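The "how quickly compromises are addressed" metric is commonly computed as mean time to remediate (MTTR). A minimal sketch, with invented incident timestamps:

```python
# A small sketch of a response-time metric: mean time to remediate,
# computed from (detected, resolved) timestamp pairs. Data is invented.
from datetime import datetime, timedelta

def mttr(incidents: list) -> timedelta:
    """Average of (resolved - detected) across a list of incidents."""
    total = sum((resolved - detected for detected, resolved in incidents),
                timedelta())
    return total / len(incidents)

incidents = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 13, 0)),  # 4 hours
    (datetime(2025, 3, 5, 8, 0), datetime(2025, 3, 5, 10, 0)),  # 2 hours
]
print(mttr(incidents))  # 3:00:00
```

Tracking this number over time is one way an enterprise-wide incident response plan can demonstrate that compromises to mission-critical workflows are being addressed faster.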

Organizations are adopting AI to improve the management of mission-critical workflows. However, organizations need to consider the potential AI security risks to mission-critical workflows. Fifty-one percent of respondents say their organizations have deployed AI. Most often, AI is used to automate repetitive tasks (60 percent) and to secure data used by and harvested from large language models (LLMs). The top AI security risks according to respondents are potential leakage or theft of confidential and sensitive data (53 percent) and potential backdoor attacks on their AI infrastructure, such as sabotage or malicious code injection (48 percent).

Mission-critical collaboration tools are considered very or highly effective, but adoption is slow. Only 47 percent of respondents use mission-critical collaboration tools. However, 54 percent of respondents say these tools are very or highly effective in making workflows efficient with minimum disruption to critical operations. The features considered most important are data encryption (61 percent of respondents), data loss prevention (56 percent of respondents) and the ability to securely enable real-time communication between teams (56 percent of respondents).

To read the rest of this report, including key findings, please visit Mattermost.com

US (finally) issues warning about crypto ATMs

Bob Sullivan

Finally, crypto ATMs are getting a bit of the attention that they deserve.

As host of AARP’s The Perfect Scam podcast, I talk to crime victims every week.  A few years ago, a majority had their money stolen via bogus gift card transactions. Today, it feels like almost every person is the victim of a cryptocurrency scam, and many have their money stolen through crypto ATMs.

I’m sure you’ve seen these curious machines, also known as convertible virtual currency (CVC) kiosks, in convenience stores and gas stations. Put cash in, and you can send or receive crypto around the world.

Crypto ATMs, in theory, democratize crypto. Someone who wouldn’t feel comfortable buying crypto online can do so in a familiar way, using a machine that works just like the ones we’ve used to get cash for many years.  Perhaps you won’t be surprised to hear that crypto ATMs are a bad deal.  Set aside crypto volatility and high transaction fees for a moment: No one who feels uncomfortable opening an online crypto account should be buying or transmitting crypto. Period.

And yet, these crypto ATMs are sprouting up like weeds, at a time when old-fashioned ATMs are disappearing. There were roughly 4,000 crypto ATMs in 2019, and there were more than 37,000 by January of this year.

I know that because the U.S. Treasury’s Financial Crimes Enforcement Network — FinCEN — published a notice Aug. 4 warning financial institutions about crypto ATMs and their connection to crime. The agency also said many of these devices are being put into service without registering as money service businesses with FinCEN, and their operators are sometimes failing to report suspicious activity.

As I mentioned, there really isn’t a use case for these fast-proliferating devices.  Well, there’s one. When a criminal has a victim confused and manipulated, the fastest way to steal their money is to persuade them to drive to the nearest crypto ATM and feed the machines with $100 bills. I’ve talked to countless victims who’ve told me harrowing, tragic tales of crouching in the dark corner of a gas station, shoving money into one of these machines, terrified they are being watched.  In fact, they aren’t. Employees are told not to get involved. So victims drive away, their money stolen in the fastest way possible.  The transfer is nearly instant, faster than a wire transfer, and irrevocable.

That means it’s the perfect gadget for criminals like the Jalisco Cartel in Mexico to steal cash from Americans. Particularly elderly Americans, FinCEN says. According to FTC data, people aged 60 and over were more than three times as likely as younger adults to report a loss using a crypto ATM.

“These kiosks have increasingly facilitated elder fraud, especially among tech/customer supports scams, government impersonation, confidence/romance scams, emergency/person-in-need scams, and lottery/sweepstakes scams,” FinCEN said. And the losses are huge. “In 2024, the FBI’s IC3 received more than 10,956 complaints reporting the use of CVC kiosks, with reported victim losses of approximately $246.7 million. This represents a 99 percent increase in the number of complaints and a 31 percent increase in reported victim losses from 2023.”

In other words, we have a five-alarm fire on our hands.  One that’s been blazing in broad daylight for at least a year and yet…every week, I continue to interview victims who crouched near a crypto ATM for days on end, stuffing bills into these machines, thinking they were doing the right thing.

Banks and kiosk operators should do much more. The current daily limits on transactions aren’t low enough; victims are just instructed to drive all over town, or make daily deposits for weeks on end, so criminals can steal hundreds of thousands of dollars this way. Regulators should do more, too. If the majority of transactions flowing through a certain kiosk can be traced to fraud, the machine should be removed immediately. It’s not impossible. The UK ordered all crypto ATMs shut down recently.

Tech can enhance our lives; it can also be weaponized. And when it is, we shouldn’t stand idly by and act as if we are powerless to stop the pain it is causing our most vulnerable people.

The State of Identity and Access Management (IAM) Maturity

Larry Ponemon

Identity Management Maturity (IDM) refers to the extent to which an organization effectively manages user identities and access across its systems and applications. It’s a measure of how well an organization is implementing and managing Identity and Access Management (IAM) practices. A mature IDM program ensures that only authorized users have access to the resources they need, enhancing security, reducing risks and improving overall efficiency.

Most organizations remain in the early to mid-stages of Identity and Access Management (IAM) maturity, leaving them vulnerable to identity-based threats. This new study of 626 IT professionals by the Ponemon Institute, sponsored by GuidePoint Security, highlights that despite growing awareness of insider threats and identity breaches, IAM is under-prioritized compared to other IT security investments. All participants in this research are involved in their organizations’ IAM programs.

Key Insights:

  • IAM is underfunded and underdeveloped.

Only 50 percent of organizations rate their IAM tools as very or highly effective, and even fewer (44 percent) express high confidence in their ability to prevent identity-based incidents. According to 47 percent of organizations, investments in IAM technologies trail behind other security investment priorities.

  • Manual processes are stalling progress.

 Many organizations still rely on spreadsheets, scripts and other manual efforts for tasks like access reviews, deprovisioning and privileged access management—introducing risk and inefficiencies.

  • High performers show the way forward.

 High performers in this research are those organizations that self-report their IAM technologies and investments are highly effective (23 percent). As a result, they report fewer security incidents and stronger identity controls. These organizations also lead the other organizations represented in this research in adopting biometric authentication, identity threat detection and integrated governance platforms.

  • Technology and expertise gaps persist.

 A lack of tools, skilled personnel and resources is preventing broader progress. Many IAM implementations are driven by user experience goals rather than security or compliance needs.

Bottom Line:

Achieving IAM maturity requires a strategic shift—moving from reactive, manual processes to integrated, automated identity security. Organizations that treat IAM as foundational to cybersecurity, not just IT operations, are best positioned to reduce risk, streamline access and build trust in a dynamic threat landscape.

Part 2. Introduction: Including a Peek at High Performer Trends

The purpose of an Identity and Access Management program (IAM) is to manage user identities and access across systems and applications. A mature IAM program ensures that only authorized users have access to the resources they need to enhance security, reduce risks and improve overall efficiency.

This survey, sponsored by GuidePoint Security, was designed to understand how effective organizations are in achieving IAM maturity and which tools and practices are critical components of their identity and access management programs. A key takeaway from the research is that organizations’ continued dependency on manual processes as part of their IAM programs is a barrier to achieving maturity and reducing insider threats. Such a lack of maturity can lead to data breaches and security incidents caused by negligent or malicious insiders.

Recent examples of such events include former Tesla employees in 2023 who leaked sensitive data about 75,000 current and former employees to a foreign media outlet. In August 2022, Microsoft experienced an insider data breach where employees inadvertently shared login credentials for GitHub infrastructure, potentially exposing Azure servers and other internal systems to attackers.

According to the research, investments in IT security technologies are prioritized over IAM technologies. Without the necessary investments in IAM, organizations lack confidence in their ability to prevent identity-based security incidents. Respondents were asked to rate the effectiveness of their organizations’ tools and investments in combating modern identity threats (1 = not effective to 10 = highly effective), their confidence in the ability to prevent identity-based security incidents (1 = not confident to 10 = highly confident) and the priority of investing in IAM technologies compared to other security technologies (1 = not a priority to 10 = high priority).

Only half (50 percent of respondents) believe their tools and investments are very effective and only 44 percent of respondents are very or highly confident in their ability to prevent identity-based security incidents. Less than half of the organizations (47 percent of respondents) say investing in IAM technologies compared to other IT security technologies is a high priority.

Best practices in achieving a strong identity security posture

To identify best practices in achieving a strong identity security posture, we analyzed the responses of the 23 percent of IT professionals who rated the effectiveness of their tools and investments in combating modern identity threats as highly effective (9+ on a scale from 1 = low effectiveness to 10 = high effectiveness). We refer to these respondents and their organizations as high performers. Seventy-seven percent of respondents rated their effectiveness on a scale from 1 to 8. We refer to this group as “other” in the report.

Organizations that have more effective tools and investments to combat modern identity threats are less likely to experience an identity-based security incident. Only 39 percent of high performers had an identity-based security incident.

High performers are outpacing other organizations in the adoption of automation and advanced identity security technologies.  

  • Sixty-four percent of high performers vs. 37 percent of other respondents have adopted biometric authentication.
  • Fifty-nine percent of high performers vs. 34 percent of other respondents use automated mechanisms that check for compromised passwords.
  • Fifty-six percent of high performers vs. 23 percent of other respondents have a dedicated PAM platform.
  • Fifty-three percent of high performers vs. 31 percent of other respondents use IAM platforms and/or processes to manage machine, service and other non-human accounts or identities.

 High performers are significantly more likely to assign privileged access to a primary account (55 percent vs. 30 percent). Only 25 percent of high performers vs. 33 percent of other respondents use manual or scripted processes to temporarily assign privileged accounts.

 High performers are leading in the adoption of ITDR, ISPM and IGA platforms. 

  • Thirty-seven percent of high performers vs. 12 percent of other respondents have adopted ITDR.
  • Thirty-five percent of high performers vs. 15 percent of other respondents have adopted ISPM.
  • Thirty-one percent of high performers vs. 9 percent of other respondents have adopted IGA platforms.

Following are highlights from organizations represented in this research.

Identity verification solutions are systems that confirm the authenticity of a person’s identity, typically in digital contexts such as online transactions or applications. These solutions use various methods to verify a person’s identity and ensure that only authorized users have access to the resources they need.

Few organizations use identity verification solutions and services to confirm a person’s claimed identity. Only 39 percent of respondents say their organizations use identity verification solutions and services. If they do use identity verification solutions and services, they are mainly for employee and contractor onboarding (37 percent of respondents). Thirty-three percent of respondents say it is part of customer registration and vetting, and 30 percent of respondents say it is used for both employee/contractor and customer.

Reliance on manual processes stalls organizations’ ability to achieve maturity. Less than half of organizations (47 percent) have an automated mechanism that checks for compromised passwords. If they do automate checks for compromised passwords, 37 percent of respondents say it is for both customer and workforce accounts, 34 percent only automate checks for customer accounts, and 29 percent only automate checks for workforce accounts.
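One common way such a check is automated without handing the password itself to a third party is the k-anonymity range-query pattern used by the public Have I Been Pwned “Pwned Passwords” service: hash the password with SHA-1, send only the first five hex characters, and compare the returned hash suffixes locally. The sketch below assumes that public endpoint and its `SUFFIX:COUNT` response format; a production deployment would add rate limiting, response padding and error handling.

```python
import hashlib
import urllib.request


def sha1_upper(password: str) -> str:
    """Uppercase hex SHA-1 digest of the password."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()


def count_in_range_response(response_text: str, suffix: str) -> int:
    """Parse a 'SUFFIX:COUNT' range response and return the breach
    count for our hash suffix (0 if it is absent)."""
    for line in response_text.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0


def is_compromised(password: str) -> bool:
    """Query the Pwned Passwords range API with only the first five
    hash characters, so the full password hash never leaves the client."""
    digest = sha1_upper(password)
    prefix, suffix = digest[:5], digest[5:]
    with urllib.request.urlopen(
        f"https://api.pwnedpasswords.com/range/{prefix}"
    ) as resp:
        body = resp.read().decode("utf-8")
    return count_in_range_response(body, suffix) > 0
```

The same matching logic works for both customer and workforce accounts; only the point of enforcement (registration form versus directory password-change hook) differs.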

 To close the identity security gap, organizations need technologies, in-house expertise and resources. However, as discussed previously, more resources are allocated to investments in IT security. Fifty-four percent of respondents say there is a lack of technologies. Fifty-two percent say there is a lack of in-house expertise, and 45 percent say it is a lack of resources.

Security is not a priority when making IAM investment decisions. Despite many high-profile examples of insider security breaches, 45 percent of respondents say the number one priority for investing in IAM is to improve user experience. Only 34 percent of respondents say investments are prioritized based on the increase in regulations or industry mandates, and just 31 percent cite the constant turnover of employees, contractors, consultants and partners.

To achieve greater maturity, organizations need to improve the ability of IAM platforms to authenticate and authorize user identities and access rights. Respondents were asked to rate the effectiveness of their IAM platform in the user access provisioning lifecycle from onboarding through termination, and its effectiveness in authentication and authorization, on a scale of 1 = not effective to 10 = highly effective. Only 46 percent of respondents say their IAM platform is very or highly effective for authentication and authorization. Fifty percent of respondents rate the effectiveness of their IAM platforms’ user access provisioning lifecycle from onboarding through termination as very or highly effective.

Policies and processes are rarely integrated with IAM platforms in the management of machine, service and other non-human accounts or identities. Forty-four percent of respondents say their IAM platform and/or processes are used to manage machine, service and other non-human accounts or identities. Thirty-nine percent of respondents say their organizations are in the adoption stage of using their IAM platform and/or processes to manage machine, service and other non-human accounts. Of these 83 percent of respondents (44 percent + 39 percent), 39 percent say the use of the IAM platform to manage machine, service and other non-human accounts or identities is ad hoc. Only 28 percent of these respondents say management is governed with policy and/or processes and integrated with the IAM platform.

IAM platforms and/or processes are used to perform periodic access reviews, attestation and certification of user accounts and entitlements, but the work is mostly manual. While most organizations conduct periodic access review, attestation and certification of user accounts and entitlements, 34 percent of respondents say it is done manually with spreadsheets, and 36 percent say their organizations use custom in-house-built workflows. Only 17 percent of respondents say it is executed through the IAM identity governance platform. Only 41 percent of respondents say access to internal applications and resources is granted based on users’ roles and needs to streamline onboarding, offboarding and access management. An average of 38 percent of internal applications are managed by their organizations’ IAM platforms.
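Where teams do move beyond spreadsheets, the core of an automated access review is a simple diff between the entitlements users actually hold and the baseline their role entitles them to. This is a minimal, self-contained sketch with hypothetical user, role and entitlement names; no particular IGA product API is assumed.

```python
# Hypothetical inventory: entitlements actually held (pulled from
# target systems) and the expected baseline per role definition.
actual = {
    "alice": {"hr_portal", "payroll_admin"},
    "bob": {"hr_portal"},
}
roles = {"alice": "hr_analyst", "bob": "hr_analyst"}
expected_by_role = {
    "hr_analyst": {"hr_portal"},
}


def excess_entitlements(actual, roles, expected_by_role):
    """Return entitlements each user holds beyond their role's
    baseline -- the candidates an access review should flag."""
    findings = {}
    for user, held in actual.items():
        baseline = expected_by_role.get(roles.get(user), set())
        extra = held - baseline
        if extra:
            findings[user] = extra
    return findings
```

An identity governance platform performs essentially this comparison continuously and routes the findings to certifiers, rather than leaving reviewers to eyeball exported spreadsheets.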

Deprovisioning non-human identities, also known as non-human identity management (NHIM), focuses on removing or disabling access for digital entities like service accounts, APIs, and IoT devices when they are no longer needed. This process is crucial for security, as it helps prevent the misuse of credentials by automated systems that could lead to data breaches or system compromises.

Deprovisioning user access is mostly manual. Forty-one percent of respondents say their organizations include non-human identities in deprovisioning user access. Of those respondents, 40 percent say NHI deprovisioning is mostly a manual process. Twenty-seven percent of respondents say the process is automated with a custom script and 26 percent say it is automated with a SaaS tool or third-party solution.
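A common first automation step for NHI deprovisioning is an inventory sweep that flags candidates rather than disabling anything outright. The sketch below uses a hypothetical record format and idle threshold; a real script would pull accounts from the IAM platform or cloud provider APIs and route findings to owners for review before any access is revoked.

```python
from datetime import datetime, timedelta

# Hypothetical inventory of non-human identities.
service_accounts = [
    {"name": "ci-deploy", "last_used": "2025-08-01", "owner": "platform"},
    {"name": "legacy-etl", "last_used": "2024-11-02", "owner": None},
]


def stale_accounts(accounts, as_of, max_idle_days=90):
    """Flag non-human identities unused past the idle window or
    lacking a responsible owner -- deprovisioning candidates."""
    cutoff = as_of - timedelta(days=max_idle_days)
    flagged = []
    for acct in accounts:
        last_used = datetime.strptime(acct["last_used"], "%Y-%m-%d")
        if last_used < cutoff or acct["owner"] is None:
            flagged.append(acct["name"])
    return flagged
```

Separating detection from revocation keeps the automation safe to run on a schedule while the disable step still requires an approval.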

Few organizations integrate privileged access with other IAM systems, and when they do, the integration is often not effective. Forty-two percent of respondents say privileged access management (PAM) runs on a dedicated platform. Twenty-seven percent say privileged access is integrated with other IAM systems, and 31 percent of respondents say privileged access is managed manually. Of the 27 percent who integrate, only 45 percent rate the effectiveness of their organizations’ IAM platforms for PAM as very or highly effective.

To read the full findings of this report, visit GuidePoint’s website.

Minnesota assassin used data brokers as a deadly weapon

Bob Sullivan

I’ve called Amy Boyer the first person killed by the Internet…with only a hint of a stretch in my assertion. She was stalked and murdered by someone who tracked her down using a data broker…in 1999. I told her story in a documentary podcast called “No Place to Hide,” published five years ago, on the 20th anniversary of her death.

The dark events that took place in Minnesota last month show we’ve learned just about nothing, a solid 25 years after Amy’s unnecessary death.

When alleged assassin Vance Boelter left his home on June 13, he had a list of 45 state politicians in his car, and enough ammunition to kill dozens of them. He also had a notebook full of their personal information, including home addresses. That notebook also had detailed information on 11 different Internet data brokers — how long their free trials were, how he could get home addresses. Most of them have names you’ve probably seen in online ads — I’ve redacted them in the image above to avoid giving them any unnecessary publicity.

Boelter stalked his victims digitally. He ultimately killed Rep. Melissa Hortman and her husband, and shot a second state legislator and his wife, before his rampage ended. The horrific attack could have been even worse — and it was fueled, in part, by data brokers.

As stories of political violence mount in the U.S., a fresh spotlight is being shined on security for public officials — politicians, judges, government bureaucrats, even corporate executives. But America has failed for decades to take even basic steps to protect our privacy, failing again and again to pass a federal privacy law, even failing to do much about the hundreds of data brokers that profit off of selling our personal information.

What was the role of data brokers in this horrific crime and what more could be done to protect elected officials — protect all of us — going forward? I host a podcast for Duke University called Debugger, and in this recent episode, I talk with David Hoffman, a professor at Duke University and director of the Duke Initiative for Science and Society.

Would Boelter have found his victims without data brokers? Perhaps, perhaps not. We’ll never know.  But why do we seem to be making things so easy for stalkers, for murderers? Why do we pretend to be helpless bystanders when there are simple steps our society can take to make things harder for stalkers?

————Partial transcript————–

(lightly edited for clarity)

David Hoffman: We’ve known for quite a while that people have been actually been getting killed because of the accessibility of data from data brokers. These people search websites and people search data brokers are really the bottom feeders of the Internet economy. What we haven’t seen is something of such high profile as this particular instance, and it’s my hope that it’s going to serve as a catalyst for us to take some of the very reasonable policy actions that we could do to address this and make sure something like this doesn’t happen in the future.

Bob: It’s not just elected officials or CEOs of companies that are at risk for this, right? Who else might be at risk from digital stalking and from the information that can be gleaned from a data broker?

David Hoffman: I think some of the cases that we’ve seen have been, for instance, victims of domestic violence and stalking. But it can be just about anyone who, for one reason or another, has someone to fear … who can find out who they are, where they live, and other personal information about them and their family.

Bob: I know Duke has done some research on data brokers and their impact on national security and other issues. What kind of research have you done and what have you found?

David Hoffman: We’ve actually led a program on data broker research for six years now, and what we have done is shown the value that people are providing for the data so that… it actually has economic value, people are paying for it, and that they are creating the kinds of lists and selling them that are horrific.

Let me give you an example. We have found that there are entities out there that are collecting lists for sale of personal information about veterans and members of the military. We have found that there are people out there creating lists about people who are in the early stages of Alzheimer’s and dementia, and those people are selling those lists to scam artists, particularly because those people are at risk.

So we have actually done research where we’ve gone out and we have purchased this data from data brokers, and then we have analyzed what we have received and we see a tremendous amount of sensitive information, including information about sexual orientation of individuals and healthcare information.

Bob: Are there national security risks as well from the sale of information at data brokers?

David Hoffman: You can imagine for the list that I described for members of the military and veterans …. not just information about them, but understanding information about their families, the issues that there could be for blackmail and for people trying to compromise people’s security clearances and get access to information.

Bob: I know there’s long been this perception that, you know, “You have no privacy, get over it.” There’s this helpless feeling many of us have that our information is out there. It’s hard for me to imagine sitting here…How could I make sure no one could find my home address, for example? Is there anything that Congress could do or policymakers could do to make this situation any better?

David Hoffman: Absolutely. I think there’s a number of things that people could do. So first of all, we have to take a look at where these entities are getting a lot of this information.

You know, for decades we have had public records that actually store people’s addresses, but before those records were digitized and made available on the internet, you would have to go to a clerk’s office in an individual county or city, know what you were looking for, and be able to file an access request to get that information.

Now what we have done all across the United States is provide ready and open access to all of that information so that these nuts can access it en masse and be able to process it and then to further sell it. We need privacy laws that include the protection of public records because we include (personal information) when we purchase real estate or small business filings that we do, or a court case that we might be involved in. Yes, those produce public records, but we never intended those to be readily available to everyone at a moment’s notice on their computer, or by automated bots that will go and collect them and then be able to provide that information to anybody who wants to pay a relatively small amount of money, usually under $20.

Bob: And in many cases free. One of the chilling elements of the affidavit I read … in the Minnesota case … he’s got a list of…how long the free trials are, what information you can get from each site… so you often don’t have to pay anything to get this kind of information, right?

David Hoffman: That’s absolutely right. And this just demonstrates once again, how important it should be that we have a comprehensive privacy law in the United States like they have in almost every other developed country around the world that would provide … protection for this kind of information. This isn’t something that’s going to chill innovation. This is not the kind of innovation that we need…people to actually create sort of spy-on-your-neighbor websites where you can learn all of this about anyone at that point in time.

We can still have innovation. We can still drive social progress with the use of data while providing much stronger protections for it.