Celebrating the 20th Anniversary of Ponemon Institute’s Cost of a Data Breach Report

Twenty years ago, companies were awakening to the very real threat that their sensitive and confidential data had been, or could be, targeted by cybercriminals. It was clear that such an incident would not only jeopardize the privacy of their customers and business partners, but could also mean significant financial harm.

When discussing a possible research project, our client asked if there was any way we could calculate the cost of a data breach. The idea was that such a calculation would help IT and IT security practitioners prepare for the possible consequences of a data breach, and also persuade the C-suite and board members to budget enough money for sufficient investments in technologies and staffing. On both counts, we have heard that the research has succeeded.

Over the years, the research has evolved based on what we have learned from organizations that have been breached.  In the typical study, we speak with IT, compliance and information security practitioners who are knowledgeable about their organization’s data breach and the costs associated with resolving the breach.

We are often asked: how do you calculate the cost? To calculate the average cost of a data breach, we collect both the direct and indirect expenses incurred by the organization. Direct expenses include engaging forensic experts, outsourcing hotline support and providing free credit monitoring subscriptions and discounts for future products and services. Indirect costs include in-house investigations and communication, as well as the extrapolated value of customer loss resulting from turnover or diminished customer acquisition rates.
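The direct-plus-indirect model described above can be sketched in a few lines of code. All category names and dollar figures below are hypothetical examples for illustration, not figures from the study:

```python
# Illustrative sketch of the direct/indirect breach-cost model.
# Category names and amounts are hypothetical, not study data.

def breach_cost(direct: dict, indirect: dict) -> int:
    """Total cost of one breach: direct expenses plus indirect expenses."""
    return sum(direct.values()) + sum(indirect.values())

# One hypothetical breached organization
direct = {
    "forensic_experts": 450_000,
    "hotline_support": 120_000,
    "credit_monitoring": 300_000,
    "future_discounts": 80_000,
}
indirect = {
    "inhouse_investigation": 200_000,
    "communications": 90_000,
    "lost_customers": 1_500_000,  # extrapolated value of turnover
}

print(breach_cost(direct, indirect))  # 2740000
```

Averaging these per-breach totals across all participating organizations yields the headline figure reported each year.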

In this year’s study, the global average cost was $4.4 million. Sixteen percent of organizations reported breaches involving attackers using AI, most often for phishing or deepfake impersonation, signaling an escalating AI arms race. U.S. average costs reached a record $10 million, fueled by the nation’s rising detection expenses and stricter regulatory penalties. In fact, more than one-third of U.S. organizations paid breach fines that averaged more than $250,000.

We hope you will download our 2025 report and look forward to hearing from you.

Click here to download the report.

Molly White on the state of crypto consumer protection

Bob Sullivan

There are plenty of reasons to be skeptical of cryptocurrency, and there’s no better skeptic than Molly White, a programmer-turned-publisher who runs the popular newsletter “Citation Needed.”  So I was delighted to interview her recently for The Perfect Scam podcast I host.

There’s no bias like confirmation bias, and there’s no confirmation bias like someone who has invested in something – particularly something new and hard to understand.  In the long-running argument about crypto — is it a world-changing technology or a Ponzi scheme? — investors can’t help but root for one side of that debate.

Of course, this is more than just confirmation bias.  In a Ponzi scheme, faith=money. Other people’s faith.  So long as there are greater fools around, early Ponzi owners and their investments are safe. Only when the music stops playing do people get hurt.  These are all powerful forces that push rational people to do irrational things. And of course they react emotionally to anyone who wants to rain on their parade, who might hasten the end of the music with “pessimism.”

So, Molly White isn’t very popular among crypto investors.  If you really listen to her, I think you’ll find her quite reasonable, however.

Before we get to Molly in this episode, we speak with Glen Fishman, an early crypto investor who recently had almost $200,000 stolen from him in a sophisticated phishing scam.  Thanks to quick investigative work, federal agents were able to recover about half of the stolen crypto, but he was initially told it would take about a year to get his money back. It had been “removed” to an El Salvador-based exchange.

We tell Glen’s story to demonstrate that cryptocurrency holders do not enjoy many of the basic consumer protections that protect other financial account holders.  In fact, such protections fly in the face of the libertarian ideals that fuel the crypto world.  I’m not against this in any kind of philosophical way, but as a pragmatic matter, it’s a disaster. We see this in the rise of crypto ATMs, which have finally been outed as (almost entirely) a bank network for criminals. Unregulated money systems always devolve into cesspools of crime.  Some grow out of this phase, and I do wish this for cryptocurrency. But wishing is not a plan, and there are a lot of Glens out there who wish they had understood this sooner.

Sophisticated financial tinkerers with money to burn are welcome to invest in crypto, of course, just as they are welcome to enjoy themselves in Las Vegas. But I worry: the investment bubble that is crypto relies on constantly recruiting more participants, and once again we are seeing an aggressive push into the consumer market.  I’m quite certain many buyers do not realize the extent to which they are playing with fire.

I don’t doubt that when the dust settles, there will be some real winners, and there will be a couple of interesting use cases for crypto. But in large part, cryptocurrency investing is still mere speculation, and when the bubble bursts, there’s going to be a lot of collateral damage. Many innocent bystanders will be hurt, as they always are.  We will learn that crypto has infiltrated some unexpected parts of the economy (like state pension funds), and I believe the fallout will be even wider than many pessimists expect.  We should be doing a lot more to contain this highly predictable damage right now. (Like this!)  Instead, for fairly obvious reasons, the current administration is smashing crypto guardrails. We all know how this story ends. We saw it in 2001 and 2009.  It’s a shame our memories are so terrible.

I hope you’ll listen to the full conversation. But in case you aren’t into podcasts, here’s a partial transcript of our conversation, very lightly edited for clarity.

——————-Partial transcript——————-

[00:38:22] Bob: There is an element in crypto baked right into its nature, which makes it more susceptible to theft of large amounts of money. In a way, it’s kind of built for that. We all know that it’s not just passwords that protect people’s financial accounts in the US, that there’s magic software that monitors transactions, particularly credit card transactions, but all transactions, and if somebody shows up and moves $178,000 suddenly out of an account, a red flag would pop up. We all have to trust that financial institutions are good at this, some aren’t, but should I trust that crypto exchanges are good at this? Would I have any reason to believe that?

[00:39:03] Molly White: Again, it really varies based on the company, but I would say that broadly in crypto, there’s actually a lot of resistance to the idea of placing limits on the types of transactions people can make and the amounts that people can transfer. The same types of limits that prevent someone from having their bank account drained by a bad actor are sometimes seen in the crypto world as an unfair infringement on your right to do what you please with your assets. And so there’s this sort of fragile balancing act that these companies have to take where they don’t anger their customers who feel like they should have access to the entirety of their accounts at any time, while also trying to prevent some sort of bad actor from completely draining the account. And so I would say that generally speaking, a lot of these programs in crypto exchanges are not as robust as in banks and other financial firms partly for that reason, partly because these companies are in some cases just less sophisticated, and then there’s also the issue where not everyone stores their crypto assets in a centralized account at an exchange like Coinbase or any of the various competitors that can impose those limits. And if you are storing your crypto assets in a wallet that is fully under your control and not at a third-party company, then there is no limit whatsoever on who can transfer the funds or to where or in what period of time, and there is absolutely no protection of that kind.

[00:40:40] Bob: That kind of transaction monitoring is basically against the whole ethos of cryptocurrency, right?

[00:40:46] Molly White: For many people it is. I think that as crypto has evolved and become more popular, we are seeing more people who appreciate the types of intervention by these third-party exchanges or institutions that do add some degree of customer protection, but a lot of people do believe that ultimately these are my assets, I should be allowed to do anything I want with them, to transfer them immediately in any amount without anyone stepping in the way and saying, no, you’re not allowed to do what you want with your money. This is a very sort of libertarian ethos that underlies a lot of the crypto philosophy, where people really don’t like the idea of anyone getting in the way of them and their money, whether it’s a government or a bank or some sort of compliance system or transaction monitoring. And so you have this sort of social opposition to these types of things, as well as limits on how far these companies are willing to go to impose these types of systems.

[00:41:47] Bob: I think this is a really important point that I want to drive home for listeners, because okay, it’s one thing if you’re a tech person, you’re a libertarian, and go to a casino on the weekends for all I care, and you can invest in crypto for all I care. But when regular people who aren’t sophisticated, as we’re now in the next hype cycle of this, become more and more involved in crypto, and they go to websites that might resemble a financial institution that they’re used to, they might just presume there are protections around the transactions; I think that would be normal. I think it’s important to stress to them that they’re out on their own when it comes to crypto. Can you talk about that a little bit?

[00:42:23] Molly White: Absolutely. This is something I really try to drive home for people because I think, especially in the US, we’ve become very comfortable with some amount of protection around the financial activities that we engage in whether it’s banks offering depository insurance or transaction monitoring in our financial institutions, or oversight from securities regulators making sure that the stock exchange is a fair place to, to buy and sell assets. We become used to it and we begin to expect it everywhere, especially if the place we’re looking at really resembles a bank or a stock exchange or something like that. But ultimately, those protections are not there in crypto despite the similar appearance. We saw a really stark example of this in 2022 when a company called Celsius collapsed, and that was a crypto brokerage that had been advertising itself to customers as better than banks and providing services that banks would normally provide but describing themselves as the alternative, the superior alternative to a bank. Ultimately, it turned out that there was a lot of shady business happening at that company. The company collapsed and went bankrupt, and a number of customers wrote letters to the bankruptcy judge explaining how it had affected them. And I read multiple letters throughout that bankruptcy process that explained that: I didn’t think this could happen because this company was based in the United States, I thought US regulators were making sure everything was above board. Many people said they thought they had FDIC insurance on their assets in those accounts even though that type of depository insurance is not available in the crypto sector. So people thought that they were taking on a lot less risk than they actually were, and ultimately it destroyed some people’s lives. 
And this is really an issue throughout crypto where people just become used to these types of protections and they can’t fathom the idea that there is this total wild west financial sector that is advertising to everyday people, promising them the world, but there are really no safeguards there.

[00:44:44] Bob: Okay, so I hope you’re getting the message that if you invest in crypto, well you’re kind of on your own. And that’s okay if you do so with your eyes wide open. But there’s something else about crypto that’s important to understand; there’s just a lot of crime that travels across the network.

[00:45:01] Molly White: Crypto has become the choice for criminals doing any sort of cybercrime, essentially. Because of its traceability challenges for law enforcement and others, and because of the irreversibility of transactions, it’s a perfect asset for criminals, and now you never see ransomware attacks, for example, happen outside of crypto. It’s always demands for Bitcoin or some other type of crypto asset because it’s just the perfect asset for that. We’re seeing increasing numbers of investment scams where people are being told that they can make a fortune overnight because it’ll go into Bitcoin or some other crypto asset that people have heard about, and they’ve heard about people getting rich off of that, and they think maybe this is plausible. I think there are a ton of different reasons that criminals use it, but it has been very popular for cyber criminals, and if you look at the proportion of crime that happens using cryptocurrency, it is enormous compared to the number of people who are actually using crypto or investing in crypto; criminal activity is a substantial portion of that. And so I think it’s really been a boon for criminals, and it has caused this situation where everyday people who are using crypto or putting money into crypto have to be on high alert at all times because scams and hacks and frauds are just a part of the ecosystem. People who are enthusiastic and knowledgeable about crypto talk about the scams as though they’re just a normal day-to-day thing. It’s the cost of doing business. Pretty much everyone who has used crypto a substantial amount will admit they’ve been scammed at some point. So it’s really a free-for-all out there right now.

[00:46:52] Bob: I do sometimes wonder, there’s so much fraud in crypto that maybe it wouldn’t exist, or would barely exist, if it weren’t encouraging cybercrime.

[00:47:00] Molly White: Yeah, it’s hard to say. I think it’s hard to assess that alternative scenario, but a substantial amount of activity in cryptocurrency is criminal. We are seeing more adoption of crypto broadly, and it’s clear that these days there is some institutional demand for crypto, there’s certainly been the retail enthusiasm around it, and so I wouldn’t say it’s fair to say that all of cryptocurrency is criminal activity or criminal behavior, but it certainly is a shocking amount of it. And when you see crypto ATMs, you’re right, that is an enormous conduit for crypto thefts. There was a 99% increase from 2023 to 2024 in fraud involving cryptocurrency ATMs, according to the FBI. $250 million was reported lost just in 2024, mostly by victims who are over the age of 60. And it was good to see that there has been a little bit of action against these crypto ATM operators. There was recently a lawsuit from the Attorney General for the District of Columbia against a major ATM operator, explaining that the fraud protections are completely insufficient and that these crypto ATM companies are profiting off of thefts a lot of the time. They are taking large chunks from these transactions in which people are being scammed. That is how they are making their money. And there has been some attention to it, but I would say it has not been nearly enough.

[00:48:29] Bob: So I covered tech stocks during the dot com bubble, and I wrote a lot about the housing market during the housing bubble that preceded the Great Recession, and I’m here to tell you, people who are making a lot of money really hate people who throw cold water on their investment bubbles. I still remember some of the hate mail I got. Well, Molly is in the midst of that right now.

[00:48:52] Bob: You’re a lone voice out there, one of the few voices. What is that like?

[00:48:55] Molly White: It’s a strange world. It’s certainly not a popular thing to do if you’re a crypto enthusiast. People don’t particularly like what I do. But I think it’s important to say, look, this technology, this financial asset has very serious issues, everyday people are suffering as a result of it, there has not been sufficient enforcement or regulation around cryptocurrency, and we really need to be careful around this type of asset class. I am not opposed to crypto existing. I support anyone’s right to put money into cryptocurrency or speculate on the price of whatever token they’re interested in. I think that’s fine if people do so with all the facts, if they have proper information to make informed decisions about what they’re doing with their money, and if they can trust that even if those assets go up or down in price, they will still be there tomorrow. But that is not the state of crypto at this point in time. At this stage, people not only have to understand that they’re taking on risk when it comes to the inherent volatility of most crypto assets, which go up and down in price quickly and dramatically. They also have to worry about the tokens that they’re putting money into being scams themselves. We’ve seen entirely new words created, like “rug pulls,” for the types of crypto crimes where people will create crypto assets and promise people the world, and then just take off overnight with all the assets, leaving the investors with nothing. And then finally, there’s the concern that, you know, even if you do have these crypto assets and you’re willing to take on the risk with the volatility, you may lose access to those assets through some sort of scam, or the company might go bankrupt and you’ll be left with nothing. We’ve seen that happen over and over again. And I think that’s just an unacceptable level of risk to ask people to take on.
There’s a serious issue with information being available to investors to make informed decisions. And frankly, everyday people who are being encouraged to get into crypto are being brought in with an extreme disadvantage, and ultimately, end up often serving as exit liquidity for someone who is more sophisticated and potentially engaged in criminal activity.

[00:51:18] Bob: Okay, I need you to slow down on that last set of sentences there, ’cause I think that’s really important. Exit liquidity, tell me what you mean by exit liquidity.

[00:51:25] Molly White: So if you launch a cryptocurrency token and you want to scam somebody, you can’t actually make any money off of it unless you convince someone to buy it. And so that’s really what I’m referring to with exit liquidity is, you know, these people who are told that this is the hot new token and you buy it now you’re going to get in early and then make a ton of money. They are often exit liquidity, meaning that once they buy in, the person who created the token sells all of the tokens and takes off with their money essentially. It causes the tokens that these people purchase to go to zero so they can no longer get out of those positions, and the person who created the token makes a lot of money.

[00:52:08] Bob: But you also said before that, that the retail investors are at a severe information disadvantage, especially in this situation. Can you talk about that a little bit more?

[00:52:16] Molly White: So a lot of the regulations that exist in the financial system that we’re used to when it comes to securities or commodities or various forms of investments that people make beyond just holding currency, a lot of those regulations come down to making sure that everyone is on a fairly level playing field, that you understand the risks that you’re taking on. If you choose to, say, buy a stock, every stock that is issued on the public stock exchanges has this whole literature that is published on a regular basis that explains how the company’s doing, and the outlook for their future business, and the risks that are involved, and the people who are running the company, and you know a lot about them and their background. That type of information is not available in crypto. Oftentimes, cryptocurrency projects are run by anonymous people; you don’t even know who is running the company that you’re being told to get involved with. You don’t know anything about who’s backing these companies. You don’t know about their business practices. You don’t know anything about whether they will continue to stay in business or what type of business they plan to do. It’s really just marketing. You get to read their marketing materials. There is really no oversight even to ensure that their marketing materials are accurate, and so it is just a breeding ground for scams, because people can anonymously launch a cryptocurrency, promise people that it’ll change the whole system and make billions of dollars, and then just take off with the money, and there’s really no oversight or enforcement stopping them.

[00:54:03] Bob: So I realize I’m invoking a legal term, and we’re not a legal podcast, but that sure sounds like a Ponzi scheme to me: the early people make money, and the people at the end are left without a chair. Why is this like, or not like, a Ponzi scheme?

[00:54:19] Molly White: Many of these are Ponzi schemes. Just plainly speaking, crypto Ponzi schemes are a huge amount of the crypto fraud that we see. I would not say that crypto itself is a Ponzi scheme, but it is a vehicle for Ponzi schemes and we see many of them.

[00:54:36] Bob: As the person holding the pin next to the bubble, somebody’s going to blame you when the bubble bursts or when people lose money. Have you had that experience?

[00:54:44] Molly White: Absolutely. Yeah, people really don’t like it when you rain on the parade, but ultimately, I think that any asset should be able to speak for itself, and if you have to threaten people not to be critical of your asset, then there’s probably something seriously wrong. And like I said, a lot of the issues in this sector rely on, or stem from people not having adequate information about the token that they’re investing in or the company or the person behind the company. And so the more people are trying to hide that information, the more skeptical I get that something might be going on here that’s not aboveboard. But it’s very common unfortunately in the crypto world for people to attack those who are critical or skeptical, or even just asking questions about a project because so much of crypto’s value comes from the perception that this is an exciting token or an exciting project and, you know, the second that someone introduces doubt there, it can cause prices to go down.

[00:55:50] Bob: Okay, so all this skepticism is well and good, the fear, the warnings, all that, but I have a friend who seven years ago invested $1,000 in X and he just bought a boat, so why shouldn’t I do this? What do you say to a person who comes to you with that?

[00:56:06] Molly White: Yeah, I hear that a lot, and you could say the same thing about someone who invested in a Ponzi scheme or any sort of scam. There are people who make money from scams, that’s why they exist. And sometimes it’s not the people who started the scam, sometimes it’s just people who got in early. But that does not mean that every person who buys in is going to be the winner, and in fact, it is fairly rare for that to happen. When it comes to crypto, there are certainly cryptocurrencies that are not scams. I’m not trying to claim that every crypto asset is inherently a scam, but there is an enormous amount of risk that people are taking on. And you can make a similar statement about, oh, I know someone who bought Apple stock decades ago, and now they’re a billionaire. It happens. People sometimes choose the right token or the right stock or they get in at the right time. But you do need to pay attention to the sort of overall odds and the likelihood that will happen again. These days, a lot of people who are purchasing crypto assets are actually getting in pretty late. Many times they are getting in when the hype is at an all-time high, which often correlates with prices being at all-time highs. And so as more and more people get excited, they buy the marketing around how they can get rich just like some early investor, and they often are buying at fairly high prices. Ultimately, crypto goes through these boom-and-bust cycles where we see it go from tens of thousands of dollars to a fraction of that amount. And oftentimes that is when people lose serious amounts of money. We saw it happen in 2022; now crypto prices are back up, and I suspect it’s only a matter of time before we see it happen again.

 

 

The State of Cyber Resilience

Attacks against organizations’ data in storage are frequent and costly. Data storage refers to the methods and technologies used to retain digital information. On average, organizations experience one attack against data in storage each month, and the most significant attacks reported in the research averaged $5 million. As a result, 63 percent of respondents say securing data in storage is very or extremely important compared to other security initiatives.

Sponsored by Pure Storage, Ponemon Institute surveyed 610 IT and IT security practitioners in the United States who are knowledgeable about their organizations’ data storage security posture.

Automation is considered key to achieving cyber resilience in data storage. Cyber resilience is the capacity of an enterprise to maintain its core purpose and integrity in the face of cyberattacks. In the context of this research, a cyber resilient enterprise is one that can prevent, detect, contain and recover from a plethora of serious threats against data, applications and IT infrastructure. The key to achieving a high level of cyber resilience in data security storage is automation, according to 66 percent of respondents.

Respondents were asked to rate their cyber resilience on a scale from 1 = low resilience to 10 = high resilience. Only 47 percent of respondents rate their cyber resilience as high to very high (7+ on the 10-point scale). Fifty-five percent of respondents say cyber resilient data storage has value or high value (7+ on the 10-point scale).

 Securing sensitive data in storage is a priority because 36 percent of this data is considered mission critical and on average it can take 12 days following a data security incident to recover mission critical applications. Mission critical applications and data are essential for organizations’ operations and survival. If not recovered, operations could be significantly impacted or brought to a complete halt.

The following findings illustrate the challenges to securing data in storage:

 The exploitation of vulnerabilities and ransomware are the two primary reasons a cyber incident occurs. Organizations represented in this research had an average of 7 cyber incidents that resulted in data loss in the past two years. Although root causes are challenging to identify, 63 percent of respondents say the root cause was the exploitation of vulnerabilities and 61 percent say it was ransomware.

 Insiders are putting data in storage at risk. According to the research, an average of 5,433 employees and third parties have access to sensitive data in storage.  In the past two years, an average of 7 non-cyber incidents resulted in the loss of data. To minimize the threats from non-cyberattacks, organizations should take steps to prevent employee error or negligence (74 percent of respondents) and system hardware or software failures (69 percent of respondents).

 The biggest cost following a cyberattack against data in storage is the recovery of the up-to-date backups of critical data. Respondents were asked to calculate the most significant cost due to a cyberattack against data in storage. The four categories of the total cost of $5 million and the percentage respondents allocated to each cost are recovering up-to-date backups of critical data (31 percent), repairing or replacing affected systems and applications (26 percent), detecting and containing the incident (23 percent) and testing to ensure restored systems are functioning correctly and any vulnerabilities have been addressed (20 percent).
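As a quick arithmetic check, the four percentages above sum to 100 percent of the $5 million total; the dollar allocation works out as follows (category labels shortened for illustration):

```python
# Allocating the $5 million average attack cost across the four
# cost categories, using the percentages reported in the survey.
total_cost = 5_000_000
shares = {
    "recovering up-to-date backups": 0.31,
    "repairing/replacing systems": 0.26,
    "detecting and containing": 0.23,
    "testing restored systems": 0.20,
}
dollars = {name: total_cost * share for name, share in shares.items()}
for name, amount in dollars.items():
    print(f"{name}: ${amount:,.0f}")
# The largest single share, recovering backups, comes to $1,550,000.
```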

 Protection of data requires an accurate classification of the types of data stored. Only 45 percent of respondents say they know how much data is structured or unstructured. Fifty-three percent say stored data is structured data and 47 percent say it is unstructured. On average, 36 percent is considered “dark” or unclassified.

 Organizations are challenged to consistently manage data across all environments. Only 41 percent say they have a good or a high level of ability to manage data across all environments. Fifty-three percent of respondents say they have a good or a high level of ability to minimize downtime and data loss in the event of an attack, and 49 percent say they are very or highly effective at doing so.

 The most important indicators of cyber resilience in data storage security are recovery SLAs, RTOs and RPOs. Fifty-two percent of respondents measure cyber resilience in data security. Of the respondents that measure cyber resiliency, 59 percent say they measure consistency in achieving recovery SLAs.

Achieving recovery SLAs is critical to ensuring business operations can resume with minimal disruption after an incident, minimizing financial and operational damage, setting clear measurable goals for service providers and customers, and selecting the most cost-effective solutions.  Fifty-six percent say they validate Recovery Time Objective (RTO) and Recovery Point Objective (RPO). RTOs and RPOs ensure that recovery efforts align with business needs by setting clear goals for how quickly systems should be back online and how much data loss is tolerable.
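To make RTO/RPO validation concrete, here is a minimal sketch; the targets and timestamps are hypothetical, not values from the study:

```python
from datetime import datetime, timedelta

# Hypothetical recovery targets
RTO = timedelta(hours=4)      # max tolerable time until systems are back online
RPO = timedelta(minutes=30)   # max tolerable window of lost data

incident    = datetime(2025, 6, 1, 2, 0)   # outage begins
last_backup = datetime(2025, 6, 1, 1, 45)  # most recent good backup
restored_at = datetime(2025, 6, 1, 5, 30)  # systems back online

rto_met = (restored_at - incident) <= RTO    # 3.5 h of downtime: within RTO
rpo_met = (incident - last_backup) <= RPO    # 15 min of data lost: within RPO
print(rto_met, rpo_met)  # True True
```

Running checks like this against real incident timelines is one straightforward way to validate that recovery targets are actually being met.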

Organizations prepare for the likelihood of a ransomware attack. Organizations have disaster/cyber recovery plans in place to deal with cyberattacks. Seventy percent of respondents say they have a plan for ransomware attacks, 65 percent say they have a plan for distributed denial of service (DDoS) attacks and 61 percent have plans for malware, including spyware, viruses, trojans and worms.

Controlling employees’ and third parties’ access to sensitive data in storage is important to preventing non-cyberattacks. The primary root cause of a non-cyberattack was employee error or negligence.  Multi-factor authentication access controls (71 percent of respondents) and role-based access controls (RBAC) (63 percent of respondents) are used to protect stored data.

The most important control used in data storage is integration with SecOps tools such as security information and event management (SIEM), Extended Detection & Response (XDR) and security orchestration, automation and response (SOAR). XDR is a cybersecurity platform that unifies and automates security data collection, analysis and response across multiple layers of an organization’s environment, such as endpoints, networks, cloud workloads and email. SOAR seeks to alleviate the strain on IT teams by incorporating automated responses to a variety of events.

The benefits of AI in securing data in storage.  Forty-five percent of respondents say the deployment of AI-based security technologies will improve their organization’s data security storage, and 53 percent of respondents say AI simplifies data security storage by performing tasks that are typically done by humans but in less time and at lower cost.

Despite the benefits, the two most significant risks AI poses to data storage security are incorrect predictions due to data poisoning (50 percent of respondents) and poorly configured or misconfigured systems due to over-reliance on AI for cyber risk management.

Click here to read key findings and the full report at PureStorage.com

Why banana bread is the solution to the world’s fraud problem

Bob Sullivan

By any measure you can find, fraud is soaring in the U.S. and around the world.  I spent an hour on WHYY radio recently discussing the causes for this, but I can boil it down to one concept: big, uncaring companies have dehumanized customers and employees alike, creating a perfect playground for criminal mischief.

I write a lot of stories that reveal how much systems let people down and set them up to be victims of crimes. You’ll often hear me lament that big tech companies or financial institutions don’t do more to stop crimes.

Today, I have a different story to tell at The Perfect Scam podcast. It’s about a crime that *almost* happened, but didn’t — thanks in large part to well-trained bank employees who followed a well-designed system…with care.  But there’s another important element to this near-miss crime that plays a huge role: It happened in a small community, at a small bank, where employees had a personal connection to the victim.  Like this:

“The young man who is an assistant manager up there went to high school with at least one of my grandsons.”

And this:

“The lady at the bank, the one who was the person who called me initially, my son had a coffee truck in Rogersville for about a year and a half, and this bank manager loved his coffee. So she had come through his line so many times, and so knew me because of that.”

It’s human nature: When you know someone, or know someone who knows them, you are far more likely to step in and ask questions when something seems amiss. After all, who could go to bed at night knowing they helped criminals steal $25,000 from an 83-year-old woman who is a pillar of the community?

I realize I’m telling this story upside down, giving you the punchline without the setup. That’s because the punchline *is* the story here. It’s the only part of this story that is a surprise. The rest follows an all-too-familiar refrain. Listen for yourself by clicking here. But here’s the setup.

Samuel, the would-be victim, has lived in this small town outside Springfield, Mo., for most of her 83 years.  She got a menacing call from someone claiming he was from a federal agency investigating a crime, and he needed her help.  Many calls later, Samuel was manipulated into a bank visit where she would ask for $25,000 to be wired to a nonexistent company.  But the teller and manager at the first branch asked so many questions that Samuel left without the money and headed for another branch.  By the time she got there, the bank had already put an alert on her account, and tellers put up multiple speed bumps. Ditto for branch No. 3.  Critically, bank employees did this with kindness, not dismissiveness or ageism, because the criminal had warned Samuel that a bank employee was “in on it.”  As I’ve written elsewhere, rudeness only pushes victims into the arms of criminals, who are very good at sounding compassionate.

The bank also thoughtfully notified Samuel’s children, who are also named on her account. The kids got mom off the phone with the criminal, got her home, and eventually persuaded her that she was talking to a criminal.  The whole episode was over in a couple of days, and the family didn’t lose a dime.

As a show of thanks, Samuel made banana bread and took some to each bank employee who played a role in foiling the crime.

I love a happy ending. And I love banana bread. I’m only half kidding when I suggest in this episode that baked goods are the answer to America’s fraud problems.  What I’m suggesting, of course, is that the human touch is missing from most cybersecurity initiatives.  We spend billions on software (we’re calling it AI now) but we overlook the front-line workers who are often the difference between disaster and a close call.

I realize Linda Samuel’s story has a unique set of circumstances.  Many of us don’t live in a town where we can walk or quickly drive to a small, community bank.  Years of industry consolidation have ensured that.  In many cases, we only have a choice of one or two gigantic banks.  This is a mistake, and if you’re curious about the problem of hyper-consolidation and monopoly power, I’d invite you to visit the American Economic Liberties Project and the work of Matt Stoller, author of the “BIG” Substack newsletter.

For now, suffice to say it’s unlikely Linda Samuel’s story would have had the same ending if her money had been parked at Bank of Gigantica.

I do know many, many cybersecurity workers at these large institutions who care a lot about fraud, and often write code that stops crimes. When I have a chance to speak to tech worker audiences, I often remind them that no firefighter wins an award for a house fire that is stopped because a fire inspection forced a safety upgrade — the work these individuals do can be just as invisible and thankless, so I thank them for it.

But I’ll repeat myself — poor customer service is our greatest cybersecurity vulnerability.  This story makes that point by showing the alternative: good customer service can be our best crime-fighting tool.

We’re never going to get a handle on fraud unless banana bread, once again, is part of the equation.  Know Your Customer shouldn’t be a check box on a compliance form.  It should be standard operating procedure.   And it’s worth the investment.

 

The 2025 Study on Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care

Larry Ponemon

Healthcare organizations’ ability to protect confidential patient data and ensure the highest quality of medical care is increasingly at risk, underscoring the need for a more human-centric security approach, our Cyber Insecurity in Healthcare study has found.

This fourth annual report was conducted to determine the healthcare industry’s effectiveness in reducing human-targeted cybersecurity risks and disruptions to patient care. With sponsorship from Proofpoint, Ponemon Institute surveyed 677 IT and IT security practitioners in healthcare organizations who are responsible for participating in such cybersecurity strategies as setting IT cybersecurity priorities, managing budgets and selecting vendors and contractors.

Healthcare organizations remain frequent targets, with cyberattacks continuing to disrupt patient care. According to the research, 93 percent of organizations surveyed experienced at least one cyberattack in the past 12 months. For organizations in that group, the average number of cyberattacks was 43, a 3-point increase from 40 in 2024.

The cyberattacks analyzed in this research, which took place over a two-year period, are cloud/account compromises, supply chain attacks, ransomware and business email compromise (BEC)/spoofing/impersonation. Among the organizations that experienced the four types of cyberattacks, an average of 72 percent report disruption to patient care, a 3-point jump from 69 percent in 2024.

While the cost of cyberattacks has declined, they remain a significant financial burden. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost for the most expensive cyberattack was $3.9 million, down from $4.7 million in 2024 but still substantial. This includes all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

Operational disruptions stemming from system availability problems remain the most expensive consequence. The following is a breakdown of the five cybersecurity cost categories for the single most expensive cyberattack as well as their average cost:

  • Disruption to normal healthcare operations cost an average of $1,210,172, a decrease from $1,469,524 in 2024.
  • Users’ idle time and lost productivity dropped to $858,832 from $995,484 in 2024. These costs were due to downtime or system performance delays.
  • The cost of the time required to ensure the impact on patient care is corrected decreased to $702,680 from an average of $853,272 in 2024.
  • The damage or theft of IT assets and infrastructure averaged $624,605, down slightly from $711,060 in 2024.
  • Remediation and technical support activities, including forensic investigations, incident response activities, help desk and delivery of services to patients, saw the largest drop (28.6 percent), from $711,060 in 2024 to $507,491 in 2025.
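The year-over-year declines above are simple percentage changes. As an illustration (the dollar figures are copied from the bullets; the function itself is ours), the largest drop can be checked like this:

```python
def pct_change(old: float, new: float) -> float:
    """Percentage decrease from old to new (positive means a drop)."""
    return (old - new) / old * 100

# Remediation and technical support: $711,060 in 2024 -> $507,491 in 2025
print(round(pct_change(711_060, 507_491), 1))  # 28.6
```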

 For the first time, this year’s study examined plans to secure clinical operations in the cloud. Thirty percent of respondents say their organizations have moved clinical applications to the cloud. Forty-five percent say their organizations will move clinical applications to the cloud in the next six months (9 percent), within the next year (8 percent), in the next one to two years (15 percent) or eventually (13 percent). This accelerating shift toward cloud-hosted clinical systems underscores the urgency of addressing cloud/account compromise risks, given the potential impact on patient care and service continuity.

 The report analyzes four types of cyberattacks that occurred over the past two years and their impact on healthcare organizations, patient safety and patient care delivery:

 Cloud/account compromise. A cloud/account compromise results from criminals obtaining access to credentials (e.g., user IDs and passwords). The consequence is typically an account takeover, where criminals use those validated credentials to commit fraud and transfer sensitive data to systems under their control.

For the fourth consecutive year, frequent attacks against the cloud make it the top cybersecurity threat. Nearly two-thirds of respondents (64 percent) say their organizations are vulnerable or highly vulnerable to a cloud/account compromise. Seventy-two percent say their organizations have experienced cloud/account compromises, an increase from 69 percent in 2024. These organizations had an average of 21 such compromises in the past two years.

Supply chain attacks. Supplier impersonation and compromise attacks occur when a malicious actor impersonates or successfully compromises an email account in the supply chain. The attacker then observes, mimics and uses historical information to craft scenarios to spoof employees in the supply chain.

Fewer organizations are experiencing supply chain attacks. Forty-four percent of respondents say their organizations experienced an attack against their supply chains, a significant decline from 68 percent in 2024. These organizations experienced an average of four supply chain attacks in the past two years. Fifty-seven percent say their organizations are very or highly vulnerable to supply chain attacks.

Ransomware. Ransomware is a sophisticated piece of malware that blocks the victim’s access to files. While there are many strains of ransomware, they generally fall into two categories. Crypto ransomware encrypts files on a computer or mobile device, making them unusable. It takes the files hostage, demanding a ransom in exchange for the decryption key needed to restore them. Locker ransomware blocks basic computer functions, essentially locking the victim out of the data and files located on the infected devices. Instead of targeting files with encryption, cybercriminals demand a ransom to unlock the device.

Fewer organizations are paying a ransom, but the amount paid has increased. The costliest ransom paid (extrapolated value) was $1.2 million, up from $1.1 million in 2024 and a staggering 56 percent increase from $771,905 in 2022 when we first began tracking this data. This continuous rise underscores how threat actors are demanding and receiving larger payouts even as payment rates declined (33 percent in 2025 vs. 36 percent in 2024). Fifty-five percent of respondents believe their organizations are vulnerable or highly vulnerable to a ransomware attack. In the past two years, organizations that had ransomware attacks (61 percent) experienced an average of 5 such attacks.  The combination of threat exposure and escalating ransom demands creates operational and financial risk for healthcare organizations.

Business email compromise (BEC)/spoofing/impersonation. BEC attacks are a form of cybercrime that uses email fraud to attack healthcare organizations to achieve a specific outcome. Examples include invoice scams, spear phishing designed to gather data for other criminal activities, attorney impersonations and CEO fraud.

Concerns about these attacks have decreased significantly since 2022, when 64 percent of respondents said their organizations were very or highly vulnerable. In the 2025 research, 53 percent say their organizations are vulnerable or highly vulnerable to a BEC/spoofing/impersonation incident, a very slight increase from 52 percent in 2024. And 62 percent say their organizations experienced an average of four attacks in the past two years. In 2024, 57 percent said they had an average of four attacks in the past two years.

From breach to bedside: The persistent link between cyberattacks and patient safety

As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Among the organizations that experienced the four types of cyberattacks in the study, an average of 72 percent report disruption to patient care, a 3-point jump from 69 percent in 2024.

An average of 54 percent report poor patient outcomes due to increases in medical procedure complications. An average of 53 percent saw an increase in length of stay and an average of 29 percent say patient mortality rates increased.

The following are additional trends in how cyberattacks have affected patient safety and patient care delivery.

  • Supply chain attacks continue to be the most likely to affect patient care. While fewer organizations in this year’s research had a supply chain attack (44 percent in 2025 vs. 68 percent in 2024), 87 percent of those respondents say it disrupted patient care, an increase from 82 percent in 2024. Patients were primarily impacted by delays in procedures and tests that resulted in poor outcomes (51 percent) and an increase in complications from medical procedures (49 percent). The share reporting increased mortality rates rose significantly, from 26 percent in 2024 to 32 percent in 2025.
  • BEC/spoofing/impersonation attacks cause delays in procedures and tests. Sixty-two percent of respondents say their organizations experienced a BEC/spoofing/impersonation incident and had an average of four attacks. Of these respondents, 70 percent say a BEC/spoofing/impersonation attack against their organizations disrupted patient care. Sixty-five percent say the attacks caused delays in procedures and tests that have resulted in poor outcomes and 55 percent say it increased complications from medical procedures.
  • Ransomware attacks cause delays in patient care. Sixty-one percent of respondents say their organizations experienced an average of 5 successful ransomware attacks. Sixty-seven percent say ransomware attacks had a negative impact on patient care. Of these respondents, 67 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients. Fifty-six percent say it resulted in delays in procedures and tests that disrupted patient care.
  • Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Seventy-two percent of respondents say their organizations experienced an average of 21 cloud/account compromises, a slight increase from 20 in 2024. In this year’s study, 61 percent say the cloud/account compromises resulted in disruption in patient care, an increase from 57 percent in 2024. Sixty-one percent say cloud/account compromises increased complications from medical procedures and 52 percent say it resulted in longer length of stay. The tools most often attacked are text messaging (59 percent), Zoom/Skype/video conferencing (54 percent) and email (45 percent).
  • Data loss or exfiltration disrupts patient care and can increase mortality rates. Ninety-six percent of organizations in this research had at least two data loss or exfiltration incidents involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 18 such incidents in the past two years and 55 percent of respondents say they impacted patient care. Of these respondents, 54 percent say it increased the mortality rate and 36 percent say it caused delays in procedures and tests that resulted in poor outcomes. 

The primary root causes of these incidents are employee negligence in not following policies (35 percent of respondents), privileged access abuse (25 percent) and employees sending PII or PHI to an unintended recipient via email (25 percent).

For the fourth year in a row, the data reinforces a sobering reality: cyber threats aren’t just IT security issues; they’re clinical risks. When care is delayed, disrupted or compromised due to a cyberattack, patient outcomes are impacted, and lives are potentially put at risk.

Other key trends in cyber insecurity

Concerns about insecure mobile apps (eHealth) remained the top issue for the second consecutive year, although respondents who cited this issue decreased from 59 percent in 2024 to 55 percent in 2025. Organizations are less worried about employee-owned mobile devices or BYOD (a decrease from 53 percent of respondents in 2024 to 49 percent in 2025) and cloud/account compromise (a decrease from 55 percent in 2024 to 49 percent in 2025), rounding out the top three spots. Thirty-eight percent of respondents identified generative AI tools as a cyber concern, a new category in this year’s survey.

The top two barriers to achieving a strong cybersecurity posture continue to be a lack of in-house expertise and clear leadership. Forty-three percent of respondents cite insufficient in-house expertise, while 40 percent point to a lack of clear leadership. Fewer organizations view limited budgets as a primary deterrent, with 37 percent citing it in 2025, down from 40 percent in 2024. The annual IT budget in 2025 is $66.2 million. Of that, 21 percent is allocated to information security.

Organizations continue to rely on security training and awareness programs to reduce risks caused by employees. But are they effective? Negligent employees pose a significant risk to healthcare organizations, and more organizations (76 percent in 2025 vs. 71 percent in 2024) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats. Sixty-three percent say they conduct regular training and awareness programs, 51 percent say their organizations monitor the actions of employees and 47 percent say their organizations use simulations of phishing attacks.

Multi-factor authentication (MFA) and secure email gateway are the top two technologies used to reduce email-based attacks. The use of MFA increased from 49 percent of respondents in 2024 to 54 percent in 2025. This is followed by secure email gateway (SEG) (52 percent of respondents in 2025 vs. 45 percent in 2024) and patch & vulnerability management (51 percent of respondents in 2025 vs. 52 percent in 2024).

Privileged access management (PAM) is the technology most often used to prevent identity risk and lateral movement in the network (59 percent of respondents). This is followed by identity and access management (53 percent of respondents) and alerts from SIEM to gain visibility (50 percent of respondents). 

Trends in AI and machine learning in healthcare

AI can increase the productivity of IT security personnel and reduce the time and cost of patient care and administrators’ work. For the second year, we include in the research the impact AI is having on security and patient care. Fifty-seven percent of respondents say their organizations have embedded AI in cybersecurity (30 percent) or embedded in both cybersecurity and patient care (27 percent). Fifty-five percent of these respondents say AI is very effective in improving organizations’ cybersecurity posture.

Fifty-five percent of respondents agree or strongly agree that AI-based security technologies will increase the productivity of their organizations’ IT security personnel. Fifty-six percent of respondents agree or strongly agree that AI simplifies patient care and administrators’ work by performing tasks that are typically done by humans but in less time and at a lower cost.

Only 40 percent of respondents use AI and machine learning to understand human behavior. Of these respondents, 55 percent say understanding human behavior to protect emails is very important.

While AI offers benefits, there are issues that may deter widespread acceptance. Sixty percent of respondents say it is difficult or very difficult to safeguard confidential and sensitive data used in their organizations’ AI.

AI technologies are maturing and stabilizing. While the No. 1 challenge to the effectiveness of AI-based security technologies is interoperability (34 percent of respondents), the challenge of a lack of mature and/or stable AI technologies decreased from 34 percent of respondents to 28 percent. The second most difficult challenge is errors and inaccuracies in data inputs ingested by AI technology (33 percent of respondents).

 AI-based data loss prevention (DLP) is effective in preventing data loss incidents caused by employees and malicious insiders. AI-based DLP refers to using artificial intelligence and machine learning techniques to enhance DLP solutions, making them more effective at identifying and preventing sensitive data from being leaked or misused. This includes things like automatically classifying sensitive data, detecting anomalous user behavior and adapting to evolving threats.
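To make the “detecting anomalous user behavior” idea concrete, here is a deliberately minimal sketch; it is not any vendor’s implementation, and the thresholds and figures are invented. Real AI-based DLP is far more sophisticated, but the core intuition of comparing today’s activity to a user’s baseline can be expressed simply:

```python
import statistics

def is_anomalous(history_mb: list[float], todays_mb: float, z_threshold: float = 3.0) -> bool:
    """Flag today's outbound data volume if it sits more than
    z_threshold standard deviations above the user's historical mean."""
    mean = statistics.mean(history_mb)
    stdev = statistics.pstdev(history_mb)
    if stdev == 0:
        return todays_mb > mean  # no variation in history: any increase is suspicious
    return (todays_mb - mean) / stdev > z_threshold

# A user who normally moves ~50 MB/day suddenly sends out 900 MB
print(is_anomalous([48, 52, 50, 49, 51], 900))  # True
```

An actual product would replace this single statistic with learned models over many behavioral signals, but the design choice is the same: alert on deviation from a baseline rather than on fixed rules alone.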

Twenty-three percent of respondents say their organizations have adopted AI-based DLP, and another 29 percent plan to adopt it in six months (14 percent) or in one year (15 percent). Fifty-six percent of respondents say AI-based DLP is very or highly effective in preventing employee data loss incidents and 50 percent say this technology is very or highly effective in preventing malicious insider data loss incidents.

To read the full report, visit Proofpoint’s website. 

Facebook plays role in one-third of all scams — and earns 10% of its revenue that way

Bob Sullivan

Only an egghead inside a Big Tech company would devise a plan to fight crime that involves charging criminals more to access victims.  Behavioral economics, right!

When you make 10 percent of your revenue from crime, how else would you try to stop it? Reuters recently reported these stunning facts, based on internal Meta documents, in a story you should really read immediately. 

If you’ve ever reported an ongoing crime to Facebook/Meta — say, your account has been hijacked by a crypto scammer — you know the firm largely ignores these active crime scenes.  Well, the documents Reuters examined show Facebook ignores 95% of those complaints. I’ve been writing about this problem for years. And years. Soldiers’ accounts are often stolen and used for romance scams, for example.  This is heartbreaking to the financial victim, but also utterly maddening and violating to the soldier whose picture and profile are used to defraud people. Not only does Meta not care, it makes bank off these crimes, Reuters says.

“Meta internally projected late last year that it would earn about 10% of its overall annual revenue – or $16 billion – from running advertising for scams and banned goods, internal company documents show,” the story says.  You might remember its author as the reporter who broke the Frances Haugen whistleblower story — she alleged that Instagram had piles of research showing it was harming kids, but did little to stop that.

Meta’s fraud filters are so permissive that they allow ads even when analysis shows 94% confidence the ad is a scam, the story says.

Even worse — Facebook grooms victims. Facebook’s algorithm pushes people into the arms of criminals. Users who click on scam ads get a healthy helping of more scam ads.

This allegation that Facebook profits from scams has been around for a long time.  In 2021, I covered this lawsuit, which claimed that Meta not only cashes in on scams but has actively recruited criminals and their posts, and has even held special training for them.

Somewhere along the line, you’ve heard the cynical phrase that facing government fines for breaking the law is “just the cost of doing business.”  Well, these documents put hard numbers on that concept. While Facebook has been hit by some of the largest regulatory fines in history, the company earns so much cash that it just isn’t afraid of fines. Again, from the story:

  • “Meta has internally acknowledged that regulatory fines for scam ads are certain, and anticipates penalties of up to $1 billion, according to one internal document. But those fines would be much smaller than Meta’s revenue from scam ads, a separate document from November 2024 states. Every six months, Meta earns $3.5 billion from just the portion of scam ads that ‘present higher legal risk,’ the document says, such as those falsely claiming to represent a consumer brand or public figure or demonstrating other signs of deceit. That figure almost certainly exceeds ‘the cost of any regulatory settlement involving scam ads.’”

This was the theme in a podcast series called “Too Big to Sue” I hosted for Duke University. Big Tech is so powerful and rich now that it really isn’t subject to regulation by nation-states.  That’s why the push for platform accountability is so crucial.

Other bombshells in this story:

  • A May 2025 presentation by its safety staff estimated that the company’s platforms were involved in a third of all successful scams in the U.S.
  • Meta has also placed restrictions on how much revenue it is willing to lose from acting against suspect advertisers, the documents say. In the first half of 2025, a February document states, the team responsible for vetting questionable advertisers wasn’t allowed to take actions that could cost Meta more than 0.15% of the company’s total revenue. That works out to about $135 million out of the $90 billion Meta generated in the first half of 2025.
  • Meta also was ignoring the vast majority of user reports of scams, a document from 2023 indicates. By that year, safety staffers estimated that Facebook and Instagram users each week were filing about 100,000 valid reports of fraudsters messaging them, the document says. But Meta ignored or incorrectly rejected 96% of them.
  • Even when advertisers are caught red-handed, the rules can be lenient, the documents indicate. A small advertiser would have to get flagged for promoting financial fraud at least eight times before Meta blocked it, a 2024 document states. Some bigger spenders – known as “High Value Accounts” – could accrue more than 500 strikes without Meta shutting them down, other documents say.
  • To advertise on Meta’s platforms, a business has to compete in an online auction. Before the bidding, the company’s automated systems calculate the odds that an advertiser is engaged in fraud. Under Meta’s new policy, likely scammers who fall below Meta’s threshold for removal would have to pay more to win an auction. Documents from last summer called such “penalty bids” a centerpiece of Meta’s efforts to reduce scams. Marketers suspected of committing fraud would have to pay Meta more to win ad auctions, thus impacting their profits and reducing the number of users exposed to their ads.

Here’s a portion of Reuters’ account of Meta’s response: In a statement, Meta spokesman Andy Stone said the documents seen by Reuters “present a selective view that distorts Meta’s approach to fraud and scams.” The company’s internal estimate that it would earn 10.1% of its 2024 revenue from scams and other prohibited ads was “rough and overly-inclusive,” Stone said. The company had later determined that the true number was lower, because the estimate included “many” legitimate ads as well, he said. He declined to provide an updated figure.

 

 

From homeless to helping North Korea’s weapons program: the vexing problem of laptop farms

Source: Department of Justice

Bob Sullivan

It’s a dark, cluttered room full of bookshelves, each shelf jam-packed with laptop computers. There are dozens of them humming away, lights flickering. And each one has a Post-It note attached with a single name on it. And there’s a pink purse just hanging off the side of one of those shelves. What is that purse? And what do those laptops have to do with funding North Korea’s weapons program? That purse belonged to a woman named Christina Chapman, and those laptops … well this is a rags to riches to rags story you might not believe.

Fortunately, the Wall Street Journal’s Bob McMillan recently spoke to me for an episode of The Perfect Scam to help explain all this.

“The North Koreans, if they have a superpower, it’s identifying people who will do almost anything in task rabbit style for them,” he told me.  And that’s where Christina Chapman comes in.

When this story begins, Chapman is a down-on-her-luck 40-something woman — at times homeless, at times living in a building without working showers — who makes a Hail-Mary pass by enrolling in a computer coding school. That doesn’t work either, at first.  She chronicles her troubles in a series of TikTok videos where she shares her increasing frustration, even desperation.

“I need some help and I don’t know really how to do this. Um, I’m classified as homeless in Minnesota,” she says in one. “I live in a travel trailer. I don’t have running water. I don’t have a working bathroom. And now I don’t have heat. Um, I don’t know if anybody out there is willing to help…”

But then a company reaches out and offers her a job working as the “North American representative” for their international firm.  Her job is to manage a series of remote workers.  The opportunity seems like a godsend.  Soon, she’s able to move into a real home and eventually go on some dream vacations.   At one point, she goes to Drunken Shakespeare and gets to be Queen for a day. For a night, anyway.

But underneath it all, she knows something is wrong. The job requires her to receive laptop computers for “new hires” and set them up on her home network. That’s why there’s all those racks and all those Post-it notes.  The home office appears in some of her TikTok videos, and it looks a bit like something out of The Matrix. Every computer represents an employee. And many of them work at various U.S. companies… hundreds of companies.  And instead of logging directly into their networks, they log into Chapman’s network, and she relays their traffic to the companies they work for.

That’s not the only suspicious thing about Chapman’s job.  Each new employee must be set up with a new identity.  She files I-9 eligibility forms for each one, and oftentimes accepts paychecks on their behalf.

Eventually, Chapman comes to understand that she’s being deceptive and breaking the law.  Clearly, she’s helping people who are ineligible to work in the U.S.  evade workplace checks.  In a private email at the time, she frets about going to prison over these deceptions.

What she doesn’t seem to know is where these ineligible workers come from. They’re all from North Korea.  And the hundreds of companies employing Chapman’s remote workers are ultimately sending money to the Hermit Kingdom.

“And that is, at this point, bringing in hundreds of millions of dollars to the regime according to the Feds,” McMillan told me. “And … they like to remind us that’s being used to fund their weapons program. Which is pretty scary.”

Chapman is running what’s come to be known as a laptop farm. And while the details about her situation, revealed in McMillan’s Wall Street Journal story, are incredible, laptop farms are not unusual. Fake remote workers are a rampant problem.

“It seems basically if you work for a Fortune 500 company, I would be shocked if you haven’t had a North Korean at least apply for a job there. And many of them have hired people,” he said.

Eventually, one of Chapman’s clients does something suspicious, and the company complains to the FBI. The bureau’s investigation reveals hundreds of laptop computers humming away in Chapman’s home, essentially downloading millions of dollars from U.S. companies and funneling it to North Korea, evading U.S. sanctions. She’s arrested, ultimately pleads guilty and is sentenced to eight years in prison.

“My impression is that when she initially started out, it was to receive a higher-paying job,” said FBI agent Joe Hooper. “She got wrapped up in actually getting paid for what she was doing, and she knew she was doing something wrong, but was looking the other way.”

Ultimately, prosecutors say Chapman helped get North Koreans paying jobs at 300 U.S. companies. They included a top-five television network, a Silicon Valley technology company, an aerospace manufacturer, an American carmaker, a luxury retail store, and a U.S. media and entertainment company. Collectively, they paid Chapman’s laptop-farm workers $17 million. Over a three-year period, she made about $150,000. So she wasn’t exactly living like that queen from Drunk Shakespeare.

“They target the vulnerable and she definitely was vulnerable,” McMillan said. “She was, I think, a well-intentioned person who was just, just desperate and you do feel sad for her watching the videos because she didn’t make a ton of money, she didn’t appear to be, have any animus toward the United States. There’s no evidence really that I’ve seen that she actually knew she was working for North Korea, but at a certain point, like it was clear, it was clearly, she clearly knew she was working on a scam.”

Clark Flynt-Barr, now government affairs director for AARP (owner and producer of The Perfect Scam), used to work for Chainalysis, which conducts cryptocurrency investigations. She told me that some North Korean remote workers hang onto their jobs for months, or even years. Some are even good employees, and their employers don’t know they’ve hired a pawn in North Korea’s effort to evade sanctions.

“They’re good at their job and they’re, in some cases, quite shocked to learn that they’re a criminal who has infiltrated the company,” she said.

It’s hard for me to imagine that companies can have remote workers they know so little about — don’t they ever ask how the spouse and kids are? — but McMillan said the arrangement works well for many software developers.

“I think there are a lot of companies where software development is not necessarily their core competency, but they have to have some software…and so they hire these people who are pretty used to offshoring coding to other countries,” he said. “Basically, all they care about is, ‘Just make the software work. Do the magic, spread, spread the magic, software pixie dust and just get this done.’ ”

The remote work scam grew out of long-running efforts by North Korean hackers to steal cryptocurrency, McMillan said. Many were working to get hired by crypto firms so they could pull inside jobs, and then realized there was money to be made in simply collecting paychecks.

The good news is laptop farms are now squarely in the FBI’s sights. A DOJ press release from June indicates that search warrants were executed on 29 laptop farms around the country, and there was a guilty plea in Massachusetts.

There’s a side note to the story that’s pretty amusing: cybersecurity researchers have come to learn that many North Korean workers go by the name “Kevin” because they are fans of the Despicable Me movie franchise. You can hear more about that, and much more from Christina Chapman’s TikTok account, if you listen to this episode of The Perfect Scam. But in case podcasts aren’t your thing, some crucial advice: Don’t tell the online world you are desperate; that makes you a target. If you are hiring, make sure you know who you are hiring and where they live. Ask about the family! And if you are looking for a job, know that there are many criminals out there who can make almost anything sound legitimate.

And one other note that’s hardly amusing: there’s another set of victims in this story, people whose identities are used to facilitate the remote-worker deception. Some of these people don’t find out about it until they get a bill from the IRS for failure to pay taxes on income earned by the criminal. That’s why it’s important to check your credit and your Social Security earnings statement often.

Click here, or click the play button below, to listen to this episode.

New Study Reveals Insider Threats and AI Complexities Are Driving File Security Risks to Record Highs, Costing Companies Millions

Larry Ponemon

As threats continue to accelerate and increase in cost, cyber resilience has shifted from being a technical priority to being a strategic, fiscal imperative. Executives must take ownership by investing in technology that reduces risk and cost while enabling organizations to keep pace with an ever-evolving AI landscape.

The purpose of this research is to learn what organizations are doing to achieve an effective file security management program. Sponsored by OPSWAT, Ponemon Institute surveyed 612 IT and IT security practitioners in the United States who are knowledgeable about their organizations’ approach to file security.

“A multi-layered defense that combines zero-trust file handling with advanced prevention tools is no longer optional but is the standard for organizations looking to build resilient, scalable security in the AI era,” added George Prichici, VP of Products at OPSWAT. “Leveraging a unified platform approach allows file security architectures to adapt to new threats and defend modern workflows and complex file ecosystems inside and outside the perimeter.”

File security refers to the methods and techniques used to protect files and data from unauthorized access, theft, modification or deletion. It involves using various security measures to ensure that only authorized users can access sensitive files and to protect files from security threats. As shown in this research, the most serious risks to file security are data leakage caused by negligent and/or malicious insiders and not having visibility into who is accessing files and being able to control access.

Attacks on sensitive data in files are frequent and costly and indicate the need to invest in technologies and practices to reduce the threat. Sixty-one percent of respondents say their organizations have had an average of eight data breaches or security incidents due to unauthorized access to sensitive and confidential data in files in the past two years.

Fifty-four percent of respondents say these breaches and incidents had financial consequences. The average cost of incidents for organizations in the past two years was $2.7 million. Sixty-six percent of respondents say the cost of all incidents in the past two years ranged from $500,000 to more than $10 million.

Organizations’ bottom lines are impacted by the loss of customer data and diminished employee and workplace productivity, the most common consequences of these security incidents.

Insights into the state of file security

 Insiders pose the greatest threat to file security. The most serious risk is caused by malicious and negligent insiders who leak data (45 percent of respondents). Other top risks are file access visibility and control (39 percent of respondents) and vendors providing malicious files and/or applications (33 percent of respondents). Only 40 percent of respondents say their organizations can detect and respond to file-based threats within a day (25 percent) or within a week (15 percent).

Files are most vulnerable when they are shared, uploaded and transferred. Only 39 percent of respondents are confident that files are secure when transferring files to and from third parties, and only 42 percent are confident that files are secure during the file upload stage. The Open Web Application Security Project (OWASP) has released principles on securing file uploads. The principle most often used, or planned for use, is storing files on a different server (40 percent of respondents). Thirty-one percent of respondents say they only allow authorized users to upload files.
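To make the OWASP principles above concrete, here is a minimal sketch of an upload handler that applies a few of them: an extension allow-list, a size cap, and a server-generated random filename stored outside the web root. The allowed extensions, size limit and storage location are illustrative assumptions for this sketch, not OWASP requirements.

```python
import os
import secrets

# Illustrative allow-list; real deployments would also verify content type.
ALLOWED_EXTENSIONS = {".pdf", ".png", ".jpg", ".txt"}

def safe_store(original_name: str, data: bytes, upload_root: str,
               max_bytes: int = 10_000_000) -> str:
    """Validate an uploaded file and store it under a random name.

    upload_root should live on a separate server or volume, outside
    the web root, per the OWASP guidance cited above.
    """
    _, ext = os.path.splitext(original_name.lower())
    if ext not in ALLOWED_EXTENSIONS:
        raise ValueError(f"extension {ext!r} is not allowed")
    if len(data) > max_bytes:
        raise ValueError("file exceeds size limit")
    # Never trust the client-supplied name: a random server-side name
    # defeats path traversal and overwrite attacks.
    stored_name = secrets.token_hex(16) + ext
    path = os.path.join(upload_root, stored_name)
    with open(path, "wb") as fh:
        fh.write(data)
    return path
```

Authenticating the uploader, the control 31 percent of respondents cited, would sit in front of a function like this in whatever framework handles the request.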

The file-based environment that poses the most risk is file storage such as on-premises, NAS and SharePoint, according to 42 percent of respondents. Forty percent of respondents say web file uploads such as public portals and web forms are a security risk.

Macro-based malware and zero-day or unknown malware are the types of malicious content of greatest concern to file security. Organizations have encountered many kinds of malicious content, but these two concern them most, according to 44 percent and 43 percent of respondents, respectively.

The effectiveness of file management practices is primarily measured by how productive IT security employees are, according to 52 percent of respondents. Other metrics include the assessment of the security of sensitive and confidential data in files (49 percent of respondents) and fines due to missed compliance (46 percent of respondents). Only about half (51 percent of respondents) say their organizations are very or highly effective in complying with various industry and government regulations that require the protection of sensitive and confidential information.

Country-of-origin checks and DLP are the techniques most likely to be used, now or in the future, to improve file security management practices. Country of origin is mainly used to neutralize zero-day or unknown threats (51 percent of respondents). The main reasons to use DLP are to prevent leaks of sensitive data and to control file sharing and access (both 44 percent of respondents).

Most companies are also using or planning to use content disarm and reconstruction (66 percent of respondents), software bill of materials (65 percent of respondents), multiscanning (64 percent of respondents), sandboxing (62 percent of respondents), file vulnerability assessment (61 percent of respondents) and the use of threat intelligence (57 percent of respondents).

AI is being used to mitigate file security risks and reduce the costs to secure files. Thirty-three percent of respondents say their organizations have made AI part of their organizations’ file security strategy and 29 percent plan to add AI in 2026. To secure sensitive corporate files in AI workloads, organizations primarily use prompt security tools (41 percent of respondents) and mask sensitive information (38 percent of respondents).
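Masking sensitive information before it reaches an AI workload, as 38 percent of respondents report doing, can be as simple as replacing recognizable patterns with placeholder tokens before a prompt is sent. The patterns below are a hypothetical, deliberately incomplete sample; production systems use dedicated DLP or prompt-security tooling rather than a handful of regexes.

```python
import re

# Hypothetical sample patterns; a real PII catalog is far larger.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask(text: str) -> str:
    """Replace each recognized sensitive value with a [LABEL] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(mask("Contact jane.doe@example.com, SSN 123-45-6789"))
# Contact [EMAIL], SSN [SSN]
```

The masked text, not the original, is what gets forwarded to the model, so sensitive values never leave the organization’s boundary.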

Twenty-five percent of organizations have adopted a formal Generative AI (GenAI) policy and 27 percent of respondents say their organizations have an ad hoc approach. Twenty-nine percent of respondents say GenAI is banned.

The security of data files is most vulnerable when transferring files to and from third parties. Only 39 percent of respondents say their organizations have high confidence in the security of files when transferring them to and from third parties.

Only 42 percent of respondents have high confidence in the security of files during the file upload stage (internal/external) and when sharing files via email or links. Forty-four percent of respondents say their organizations are highly confident in the security of files when downloading them from unknown sources. Organizations have more confidence when storing files in the cloud, on-premises or hybrid (54 percent of respondents) or in the security of backups (53 percent of respondents).

To read the key findings from this research, download the full report at OPSWAT.COM

 

The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI

Larry Ponemon

The Ponemon Institute and OpenText recently released a new global report, “The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI,” revealing that while enterprise IT leaders recognize the transformative potential of AI, a gap in information readiness is causing their organizations to struggle in securing, governing, and aligning AI initiatives across businesses.

The purpose of this research is to drive important insight into how IT and IT security leaders are ensuring the security of information without hindering business goals and innovation.

A key takeaway is that IT and IT security leaders are under pressure to ensure sensitive and confidential information is secure and compliant without making it difficult for organizations to innovate and pursue opportunities to grow the business.

“This research confirms what we’re hearing from CIOs every day. AI is mission-critical, but most organizations aren’t ready to support it,” said Shannon Bell, Chief Digital Officer, OpenText. “Without trusted, well-governed information, AI can’t deliver on its promise.”

The research also reveals what needs to be done to achieve AI readiness based on the experiences of the 50 percent of organizations that have invested in AI. These include preventing the exposure of sensitive information, strengthening encryption practices and reducing the risk of poor or misconfigured systems due to over-reliance on AI for cyber risk management. When deploying, organizations should develop an AI data security program, use tools to validate AI prompts and their responses, train teams to spot AI-generated behavior patterns or threat actors, use data cleansing and governance and identify and mitigate bias in AI models for safe and responsible use.

Using metrics to demonstrate the value of the IT security program to the business is the top priority in the next 12 months. Some 47 percent of respondents plan to use metrics to show the value IT security brings to the organization. This is followed by acceleration of digital transformation and automation of business processes (both 44 percent of respondents). Forty percent of respondents say a top-three priority is the identification and prioritization of threats affecting business operations.

Organizations recognize the need to make AI part of their security strategy, but difficulties in adoption exist.

Fifty percent of respondents say their organizations are using AI as part of their security strategy, but 57 percent rate the adoption of AI as very or extremely difficult, and 53 percent say it is very or extremely difficult to reduce potential AI security and legal risks. Foundational to success is ensuring AI is secure, compliant and governed.

AI deployment has the support of senior leaders. Compared to other IT initiatives, 57 percent of respondents say AI initiatives have a high or very high priority. Fifty-five percent of respondents say their CEOs and Boards of Directors consider the use of AI as part of their IT and security programs as very or extremely important. A possible reason for such support is that 54 percent of respondents are confident or very confident of their organizations’ ability to demonstrate ROI from AI initiatives.

 CEOs, CIOs and CISOs are most likely to have authority for setting AI strategy. Fifteen percent of CEOs, 14 percent of CIOs and 12 percent of CISOs have final authority for such AI initiatives as technology investment decisions and the priorities and timelines for deployment.

 Despite leadership’s support for AI, IT/IT security and business goals may not be in alignment. Less than half (47 percent of respondents) say IT/IT security and business goals are in alignment with those who are responsible for AI initiatives. Fifty percent of respondents say their organizations have hired or are considering hiring a chief AI officer or a chief digital officer to lead AI strategy. Such an appointment of someone dedicated to managing the organization’s AI strategy may help bridge gaps between the goals and objectives of IT/IT security with those who have final authority over AI strategy.

Concerns about privacy can cause delays in AI adoption. The inadvertent infringement of privacy rights is considered the top risk caused by AI. Forty-four percent of respondents say their biggest concern is making sure risks to privacy are mitigated. Other concerns are weak or no encryption (42 percent of respondents) and poor or misconfigured systems due to over-reliance on AI for cyber risk management.

Developing a data security program and practice is considered the most important step to reduce risks from AI. Fifty-three percent of respondents say it is very difficult or extremely difficult to reduce potential AI security and legal risks. To address data security risks in AI, 46 percent of respondents say they are developing a data security program and practice. Other steps are using tools to validate AI prompts and their responses (39 percent of respondents), training teams to spot AI-generated behavior patterns or threat actors (39 percent of respondents), using data cleansing and governance (38 percent of respondents) and identifying and mitigating bias in AI models for safe and responsible use (38 percent of respondents).

Despite being a priority, the top governance challenge is insufficient budget for investments in AI technologies. Thirty-one percent of respondents say there is insufficient budget for AI-based technologies. This is followed by 29 percent who say there is not enough time to integrate AI-based technologies into security workflows, 28 percent who say IT and IT security functions are not aligned with the organization’s AI strategy and 28 percent who say their organizations can’t recruit personnel experienced in AI-based technologies.

 The adoption of GenAI and Agentic AI

GenAI is considered very or highly important to organizations’ IT and overall business strategy because it improves operational efficiency and worker productivity. Of the 50 percent of organizations that have adopted AI, 32 percent have adopted GenAI as part of their IT or overall business strategy and 26 percent will adopt GenAI in the next six months. Fifty-eight percent of these respondents say GenAI is very or highly important to their organizations’ IT and overall business strategy.

 GenAI supports security operations and employee productivity. The most important GenAI use cases are supporting security operations (e.g. analyzing alerts, generating playbooks) (39 percent of respondents), improving employee productivity (e.g. drafting documents, summarizing content) (36 percent of respondents), assisting with software development (e.g. code generation or debugging) (34 percent of respondents) and accelerating threat detection or incident response (34 percent of respondents).

 Copyright and other legal risks are the biggest challenges to an effective GenAI program. Respondents were asked to identify the biggest challenges to an effective GenAI program. Forty-three percent of respondents say copyright and other legal risks are the top challenge to an effective GenAI program. Thirty-seven percent of respondents say lack of in-house expertise and 36 percent of respondents say regulatory uncertainty and changes are barriers to an effective GenAI program.

 Organizations are slow to adopt Agentic AI as part of their overall IT and business strategy. While 32 percent of respondents who are using AI have adopted GenAI, only 19 percent have adopted Agentic AI. Only 31 percent of the organizations that have adopted Agentic AI say it is very or extremely important to their organizations’ IT and business strategy.

Organizations’ approaches to securing data and supporting business innovation

 Ensuring the high availability of IT services supports business innovation. Respondents were asked what is most critical to supporting business innovation. Forty-seven percent of respondents say it is ensuring high availability of IT services and 43 percent of respondents say it is recruiting and retaining qualified personnel. Another important step, according to 39 percent of respondents, is to reduce security complexity by integrating disparate security technologies.

Business innovation is dependent upon IT’s agility in supporting frequent shifts in strategy. Fifty-three percent of respondents say it is very difficult to support business goals and transformation. To support innovation, the most important digital assets to secure are source code (44 percent of respondents), customer data (44 percent of respondents), contracts and legal documents (42 percent of respondents) and intellectual property (42 percent of respondents).

The importance of proving the business value of technology investments

Only 43 percent of respondents say their organizations are very or highly confident in the ability to measure the ROI of investments related to securing and managing information assets. The biggest challenge in demonstrating ROI for information management and security technologies is the inability to track downstream business impacts (52 percent of respondents).

The ROI of downstream business impacts involves understanding the indirect benefits and costs that ripple outwards from an initiative, activity or technology investment. Examples to measure include reduced errors and rework, increased efficiency and productivity and reduced compliance risks. Other challenges are the difficulty in quantifying intangible benefits (51 percent of respondents) and competing priorities (47 percent of respondents).
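The downstream-impact framing above reduces to simple arithmetic: ROI is the annualized benefit (avoided losses plus downstream gains) net of cost, divided by cost. The dollar figures in this sketch are hypothetical and exist only to show the calculation.

```python
def security_roi(avoided_loss: float, productivity_gain: float,
                 compliance_savings: float, annual_cost: float) -> float:
    """ROI of a security investment as a fraction of its annual cost.

    The benefit categories mirror the downstream impacts named in the
    report: reduced errors and rework, efficiency gains and reduced
    compliance risk.
    """
    benefit = avoided_loss + productivity_gain + compliance_savings
    return (benefit - annual_cost) / annual_cost

# Hypothetical figures: $300k in avoided breach losses, $80k in
# productivity gains, $40k in compliance savings, against $250k of spend.
print(f"{security_roi(300_000, 80_000, 40_000, 250_000):.0%}")  # 68%
```

The hard part in practice, as respondents note, is not the division but attributing credible dollar values to the benefit terms.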

 Organizations are eager to see the ROI from security technologies.  Calculating ROI is important to proving the business value of IT security investments. It is helpful in making informed decisions about IT security strategies and investments, evaluating performance and calculating profitability. ROI from investments is expected to be shown within six months to one year according to 55 percent of respondents. Forty-five percent of respondents say the timeline is one year to two years (21 percent) or no required timeframe (24 percent).

 Security strategies and technology investments should address the risks of ransomware and malicious insiders.  Fifty-three percent of respondents say their organizations had a data breach or cybersecurity incident in the past two years. The average number of incidents was three. During this time, only 28 percent of respondents say cybersecurity incidents have decreased (18 percent) or decreased significantly (10 percent). Ransomware and malicious insiders are the most likely cyberattacks, according to 40 percent and 37 percent of respondents, respectively. The data most vulnerable to insider risks are customer or client data (58 percent of respondents), financial records (46 percent of respondents) and source code (43 percent of respondents).

 Malicious insiders pose a significant risk to data security. Encryption for data in transit (39 percent of respondents), email data loss prevention (35 percent of respondents), and encryption for data at rest (35 percent of respondents) are primarily used to reduce the risk of negligent and malicious insiders.

 Organizations find it difficult to reduce insider or malicious data loss incidents without jeopardizing trust. Fifty-one percent of respondents say their organizations are effective or very effective in their ability to monitor insider activity across hybrid and/or remote environments. Only 41 percent of respondents say their organizations are effective or very effective in creating trust while taking steps to reduce data loss incidents caused by negligent or malicious insiders.

Reducing complexity in organizations’ IT security architecture is needed to have a strong security posture. Seventy-three percent of respondents say reducing complexity is essential (23 percent), very important (23 percent) or important (27 percent). Complexity increases because of new or emerging cyber threats (52 percent of respondents), the Internet of Things (46 percent of respondents) and the rapid growth of unstructured data (44 percent of respondents).

Accountability for reducing complexity is essential. To reduce complexity, the most essential steps are to appoint one person to be accountable (59 percent of respondents), streamline security and data governance policies (56 percent of respondents) and reduce the number of overlapping tools and platforms (55 percent of respondents). On average, organizations have 15 separate cybersecurity technologies.

To read more key findings and download the entire report, click here. (PDF)

Yes, there is a 9-1-1 for scam victims. Get to know the guardian angels of the Internet — AARP’s Fraud Watch Network

Bob Sullivan

Many years ago, a very smart book editor I worked with (Jill Schwartzman, now at Dutton) gently admonished me because I failed to include resources for consumers in my tirades about the mistreatment of consumers. Thus were born concepts like the “Red Tape Tips” I’d include at the end of my columns and an appendix in each book listing consumer advocacy organizations. But that experience forced me to face a stark reality: Most of these organizations don’t really take phone calls. While there are plenty of well-meaning non-profit groups out there trying to fix broken policies that favor the Gotcha Capitalists and criminals, there are hardly any organizations set up to field calls from people who are hurting and need help right now.

There’s no 9-1-1 for a consumer who’s about to get ripped off.

Actually, there is. It’s AARP’s Fraud Watch Network helpline. And I’m proud to say that my work on AARP’s Perfect Scam podcast helps highlight the important work they do.

First, let me say I don’t fault the folks who created or work at various grassroots consumer organizations. They often toil away with skeleton staffs and meager funding, true Davids in a battle against billion-dollar Goliaths.  But it’s just not practical for them to take calls and offer customer support to individual victims or take on their cases.

And yes, if you are the victim of a crime, you can and should call 9-1-1 (or the non-emergency line) and report that to the police. Unfortunately, many in-progress scams are difficult to report — “what’s the crime?” — and local police aren’t always set up to offer on-the-spot advice or empathetic listening.

That’s why I’m happy to talk about the Fraud Watch Network. The helpline is staffed Monday through Friday, 9 a.m. to 9 p.m., mainly by trained volunteers, who reach out to every caller within 24 hours or so and offer both empathetic listening and practical advice. They’ve stopped millions of dollars in criminal transactions by giving people a place to turn when they’re in crisis.

Who are these guardian angels of this dangerous digital age? In this week’s Perfect Scam episode, I spotlight two volunteers who do this work. Like most helpline volunteers, Dee Johnson and Mike Alfred are both former victims who once called the helpline, and now they are two of the 150 volunteers who give their time because they are called to help others.

At this link, you can find a partial transcript of the episode, in case podcasts aren’t your thing. I do hope you’ll listen, however. You’ll really like Mike and Dee. I want readers to see their kindness and empathy in action — those are in short supply these days, I fear.  But more than anything, I want readers to know that there is a 9-1-1 for scams.  If you or someone you love is caught up in an Internet crime right now, I urge you to call the AARP Fraud Watch Network Helpline at 877-908-3360 or visit the website. You’ll get near-immediate help from experts who really care.

You can also email me, of course, at the address on my contact page. Or you can email The Perfect Scam team at theperfectscampodcast@aarp.org.

AARP’s Helpline is part of AARP’s Fraud Watch Network. In addition to volunteers helping victims, the network has roughly a thousand trained volunteers working in their communities and online to spread the message of fraud prevention. To learn more, visit