Deepfake Deception: How AI Harms the Fortunes and Reputations of Executives and Corporations

The fortunes and reputations of executives and corporations are at great risk because cybercriminals can target vulnerable executives with artificial images or videos for the purposes of extortion and physical harm. As more evidence of the reality and likelihood of deepfake attacks emerges, awareness of the need to take action against these threats is growing. More than half of IT security practitioners (54 percent) surveyed in this research say deepfakes are one of the most worrying uses of artificial intelligence (AI).


Click here to download the full report


The purpose of the research – sponsored by BlackCloak Inc. but conducted independently by the Ponemon Institute – is to learn how organizations view the deepfake risk against board members and executives and how these attacks can be prevented. According to the research, targeted executives were attacked with a fake image or video an average of three times. Another serious threat covered in this research for the second year is the risk to executives’ digital assets and their personal safety. In this year’s study, attacks by cybercriminals against executives and their families increased from 42 percent to 51 percent of organizations represented in the research.

It is not if, but when, your executives and board members will be the target of a deepfake attack, and it is likely they will not even know it. Respondents were asked to rate the likelihood of a deepfake attack, the difficulty of detecting it and their confidence in executives’ ability to know that they are being targeted. Respondents said an attack is highly likely (66 percent), that it is very difficult to detect (59 percent) and that they have no confidence executives would recognize an attack (37 percent).

The following findings illustrate the severity of deepfake and digital asset attacks:

  • Is the person calling your company’s CEO a trusted colleague or a criminal? Forty-two percent of respondents say their organizations’ executives and board members have been targeted an average of three times by a fake image. Or worse, 18 percent are unsure if such an attack occurred. Of those targeted, 28 percent of respondents say the attacker impersonated a trusted entity such as a colleague, executive, family member or known organization. Twenty-one percent of respondents say executives and board members received urgent messages, such as demands for immediate payment or notification that a security breach had been detected.
  • It is difficult to detect imposters seeking to do harm. Executives must understand that a zero-trust mindset is essential to avoid becoming a deepfake victim: 56 percent of respondents say it is essential to distinguish between what is authentic and what is fake in messages. For example, imposter accounts are social media profiles engineered for malicious activities, such as deepfake attacks. The two types of deepfakes of greatest concern are social imposters (53 percent of respondents) and financial fraudsters (37 percent of respondents).
  • Executives need training and a dedicated team to respond to deepfake attacks. Despite the threat from deepfake cybercriminals, 50 percent of respondents say their organizations do not plan to train executives on how to recognize an attack. Only 11 percent of respondents currently train executives to recognize a deepfake and only 14 percent have an incident response plan with a dedicated team when a deepfake occurs.
  • Threatening activities may go undetected because of a lack of visibility into erroneous activities. Only 34 percent of respondents say their organizations have high visibility into the erroneous activity happening within their organization to prevent deepfake threats. Fifty-two percent of respondents say it is highly likely that their organization will evaluate technologies that can reduce the risks from deepfakes targeting executives. Fifty-three percent of respondents say technologies that enable executives to verify the identity and authentication of messages they receive are highly important.
  • The financial consequences of deepfake attacks are not often measured and therefore not known. Only 36 percent of respondents say their organizations measure how much a deepfake attack can cost. If they do, the top two metrics used are the cost to detect, identify and remediate the breach and the cost of staff time to respond to the attack.
  • Organizations are in the dark about the severity of the financial consequences from a cyberattack involving digital assets. Forty-three percent of respondents measure the potential consequences of a cyberattack against their executives and in 2023 only 39 percent of respondents said they had metrics in place. Forty percent of respondents say their organizations measure the financial consequences against the business due to a cyberattack against the personal lives of executives and digital assets, a slight decrease from 2023.
  • Metrics used to determine the financial consequences of a digital cyberattack against executives remain the same since 2023. The top two metrics for cyberattacks against executives are the cost of staff time (62 percent of respondents) and the cost to detect, identify and remediate the breach (51 percent of respondents).
  • Despite the vulnerability of executives’ digital assets, most training occurs following an attack. Most training is done after the damage is done, according to 38 percent of respondents in 2023 and 2024.
  • Attacks against executives and family members increase. Organizations need to assess the physical and digital asset risks to executives and their families. In 2023, 42 percent of respondents said there were attacks against executives and family members. This increased to 51 percent in 2025.
  • Online impersonations increased significantly since 2023. The most prevalent attacks continue to be malware on personal or family devices (58 percent of respondents in 2024 and 56 percent of respondents in 2023), exposure of home address, personal cell and personal email (50 percent of respondents down from 57 percent of respondents in 2023). However, online impersonations increased significantly from 34 percent of respondents in 2023 to 41 percent of respondents in 2024.
  • While still a low number, more organizations are increasing budgets and other resources to protect executives and their digital assets. Since 2023, the percentage of respondents who say their organizations incorporate the risk of cyberthreats against executives in their personal lives, especially high-profile individuals, in their cyber, IT and physical security strategies and budgets has increased from 42 percent to 48 percent. More organizations have a team dedicated to preventing and/or responding to cyber or privacy attacks against executives and their families, an increase from 38 percent to 44 percent of respondents.
  • More cybercriminals are targeting IP and executives’ home networks. Organizations should be concerned that their company information, including IP and executives’ home networks, has become more vulnerable since 2023. The theft of intellectual property and improper access to executives’ home networks have increased from 36 percent of respondents to 45 percent of respondents and from 35 percent of respondents to 41 percent of respondents, respectively. Significant consequences were the theft of financial data (48 percent of respondents) and loss of important business partners (40 percent of respondents).
  • The likelihood of physical attacks and attacks against executives’ digital assets has not decreased in the past year. Sixty-two percent of respondents in 2023 and 2024 say it is highly likely a cybersecurity attack will be made against executives’ digital assets and 50 percent in both years say there will be a physical threat against executives. As discussed previously, organizations are slow to train executives on how to avoid a successful attack against their digital assets. Sixty-eight percent of respondents say it is highly likely that an executive would unknowingly reuse a compromised password from their personal accounts inside the company and 52 percent of respondents say an executives’ significant other or child would click on an unsolicited email that takes them to a third-party website.
  • More organizations are providing self-defense training. Self-defense training has increased since 2023 from 53 percent of respondents to 63 percent of respondents in 2025. Slightly more organizations are assessing the physical risk to executives and their families from 41 percent to 46 percent of respondents. Forty-one percent assess the risk to executives’ digital assets when working at home.
  • Why is it difficult to protect executives’ digital assets? The top two challenges are due to remote working and not making protection of digital assets a priority when executives work outside the office, 53 percent and 51 percent of respondents, respectively. As a consequence of not training executives to protect their digital assets, only 38 percent of respondents say their executives and families understand the threat to their personal digital assets and only 32 percent of executives take personal responsibility for the security and safety of their digital assets.
  • Confidence in CEOs’ and executives’ ability to do the right thing to stop cyberattacks continues to be low. While there is an increase in confidence in the CEO or executive knowing how to protect their personal computer from viruses (32 percent of respondents, up from 26 percent in 2023), it is still too low. Also, there is a significant decrease in confidence in executives knowing how to determine if an email is phishing (23 percent of respondents, down from 28 percent in 2023). Organizations lack confidence in their executives knowing how to set up their home network security (25 percent of respondents, versus 26 percent in 2023) and knowing if their email or social media accounts are protected with dual-factor authentication (20 percent of respondents, versus 16 percent in 2023).
  • Difficulty in stopping cyberattacks against executives and their digital assets remains high. It continues to be highly difficult to have sufficient visibility into cyberattacks on executives’ home networks (63 percent of respondents), executives’ personal devices (66 percent of respondents), executives’ personal email accounts (67 percent of respondents), executives’ password hygiene (60 percent of respondents) and executives’ privacy footprint (65 percent of respondents).

To read the rest of this report, visit BlackCloak’s website

Follow a sextortion scam as it unfolds and ends in unimaginable tragedy

Jordan: Please, bro….It’s over. You win, bro.
Dani: Okay. Goodbye.
Jordan: I am KMS RN….which stands for, I’m killing myself right now….because of you.
Dani: Good. Do that fast or I will make you do it. I swear to God.
And that message … was the last one that he sent to Dani.

Bob Sullivan

Some stories are much harder to write than others. I’m not sure I’ve had a more difficult project than this week’s episode of The Perfect Scam. For this story, I speak with Jordan DeMay’s parents, and the detective who investigated Jordan’s sextortion and suicide.

You’ve probably heard about Jordan’s case, or other cases like it. Sextortion is extortion fine-tuned for the social media age.  A criminal pretends to be an attractive person and approaches a victim with a simple private message or text, then slowly escalates the conversation until the victim shares explicit photos. Then the criminal threatens to share these photos with friends and family unless the victim pays a “ransom” — and even then, criminals continue to apply pressure, issuing more and more demands.  While high-profile cases of sextortion often involve teenagers, anyone can be the target of a sextortion scam. They are powerful; the pressure criminals exert can be immense. Most times, criminals are working off fine-tuned scripts, learned from YouTube or purchased from a criminal service. Or they are trained as “employees” of a large criminal enterprise.

We are all saturated with unwelcome texts right now, many appearing as accidental wrong-number connections.  My cell phone number begins with a Seattle area code, so I get messages with vague requests like, “I’ll be in Seattle for a couple of days. Where should I go to dinner?”  Unsolicited private messages on Instagram and Facebook often begin the same way. Many of those messages are attempted sextortion scams.

That’s why it’s so important to understand how they work. And this week’s episode gives a rare, blow-by-blow account of a sextortion in progress.  At times, it’s hard to hear. It was certainly hard to talk about. But this is the kind of story you shouldn’t turn away from. John DeMay and Jennifer Buta are incredibly brave and compassionate, despite enduring pain no human was meant to experience. And Detective Lowell Larson dispenses deep wisdom that only arises from years of very serious, meaningful work.

The scale of the sextortion problem is probably wider than you think, and might be even wider than law enforcement knows.  John DeMay has identified more than 40 sextortion victims who’ve committed suicide, and many people believe the real number is much higher.  Jordan deleted all his social media content before taking his own life. Only subsequent investigation revealed the truth about the attack he suffered.  It’s possible many suicide stories end without someone like Detective Larson completing a thorough investigation.

That’s why it’s important for everyone to understand how sextortion works. I hope you’ll listen to this episode – Jordan’s parents have so many powerful things to say, and how they say it is just as powerful – but if you aren’t into podcasts, there’s a partial transcript below. It includes a text version of the dialog between Jordan and his attackers.  But perhaps more important, it also includes Detective Larson’s advice about what parents can do to help their children navigate this increasingly complex and threatening digital world – and it includes some of the wisdom that John and Jennifer have to share.

But I’d like to begin with the end of this story, because Jordan’s 17 years of life add up to much, much more than those years would suggest. Here’s his mom:

“Jordan was this larger than life person, and I don’t think he knew it. And so for this to happen to him and be this … landmark case and have this media attention. Sometimes I just sit back and I’m like, of course. Of course this happened with you because you were this bright light and the center of attention. Here you are in the afterlife still holding that. It’s just that it’s no longer your voice. It’s my voice with your story.”

—————Partial transcript————-

The below transcript includes in-depth discussion of suicide. If you are in crisis, call or text 988 and get help right now.

[00:05:11] John DeMay: It was a Thursday night. He was at his mother’s house for that week, but he had had to come to our house a little bit earlier. We were getting ready to leave on vacation for two weeks. The next morning on Friday, we were heading down to Florida for, uh, for a beach vacation that we do every year, and he was really excited about that. And it was one of our, one of our favorite trips that we do every year. So we were packed and ready to go, and he came strolling into the house. I saw him for the last time at around 10, 10 15 that night. And I just had passed him outside on the, on the patio and he was rolling his bag in, coming from his girlfriend’s house. And I had just told him goodnight. I’m, I’m heading to bed and cut you in the morning. And that’s what I did. I went to sleep. My wife was up finishing laundry, getting the, getting our other two kids bags packed. Jordan was downstairs in his room getting his bags packed. He was doing some laundry.

[00:06:01] Bob: doing laundry packing, saying goodbye to his girlfriend before trip. Normal teenage stuff. And then Jordan gets a private message, a one-word message from someone he doesn’t know. In fact, I’ll bet you’ve received one just like it, probably more than once. The message says it’s from a woman named Dani Robertts. It comes at 10:19 PM.

[00:06:25] Lowell Larson: So the very first conversation that occurred between Dani Robertts’ profile and Jordan DeMay started out with Dani asking, “Hey.”

Jordan – “Who is you?”

Dani – “I’m Dani from Texas, but in Georgia at the moment.”

Jordan – “Nice.”

Dani – “Yeah. Hope I didn’t invade your privacy. Just bored.”

Jordan – “Nah, you good.”

[00:06:49] Bob: The conversation bounces back and forth with simple chat like that for about an hour. You can imagine Jordan stuffing clean clothes into a suitcase while chatting, and then at 11:29 PM…

[00:07:02] Lowell Larson: Dani – “What do you do for fun?”

Jordan  – “Lift, play sports and listen to music. What about you?”

Dani  –  “Sound fun. Well, I like hanging out with friends and playing sexy games. Sorry, that came out wrong. My bad,”

Dani – “Sorry if I got upset. Just bored, to be honest. I thought you might want to do something fun. It is actually a sneak pic exchange. No screenshots. You’re down? It’s just for fun though. Nothing else. It’s actually a live mirror pic exchange showing your sexy body, no screenshots. You get what I mean?”

Dani – “Yeah. And it’s set up view once after viewing it disappears. Of course. I can go first if you like, but you’re home, right?”

Jordan – “Yeah.”

Dani – “Cool. Can you just take a mirror snap showing you’re ready when I’m. Then I will go first with the sexy pic.”

[00:08:02] Bob: The game goes on for another hour, two hours. The pictures are innocent enough at first, Detective Larson says. Within three hours after that first, “Hey,” Jordan sends a revealing picture of himself and Dani pounces instantly.

[00:08:20] Lowell Larson: After Jordan had sent an unclothed picture at 1:23 AM, Dani Robertts’ account sends three photo collages with a message, “I have screenshot all your followers and tags and can send this nudes to everyone and also send your nudes to your family and friends until it goes viral. All you have to do is cooperate with me and I won’t expose you.”

Jordan – “What I gotta do?”

Dani – “Just pay me right now and I won’t expose you.”

Jordan – “How much?”

Dani – “$1,000. Deal or no deal?”

Jordan – “I don’t have a grand.”

[00:09:02] Bob: Whoever is on the other side of the keyboard is now extorting Jordan, and he doesn’t know what to do.

[00:09:11] Lowell Larson: And this goes back and forth for a time and then is basically negotiated down where Jordan agrees to pay $300. And Dani agrees to accept that and not expose him. So he sends $300 via Apple Pay. Dani tells Jordan that she’s deleting everything.

[00:09:33] Bob: But Dani doesn’t. The cruelty and the pressure continue.

[00:09:39] Lowell Larson: Dani comes back and says that she wants more money to delete his images off of a different platform. And so they go back and forth and they start negotiating again. And basically Dani is looking to obtain another $800 to delete the images off of Google.

[00:09:59] Bob: And the demands continue. Jordan tries desperately to figure out what to do. The person making these demands exerts maximum pressure.

[00:10:10] Lowell Larson: You know, a troubling thing is the Dani Robertts account would, she’s asking for more and more money, would start giving it a countdown. Next message would be 14. Next one 13, 12. You know, and so every message coming in was the countdown, which is, uh, kind of very powerful for someone that’s very scared.

[00:10:32] Bob: Scared, and from his messages, feeling out of options.

[00:10:37] Lowell Larson: And basically Jordan tells her that he doesn’t have $500, and eventually Jordan agrees to send the remaining money that he has, and that’s $55.

[00:10:49] Bob: Jordan sends every last dollar he can cobble together, but the cruelty gets so much worse. Jordan begins to express how desperate he feels that he doesn’t want to go on and five hours into this nightmarish encounter…

[00:11:07] Lowell Larson: And at one point Dani Robertts at 3:28 AM says, “Okay, then I will watch you die a miserable death.” And Jordan says, “Please, bro.” Later on at about 3:43 AM Jordan says, “It’s over. You win, bro.”

Dani – “Okay. Goodbye.”

Jordan – “I am KMS RN”

[00:11:30] Lowell Larson: Which stands for, “I’m killing myself right now.” And then he says, “Because of you.”

Dani – “Good. Do that fast or I will make you do it. I swear to God.”

[00:11:41] Lowell Larson: And the message that I read to you was the last one that he sent to Dani Robertts.

[00:11:54] Bob: Morning breaks and John DeMay gets up and thinks about final preparations for that family beach trip they will go on after Jordan gets home from school that day, but a text from Jordan’s mom causes immediate alarm.

[00:12:07] John DeMay: Jennifer had texted me and asked me if Jordan was at school that day, and I said, “Well, that’s kind of interesting.” So I got up, my wife and my, my two girls were up already getting ready for school. And I looked out the kitchen window and I saw Jordan’s car still parked in the driveway at 7:30 and that was really odd. He’s usually long gone by 7:10, 7:15, and frankly, he never, ever misses school and you know, so I didn’t know if he slept in or, or what was going on. So I went downstairs into his bedroom and I opened up the door and I found him. He had shot himself in his bed.

—-Later in the episode—

[00:27:54] Bob: So what change needs to happen? Jordan’s death raises a whole wide set of complex issues. Recall that horrible night began with a simple one-word message, the kind many of us receive on a regular basis.

[00:28:09] John DeMay: At this point I’ve been speaking all over the world really and traveling and presentations and parent nights and law enforcement conferences and in Washington DC and, and what I’m finding is, especially from the law enforcement community, that the sextortion stuff in the last couple of years has gotten so rampant that most feel that it’s really not even a, an if you’re a teenager and get exposed to this, it’s when you are going to get exposed to it. To some level, and we all, we all get these random messages from random different people, from different parts of the world and, and friend requests and things and, and oftentimes those are the very beginnings of what could be a sextortion scheme. There’s a lot of groups and, and individuals that are doing this at a very high volume. It’s a numbers game.

[00:28:55] Bob: These text messages that we’re all getting right now where it, it could be just something like, “How are you?” Or it could be, “Hey, I’m in Seattle” ’cause I have a Seattle area code.

[00:29:02] John DeMay: Right.

[00:29:03] Bob: “Where should I go to dinner?” or whatnot. But, and, and behind that might be someone starting a sextortion scheme.

[00:29:08] John DeMay: That’s correct. Yeah. And you almost have to assume that at this point. Um, and when I talk to teenagers and parents, I tell them that’s what it is, you know, because it. It’s probably not anything else. People do reach out and there are people that have good intentions of meeting other people on the planet, but you know when, when some really amazingly beautiful woman is just reaching out to you randomly and then wants to, you know, now we’re into your conversation and wants to talk about sex with you, there’s probably a pretty good indicator that this isn’t what it seems.

[00:29:35] Bob: And I know your son’s story shows it can happen, it can escalate very, very quickly, right?

[00:29:40] John DeMay: Oh, absolutely. I mean, I, if you looked at every single sextortion suicide that’s happened, it’s happened, you know, under six hours for sure. Most of ’em, there are a few that have drug out over time, but a lot of ’em are literally within 30 minutes to two hours.

[00:29:54] Bob: They’ve tested these scripts, I’m sure, and then they can manipulate really, really anybody, right?

[00:30:00] John DeMay: Yeah, a hundred percent. And in our case, uh, particularly, and, and, and probably a lot of others, fast forwarding, you know, with the information that we all have now, the perpetrators, um, from Jordan’s case were, were they were educated and trained by a online group called the Yahoo Boys. And the Yahoo Boys was basically a, a loosely organized group that put together basically a training manual they had, you could go right on YouTube. The, the video’s up for were up for years. You could learn about sextortion, learn how to do it, how to get your victims could purchase scripts from them. Uh, they taught you how to get hacked accounts and buy materials, everything. So everything you need to know to learn how to do this particular crime was right on YouTube for anybody to see. And our group of suspects used that organization and were trained how to, to how to do it. It shows the professionalism in this industry and in this type of crime that young people don’t understand and parents don’t either. And I, I tell young people that it’s not your fault, right? I mean, this is, this is a crime. And these people are professionals. They know exactly what to do and what to say and how to say it. They know how you’re gonna act. They know what you’re going to say. These are all things that they’ve done time and time and time again. So they’re very well read in, in what happens. And, um, I try to stress that to them. And, uh, that’s, I think that’s the biggest piece. So when they understand, Hey, this isn’t, you know, I made a mistake, but this isn’t my fault, you know, it really is not

[00:31:31] Bob: So warning parents and teenagers, really anyone with a cell phone, that they will be targeted by a sextortion scam. That’s the first thing John wants, but he wants more change. He wants tech companies to do more.

[00:31:45] John DeMay: At the end of the day, the, the social media companies are, are the one that are creating the atmosphere for all this to happen. It’s really unfortunate as I meet, uh, more whistleblowers, um, from these companies and meet politicians and major players in the game, it’s, it gets scarier and scarier and scarier.

[00:32:02] Bob: Jennifer also wants criminals around the world to know that thanks to the successful prosecution of her son’s attackers, well, criminals shouldn’t feel safe just because they are far away.

[00:32:14] Jennifer Buta: I think that’s a huge message, that it doesn’t matter that you’re in a different country. You can be found, you can be arrested, and you can be held accountable for what you’ve done. I hope that it’s a deterrent for people. I know in Nigeria, you know, for this crime, the punishment is not harsh at all, and so that’s not really a deterrent for them there. And knowing that they can be brought here and face our justice system. Hopefully that prevents them from, you know, taking it to this level where they’re telling children to take their lives.

[00:32:52] Bob: Both John and Jennifer spend a lot of time talking about Jordan’s death now, hoping they can do as much as possible to prevent other tragedies.

[00:33:01] Bob: What kind of reactions do you get when you, when you talk to people about this?

[00:33:06] Jennifer Buta: I mean, there’s several, it depends on, you know. The day for me. Sometimes, sometimes I feel better in talking about his story because if I tell someone I’ve just educated them and hopefully they’ll tell someone, and that gives me hope that another family won’t have to go through what I’ve gone through. And sometimes it’s really difficult to talk about it because you’re constantly reliving the nightmare of what did happen to Jordan. I get an overload of messages through my social media, of parents saying, thank you for sharing this story. Because I told my kid about it and it happened to them and they knew what to do. They remembered Jordan’s story. Wow. And they came to me for help. I also have parents that have reached out to me and said, this happened to my child. I don’t know what to do. I don’t know what to do. I don’t know where to go with the law enforcement things. Can you help me with this? And even yesterday, I received a message from actually our local government offices, someone contacted them trying to reach me because they were going through that situation and they wanted to talk to me.

[00:34:20] Bob: That’s an amazing thing that you’re doing, but gosh, that also feels like such a, a burden to be picking through all these emails is you have to be customer service to the world. That sounds like a lot.

[00:34:29] Jennifer Buta: It is. Sometimes it gets heavy and I can’t get to everyone. Um, at one point it was just, it was too overwhelming to respond to everyone.

[00:34:40] Bob: One point that Jennifer wants to make sure parents and law enforcement hear is that without Jordan’s girlfriend coming forward to report the sextortion message she received, Jordan’s parents probably would never have learned the truth.

[00:34:55] Jennifer Buta: Absolutely. That was when I found out what happened to Jordan, that was one of the first thoughts in my head was, how many kids has this happened to? And their parents think that they just took their own life, but don’t know that there was actually something else behind it because they didn’t think to check their social media. Or maybe it was deleted from their social media like it was in Jordan’s case. And you know, one of the things that I think we’ve learned through Jordan’s case is this has taught law enforcement about financial sextortion, and that when they come upon a case where someone has taken their life, maybe take that extra step and check if it was something like financial extortion.

[00:35:37] Bob: The most important message they want to share is to reach a child, a teen who might find themselves in what feels like a desperate situation. And make sure they know that help is available. Make sure they know to reach out. John takes such calls and messages all the time.

[00:35:54] John DeMay: Well, I, I can tell you there’s, there’s hundreds of stories, hundreds of them, and it, it really, It gives me the fuel to, to keep going on the awareness side for sure. And it, and it, and it provides me with purpose to push for change legislatively and, and the other things that I’m doing. But just last week, last, it was last Thursday night. Last Thursday night, I did a presentation at our local middle school for sixth grade, seventh grade, and eighth grade. We did three individual presentations on sextortion for each of those grades. And that was about eight weeks ago, 10 weeks ago. I did that right when, right before, uh, Christmas break. I think it was, and last Thursday night at about eight o’clock at night, I was sick as a dog. I had the flu covid, something was happening. I was out for days and I was riding my couch and I had a Facebook messenger pop up, and it was a 14-year-old student at that school that said, John, I need help. I know you came to the school and talked, but I honestly don’t remember what you said, but I need, I need some help. And as I, you know, I got right on it and I started chatting and he was just extorted 15 minutes before that, 20 minutes before that sent an image. Wow. And he was freaking out. So I was able to talk him off the ledge, and I messaged with him for about an hour. You know, every, every scenario is a little bit different. This, I wanted to try to get on the phone with this. Young guy and he just wasn’t interested in talking. He just wanted to message and that was totally fine. I’m like, yep, we can totally, we can message, it’s totally fine. Whatever you’re comfortable with. And I just kept engaging with him. It’s like a hostage negotiation. You just want to keep them communicating, keep him talking. You know, I, at one point I even, I even sent him a picture of Jordan and said, Hey, this is my son. He’s gone today. 
I wish every day that he would’ve taken the, the two minutes to walk upstairs at three o’clock in the morning and come and get me. And I, I don’t, I don’t get that. And I, and I don’t want this to be you. I want you to go tell your dad, you know, he’s going to appreciate you for this. And so we’re just trying to keep it positive and, and really make him understand that it’s not his fault, really, he, he’s really a, a victim of a really heinous crime right now. And, um, he needs to treat it that way. Just, uh, talked to this kid and I, I told him about Jordan and, you know, told him how strong he was because he, you know, reaching out and, um, this is the right thing to do and it’s not your fault. And we went through everything. And by the end of the night he had messaged me back and said, “Hey, I just, I really appreciate the help. The cops are here. I told my dad and really, I really can’t thank you enough.” And, um, you know, and it just all worked out really good. I’ve, I’ve been following up with him in the mornings and stuff before school just to make sure he is good. And, you know, that’s the stuff that, that makes a difference, because when someone asks, “Hey, can you come to the school and talk?” it’s like, well, you know, “Yeah, I guess so, let’s do it.” And uh, then you get stuff like that, that happens, and, and then the answer is, “Of course I will.” Right?

[00:38:33] Bob: We began this episode explaining that Detective Larson gives plenty of talks about sextortion now. He has a lot of important things to share with parents and kids. He often shows the dialogue we had him read earlier between Jordan and Dani.

[00:38:48] Bob: When you show this to groups, uh, what kind of reaction do you get?

[00:38:51] Lowell Larson: I mean, there’s, you can hear a pin drop in the room. It’s, uh, very chilling.

[00:38:57] Bob: Have you ever had somebody come up to you, you know, after a talk or, or maybe a day later or something and say, you know, “Hey, can we talk, this is happening to me,” or, or, “You know, someone, I know this has happened to.”

[00:39:07] Lowell Larson: Yeah, it hasn’t occurred following the presentation, but for years now, we’ve, you know, been public about this case and I’ve received numerous calls from even friends and family saying that they know of someone that this happened to. In fact, uh, one person I heard of, he went and talked to his parents and he says, you know, that thing that happened to that kid in Marquette, that’s happening to me right now. So that’s exactly the message we’re trying to put out there. Obviously we tell people don’t send anything out online that, you know, you wouldn’t want on the front page type of thing, and not to send naked images. But the reality is it’s gonna happen. And the message that we wanna send to people is, please don’t do it. But if it does occur, please tell your family or friends and reach out for help. Obviously don’t do what Jordan did, and you know, there’s programs, there’s things that we can do to help minimize this. In fact, there is a program run by the National Center for Missing and Exploited Children, we often call it NCMEC off of the acronym, and they have a program called Take It Down. And that program is NCMEC working with families of minors that have some type of sexually exploited image out on the internet, and they will do everything they can to try to remove that image from the internet. So that’s a tremendous resource that’s out there. We also tell everyone, if this happens to them, to stop communications with the person regardless of whatever threats they make. To disable their account, but don’t delete the account, because if we need evidence off the account, we don’t want them to delete it. To screenshot anything they can regarding any information, and to contact their local authorities along with, obviously, contacting friends or family.

[00:41:04] Bob: I can’t help but ask this question. This story is incredibly hard to hear, honestly. Uh, but you do this every day. How do you handle working with this, these sort of horrible crimes?

[00:41:16] Lowell Larson: I guess it comes down to someone’s gotta do it and uh, obviously we’re dealing with this, a horrible, horrible situation. We can’t bring Jordan back, but what can we do to go after the people that did this to him and then prevent it occurring to other people. So that’s where I find my strength.

[00:41:38] Bob: What does Detective Larson want people to learn from what happened to Jordan?

[00:41:43] Lowell Larson: Well, and I think it, it’s not just about sextortion, it’s just what your podcast is about scams. So just don’t think well. You know this happened to a young male and I’m not a young male or, or whatever. Think about the basics of the scam is that when you put a cell phone in your hand or utilize a computer, you are now potentially a victim to anyone in the world and you need to be very careful with what you do on that device. And you gotta be very careful of people contacting you on that device and verifying who you are talking to. Very common. What we see in these scams is it’s an unsolicited contact, which we have here. You have the rapport building, you have the pressure that’s put on someone, for instance, the, the, the scam of someone impersonating a law enforcement agency and saying, you have a warrant for their arrest and, and putting that stress on them about having to do something right now, or you’re gonna be arrested. You know, in Jordan’s case it was, you have to pay me right now, or you’re gonna be exposed. So the, so the themes are the same, and that pressure often causes people to do stuff that later on they look back and they’re like, oh, that was, why did I do that? You know, so you just gotta slow down. You gotta verify who you’re talking to. For instance, like I said earlier, with the law enforcement scam, if someone says they’re contacting you from the Marquette County Sheriff’s Office and they have a warrant for your arrest, you know, there’s nothing wrong with saying, well, I just need to verify who you are. I will contact the sheriff’s office and who should I ask for? And so you, you know, independently determine what the correct phone number is, and you call. And you try to verify that. So, you know, in, in Jordan’s case, it was the trusting of the, what someone told them or who, who they were, you know, trusting of a profile. And it’s, as you know, you can be anyone or anything on the internet. And so. 
There was that trust. And then the other cautionary tale I have is trust of the technology. For instance, in this case, they’re using a, a segment of Instagram that allows the picture to disappear after a certain amount of time, and it doesn’t allow for a screen capture. But it was simply that technology was simply overcome by taking another device and taking a picture of the original device. So if you know mm-hmm. Like, like Snapchat, if, if it doesn’t allow you to screenshot it. Because of the, the platform or however it’s set up, or if it sends a notification, if you screenshot it, if you just take another cell phone and put it over and take a picture of your cell phone with the image on it, you’ve just overcome that technology. So people are, you know, trusting of the technology and that you can’t do this, but you know, there’s always a way, there’s very often a way that you can defeat that technology that people have the trust in

[00:44:38] Bob: Detective Larson also includes in his presentations a piece of advice for parents that I think is just so wise, and perhaps for some of you, counterintuitive.

[00:44:49] Lowell Larson: I believe the most prevalent reason why kids don’t tell their parents something bad happens online is because they’re afraid to lose their internet privileges. So, you know, if, if you’re a parent that has the attitude, well, if my kid shows me that, that’s it, they’re not gonna have a cell phone. They’re not allowed to use this anymore. I look at it a different way. I look at it as thank your child for being so mature to bringing that information forward to you and that you know that you can trust them, because if something bad happens, you will be made aware of it. So I think that’s the attitude shift that we need to have as parents is that if something bad happens online, we don’t want the nuclear option of we’re taking away all their privileges. If they bring something forward to you, I think they need to be congratulated for having that maturity. Obviously, you know, we can have a courageous conversation of what got them, got them into that point, but we need to really harness that of that maturity that they put forward.

[00:45:50] Bob: Keeping that open line of communication between parents and kids is absolutely essential, and so is having these sometimes difficult conversations about sex. But it’s so much easier said than done. Jennifer has some wisdom about that.

[00:46:05] Jennifer Buta: I think that parents need to have open conversations about sextortion, just like they do with warnings about anything else to their children when they’re growing up. You know, if that’s about alcohol or substances or driving. I think that it needs to be a normalized conversation with your family that this can happen, and if it does, even if you fall into it, you need to seek help from an adult, because it spirals so quickly that it’s hard to know what to do. And even for the kids, you know, I think they need to know that they are the victim in this, no matter what. They are being pursued for the wrong reasons, and there is nothing worth taking your life over. There are so many people that want to help you. If you find yourself in this situation, there is light at the other side.

[00:46:59] Bob: It, it must be such, I mean, you as a parent, you have hard conversations all the time. Right? A lot. Especially when, once you have teenagers. But this conversation strikes me as particularly like really, really challenging. Do you have any suggestions as to how to even get started?

[00:47:13] Jennifer Buta: One of the things that I have suggested, and that, you know, schools have done, is talk about Jordan’s story or talk about another child’s story that this happened to. I think that’s a really good icebreaker to open it up and then go into, you know, what to do if it does happen, because it is a real thing, and that’s what parents need to realize. It’s a real thing and your child isn’t an exception from it. You know, Jordan was about to turn 18 years old. There was absolutely no reason for me not to trust him or to take his phone away to investigate what he was doing on social media. He was all set to go to college, and that made him the perfect target for this crime, because it made him vulnerable to being exposed and being judged when he had all of this going for him. So if it can happen to my son, it can happen to anyone. And having those conversations, that is our greatest asset right now to prevent things from happening to kids.


Creating a Cybersecurity Infrastructure to Reduce Third-Party and Privileged Internal Access Risks: A Global Study

Organizations’ sensitive and confidential data continue to be vulnerable to risks created by third parties and internal privileged users. A key takeaway from this new research is that too much privileged access and the difficulty in managing permissions are barriers to reducing these risks.

The purpose of this research, sponsored by Imprivata, is to learn important information about how well organizations are managing third-party remote access risks as well as risks posed by internal users with privileged access.

Ponemon Institute surveyed 1,942 IT and IT security practitioners in the US (733), UK (398), Germany (573) and Australia (238) who are familiar with their organizations’ approach to managing privileged access abuse, including processes and technologies used to secure third party and privileged end user access to their networks and corporate resources. Industries represented in this research are healthcare, public sector, industrial and manufacturing and financial services.

According to the findings, organizations spend an average of $88,000 annually to detect, respond to and recover from third-party data breaches and privileged access abuse. To prevent these abuses, the IT security team spends an average of 134 hours weekly analyzing and investigating the security of third-party and internal privileged access practices and allocates an average of $43 million, or 25 percent, of their annual $171 million IT security budget to reduce third-party and privileged internal access risks.

“Third-party access is necessary to conduct global business, but it is also one of the biggest security threats and organizations can no longer remain complacent,” said Joel Burleson-Davis, Senior Vice President of Worldwide Engineering, Cyber, at Imprivata. “While some progress has been made, organizations are still struggling to effectively implement the proper tools, resources, and elements of a strong third-party risk management strategy. Cybercriminals continue capitalizing on this weakness, using the lack of visibility and uncertainty across the third-party vendor ecosystem to their advantage.”

Both third-party/vendor and privileged internal user data breaches and cyberattacks are a security risk for organizations. According to the research, organizations need to prioritize equally reducing privileged access risks caused by third parties/vendors and internal users.  Some 47 percent of respondents say their organizations experienced a data breach or cyberattack that involved one of their third parties/vendors accessing their networks. Forty-four percent of respondents say these incidents involved internal users with privileged access.

Following is a summary of the research findings. 

To avoid security incidents, organizations need to assign the appropriate amount of access and no more. According to the research, granting too much privileged access to insiders causes more data breaches and cyberattacks than when given to third parties. Thirty-four percent of respondents say these incidents were the result of a third party/vendor having too much privileged access. However, 45 percent of respondents say it was caused by providing internal users with too much privileged access.

Third party security incidents are more likely to result in regulatory fines (50 percent vs. 30 percent of internal users) and lawsuits (41 percent vs. 24 percent of internal users). The primary consequences of a privileged user access data breach and cyberattack were the loss of business partners (51 percent of respondents), loss of reputation (44 percent of respondents) and employee turnover (43 percent of respondents).

Assigning the right amount of privileged access is critical to not only preventing security incidents but to ensuring third parties and employees have enough access to be productive.  An average of 20 third parties/vendors and an average of 20 employees have privileged access rights. The challenge for those managing permissions is to be able to determine the correct level of privileged access required without providing too much access. However, less than half (49 percent of respondents) say their organizations provide third parties/vendors with enough access and nothing more to perform their responsibilities. Forty-seven percent of respondents say their organizations provide employees with the appropriate amount of access to do their work.  

Organizations without an inventory of third parties/vendors say it is primarily due to a lack of resources. Complexity of multiple internal tech platforms is a barrier to having an inventory of privileged internal users. Fifty percent of respondents say their organizations do not have a comprehensive inventory of all third parties with access to their networks, due to the lack of resources to track third parties (45 percent of respondents), no centralized control over third-party relationships (37 percent of respondents) and complexity in third-party relationships (27 percent of respondents). Only 47 percent of respondents have a comprehensive inventory of all privileged internal users; barriers include the complexity of multiple internal tech platforms (53 percent of respondents), no centralized control over internal user privileges (44 percent of respondents) and lack of resources to track internal user privileges (41 percent of respondents).

Reducing third-party and privileged internal access risks can be overwhelming because of the many factors that complicate the process of managing permissions. Forty-four percent of respondents say managing third-party/vendor permissions can be overwhelming and a drain on internal resources. Almost half (48 percent of respondents) say managing internal privileged access is difficult. Managing permissions is difficult because of the complexity of insider user roles and third-party relationships and the number of access change requests due to personnel changes, mergers and acquisitions and organizational restructuring.

The lack of a consistent, enterprise-wide privileged user access management approach can lead to gaps in governance and oversight. Only 42 percent of respondents say their organizations have a strategy that ensures technologies, policies and practices are used consistently across the organization to reduce privileged access risks. Twenty-six percent of respondents say the strategy is not applied consistently and 19 percent of respondents say it is ad hoc or informal (Q5).

Artificial intelligence (AI) and machine learning (ML) can increase efficiency and decrease human error in efforts to reduce privileged access abuse. Forty percent of respondents say AI and ML are part of their strategy to reduce privileged access abuse.

The primary benefits are improved efficiency of efforts to manage third party and internal privilege access abuse (59 percent of respondents), reduced human error related to managing third-party and internal privileged access (51 percent of respondents) and increased support for the IT security team dedicated to managing third party and internal privileged access abuse (50 percent of respondents).

Preventing privileged access abuse can be overwhelming, requiring technologies and processes that enable organizations to effectively monitor and audit who has access to sensitive and confidential data. However, only 46 percent of respondents monitor and review provisioning systems and only 41 percent of respondents say their organizations conduct regular privileged user training programs. Instead, 57 percent of respondents say their organizations depend upon thorough background checks before issuing privileged credentials and 55 percent of respondents say their organizations conduct manual oversight by supervisors and managers.

The primary barrier to granting and enforcing privileged user access rights is the inability to apply access policy controls at the point of change request (67 percent of respondents). Other barriers are the length of time it takes to grant access to privileged users (not meeting the organization’s SLA with the business), the expense of monitoring and controlling all privileged users and the staggered granting of access to privileged users (each 61 percent of respondents).

Monitoring third-party and vendor access can reduce third party and vendor access risk. However, only 41 percent of respondents are monitoring third-party and vendor access to the network. Reasons for not monitoring third party and vendor access to sensitive and confidential information are confidence in the third party’s ability to secure information (59 percent of respondents), the business reputation of the third party (45 percent of respondents) and the lack of internal resources to check or verify (44 percent of respondents).

Recommendations to mitigate third-party and privileged internal access risks

  • Implement the principle of least privilege. Grant users only the minimum access required to perform their duties. Regularly review and audit access and conduct periodic reviews to identify and revoke unnecessary permissions.
  • Maintain an inventory of third parties and internal users with privileged access. Without such inventories many organizations don’t have a unified view of privileged user access across the enterprise.
  • Leverage access management tools such as Vendor Privileged Access Management (VPAM) and Privileged Access Management (PAM) to secure and manage an organization’s privileged access to information resources and ensure each user has minimal, controlled access, reducing the chance of a third-party vendor breach and providing organizations with control and visibility. According to the research, 55 percent of respondents with a VPAM say it is highly effective and 52 percent of respondents with a PAM say it is highly effective.
  • Educate users on security best practices. Train employees about the importance of data protection and responsible access management. Only 41 percent of respondents say their organizations conduct regular privileged user training programs.
  • Ensure there are sufficient resources, in-house expertise and in-house technologies to improve the efficiency and security of the access governance process. Specifically, to keep pace with the number of access change requests and to reduce burdensome processes for third parties and business users requesting access.
  • Automate the processes involved in granting privileged user access and reviewing and certifying privileged user access to meet growing requests for access changes.
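The first recommendation above, least privilege, can be sketched in a few lines of code: grant each role only its minimal permission set, check every request against it, and periodically flag granted-but-unused rights for revocation. This is a hypothetical illustration only; the role names, permission sets and function names are invented for the example and are not drawn from the research.

```python
# Hypothetical least-privilege sketch: each role gets only the permissions
# it needs, every request is checked against that minimal set, and a
# periodic review flags unused rights. All names here are invented.

ROLE_PERMISSIONS = {
    "third_party_vendor": {"read_inventory"},  # vendor sees only what it must
    "helpdesk": {"read_inventory", "reset_password"},
    "db_admin": {"read_inventory", "read_records", "modify_schema"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Grant access only if the permission is in the role's minimal set."""
    return permission in ROLE_PERMISSIONS.get(role, set())

def audit_unused(role: str, used: set[str]) -> set[str]:
    """Periodic review: permissions granted but never exercised are
    candidates for revocation."""
    return ROLE_PERMISSIONS.get(role, set()) - used

# A vendor can read inventory but cannot touch customer records.
assert is_allowed("third_party_vendor", "read_inventory")
assert not is_allowed("third_party_vendor", "read_records")

# If the helpdesk never resets passwords, flag that right for review.
print(audit_unused("helpdesk", {"read_inventory"}))  # {'reset_password'}
```

The regular-review step matters as much as the initial grant: the survey's point is that permissions accumulate over time, so unexercised rights should be revoked rather than left in place.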

To read the full findings of this report, visit Imprivata’s website. 

HP mandated 15-minute wait time for callers — why that was good news for criminals

Bob Sullivan

I have long said that poor customer service is a massive cybersecurity and financial vulnerability. I realize that line doesn’t always click with people right away, so I’m devoted to sharing examples with you (like this dramatic story).

Hewlett-Packard just offered up a slam dunk.

The company recently deployed a mandatory 15-minute wait for customers calling its support line. It’s hard to believe, but here’s The Register’s story about this. That story cites a staff memo which says, “The wait time for each customer is set to 15 minutes – notice the expected wait time is mentioned only in the beginning of the call.” HP’s goal was to force folks to solve problems on their own, supposedly with the help of HP’s website. But…we’ll get to that boobytrap in a moment.

On the one hand, this just sounds like everyone’s 21st Century nightmare. On the other hand, you’ve got to give HP credit for saying the quiet part out loud. I’m sure many of you suspect other companies put similar speed bumps into their phone call wait times. (I’ll share one example. Way back in 2000, the FTC fined the nation’s credit bureaus for putting a million callers on hold — after the feds required the bureaus to have a toll-free number for folks trying to fix mistakes in their credit reports.)

Here’s the problem with HP’s “don’t call us, we won’t call you,” plan: When shoving consumers online, HP can be shoving them right into the arms of criminals.

As HP printer owners know, there are constant issues with keeping printers functional. Often, due to software updates or other Acts of God, printer drivers go missing. Searching the Web for HP printer drivers can be like walking into the cantina in the original Star Wars.

Here’s a warning that a friend recently sent to me while looking for an HP driver of a slightly-older printer. Users looking for drivers often can’t find them on HP’s site, so they then end up on seedy websites — which sometimes do have legitimate versions of old drivers, and sometimes offer up software that infects users with malware.  After a helpful user points someone to a place that might have the right driver, this warning is attached:

So your printer doesn’t work, you call HP, the company does all it can to redirect you to the Web, and then…well, you probably decide you have no choice but to buy a new printer.

HP reversed course on the 15-minute wait time after the negative publicity, according to Tom’s Hardware. But this is still quite a learning moment.

Of course, it’s annoying to have people call every time their printer stops working.  But this approach of sending users out into the wilderness to find their own answers puts those users at risk — and as we know, we’re all connected, so we are all put at risk.

This scenario is not unlike a problem that plagues the travel industry.  Criminals make fake look-alike websites to hijack desperate travelers looking for a solution when their flight is canceled. It’s sometimes called “malvertising.” Then, fliers call a fake number, and give their money or personal information to a criminal on the other end of the line. If it weren’t so hard to get customer service, fliers wouldn’t be driven to rogue websites in the first place.

Poor customer service is our greatest cybersecurity vulnerability.  Hacking a company is hard. If you are a criminal, it’s much easier to get frustrated consumers to do the hard part for you. Companies should invest in customer service as part of their overall security budgets.

Google tweaks Android with smart scam-fighting update

Bob Sullivan

Google has added a novel scam-fighting technique to the beta version of its newest Android operating system, and the company deserves kudos for that.  Essentially, a software tweak will prevent users from installing (“sideloading”) rogue apps  during a phone call — adding friction to a tactic criminals often try.   It’s unclear how effective this small change might be,  but it’s great Google engineers are thinking this way.

Android Authority has all the details.

As many of you know, one of my jobs is to host The Perfect Scam podcast for AARP.  Every week I interview the victim of a horrible crime, and tell their entire story from soup-to-nuts. I’ve done more than 100 of these episodes now, and I’m incredibly proud of the work we’ve done, and very grateful to AARP for its ongoing investment to help protect people from fraud. These podcasts also create a valuable library of criminal tactics and techniques, along with a realistic view of victims’ plight.

Many emotional, societal, and financial factors contribute to making people vulnerable to romance scams, crypto scams, impersonation scams, etc. It’s easy to imagine you and your loved ones would never be the victim of such a crime, but you’re dangerously wrong. Any of us can be victimized under the right circumstances. A massive, global, and very profitable industry that’s fueled by human trafficking is now devoted to creating those “right circumstances,” and soon, artificial intelligence will be a large part of their playbook.

I often point out that every one of my stories involves touchpoints with multiple technology companies which enable these crimes.  The victim is first contacted by Facebook messenger via an affiliate group; the conversation escalates on WhatsApp; the fake customer service number ranks high on Google; the money is sent through cryptocurrency.  You get the idea.  Tech companies can and must do more to uncover criminal tactics and at least not make things so easy for the bad guys.  Some firms don’t have a great track record of this. Meta is very, very slow to take down impersonation accounts that are used for ongoing crimes, for example.

So I’m glad to throw some flowers at Google today. One technique a criminal can use is to call a victim, engage them in conversation (“We’re from your Internet provider and your modem has been hacked!”) and then walk them through sideloading a malicious app on their phone.  Google’s Android smartphone software (which I prefer) has always been more dangerous than Apple’s software because Android is a more open system. So disabling the sideloading of apps during a phone call is a good step; it’s hard to imagine a need for that capability.  Naturally, a criminal could tell a victim to hang up, install the software, and then call back. But as Android Central put it, adding this speed bump will certainly help a little, and it might help a lot. AARP research has shown that any conversation with a third party can stop a scam in its tracks, so the hang-up-and-call-back friction might create a moment for such conversations. It won’t hurt, anyway.

I’d love to see more engineers step up and add speedbumps that are designed to frustrate criminals.  If you have any ideas, I’m all ears. And I’ve got more flowers to throw!


Ransomware risk up, but some companies think they’re not a target

Despite advances in cybersecurity technologies, including artificial intelligence (AI), organizations continue to find it difficult to detect and prevent ransomware attacks.

Research conducted by the Ponemon Institute and sponsored by Illumio, Inc. has found that 88 percent of organizations experienced one or more ransomware attacks, with the most recent incidents occurring anywhere from within the past three months to more than 12 months ago. According to the research, based on the hours and practitioners involved, organizations spent an average of $146,685 to contain and remediate the largest ransomware attack they experienced. In 2021, the average cost was slightly higher at $168,910.


An on-demand webinar with many more details on the research is available for free at Illumio’s website.


The purpose of this research is to learn the extent of the ransomware threats facing organizations and the steps being taken to mitigate the risks and their consequences. Ponemon Institute surveyed 2,547 IT and cybersecurity practitioners in the U.S. (578), U.K. (424), Germany (516), France (471), Australia (256) and Japan (302) who are responsible for addressing ransomware attacks.

In addition to the 2024 findings, the report also presents research from a ransomware study Ponemon Institute conducted in 2021 and published in 2022. A comparison of the studies reveals changes in ransomware risks and the practices used to reduce the threats in the past three years. Since 2021, while the perception that their organization is a target of ransomware has declined from 68 percent to 54 percent of respondents, the consequences of a ransomware attack, such as downtime, loss of significant revenue and brand damage, have increased.

“Ransomware is more pervasive and impactful than ever, with more organizations forced to suspend operations or experiencing major business failure because of attacks,” said Trevor Dearing, Director of Critical Infrastructure at Illumio. “Organizations need operational resilience and controls like microsegmentation that stop attackers from reaching critical systems. By containing attacks at the point of entry, organizations can protect critical systems and data, and save millions in downtime, lost business, and reputational damage.”

Since 2021, organizations have become more vulnerable to the risks of ransomware because of AI-generated attacks and unrestricted lateral movement within their networks.

AI-generated attacks refer to cyber threats that leverage AI to deceive and compromise individuals, organizations and systems. These attacks are becoming increasingly sophisticated, imitating the language and style of legitimate emails to trick users into letting the ransomware in. Other attacks use AI to improve the ransomware’s performance or automate some aspects of the attack path. Fifty-one percent of respondents say their organizations are highly or extremely concerned that their organizations may experience such an attack.

Lateral movement refers to methods cyber criminals use to explore a compromised network to find vulnerabilities, escalate access privileges and reach their ultimate target. It is called lateral movement because of the way the attacker moves sideways from device to device, a hallmark of most successful ransomware attacks.
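Conceptually, lateral movement can be modeled as a path search through a network: starting from one compromised device, the attacker hops to any reachable neighbor until the ultimate target is within reach. The sketch below is a hypothetical illustration of that idea (the host names and connections are invented), not a description of any real attack tooling.

```python
from collections import deque

# Hypothetical network map: which hosts can reach which (names invented).
REACHABLE = {
    "phished-laptop": ["file-server", "print-server"],
    "file-server": ["db-server"],
    "print-server": [],
    "db-server": [],
}

def lateral_path(start, target):
    """Breadth-first search for a hop-by-hop path from an initially
    compromised host to the attacker's ultimate target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in REACHABLE.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route: segmentation would make the target unreachable

print(lateral_path("phished-laptop", "db-server"))
# ['phished-laptop', 'file-server', 'db-server']
```

Removing the file-server to db-server connection makes `lateral_path` return `None`, which is precisely the containment effect that controls such as segmentation aim for: the initial compromise still happens, but the attacker cannot reach the critical system.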

According to the findings, since 2021 unpatched systems have become increasingly vulnerable to being exploited by attackers moving laterally. Fifty-two percent of respondents in this year’s research say unpatched systems are targeted for lateral movement, an increase from 33 percent of respondents in 2021. Targeting cached credentials increased from 42 percent of respondents in 2021 to 48 percent of respondents in 2024.

The following findings highlight organizations’ efforts to mitigate ransomware attacks.

Organizations are slow to adopt AI to combat ransomware. Although AI is considered helpful for reducing ransomware attacks by increasing overall SecOps efficiency and detecting ransomware activity within the environment, only 42 percent of respondents say their organizations have specifically adopted AI to help combat ransomware.

Since 2021 more organizations believe their security controls will protect them from ransomware attacks. Confidence in mitigating a variety of ransomware risks has increased significantly, especially with respect to their current security controls (32 percent of respondents in 2021 vs. 54 percent of respondents in 2024). Multi-factor authentication and automated patching/updates are the top two technologies used to combat ransomware, 37 percent and 36 percent of respondents, respectively. Only 27 percent of respondents say their organizations use segmentation/microsegmentation.

Since 2021, more organizations are assigning responsibility for stopping ransomware attacks to one organizational function. Ninety-two percent of respondents say one person or function is most responsible for addressing the threat of ransomware. The most responsible are the CISO (21 percent of respondents) or the CIO/CTO (21 percent of respondents). In 2021, 82 percent of respondents said one person or function was most responsible.

To prevent ransomware attacks, organizations should secure the cloud and endpoints. Forty-nine percent of respondents say the cloud is most vulnerable in a ransomware attack followed by the endpoint, at 45 percent of respondents. Desktops/laptops continue to be the devices most often compromised by criminals.

Phishing continues to be the most common way ransomware is delivered. Phishing and Remote Desktop Protocol (RDP) compromises continue to be the primary methods used to unleash ransomware. Ransomware is typically spread through emails that contain links to malicious web pages or attachments. Infection can also occur when a user visits an infected website and malware is downloaded without the user’s knowledge. RDP is one of the main protocols used for remote desktop sessions.

Insider negligence can delay an effective response to ransomware and increase the negative consequences. To improve prevention and reduce the time it takes to respond, organizations should address negligent user behavior and the lack of security awareness. Training programs should focus on how users can make better decisions about the content they receive through email, what they view or click in social media, how they access the web and other common practices. Because no cybersecurity control can prevent every attack, containment and response strategies are equally critical.

Forty-four percent of respondents say their organizations are not prepared to quickly identify and contain a ransomware attack. This indicates the importance of having incident response plans, skilled responders and key controls to stop an attack from spreading.

Ransomware attacks can reduce revenues due to downtime, lost customers and brand damage. Since 2021, organizations that had to shut down to recover from the attack increased from 45 percent to 58 percent in 2024. Respondents that report a loss of significant revenue increased from 22 percent of respondents to 40 percent of respondents.

Since 2021, more organizations are reporting that brand damage was a consequence of the ransomware attack (an increase from 21 percent to 35 percent of respondents). The findings also reveal that recovering from damage to brand can cost organizations the most following a ransomware attack. In 2021, the highest cost was due to legal and regulatory actions.

Part 2. Key findings

In this section of the report, we provide an analysis of the research. Whenever possible, we present the findings from the 2021 study to show three-year trends in ransomware threats and risks.  The complete audited findings are presented in the Appendix of this report. We have organized the report according to the following topics.

  • The ransomware security gap
  • Anatomy of a ransomware attack
  • The response to ransomware demands
  • Country differences

The ransomware security gap

Fewer organizations pay the ransom. Since 2021, more respondents say their organizations will never pay the ransom even if it means losing data, an increase from 43 percent of respondents to 51 percent of respondents. In an October 2, 2019 Public Service Announcement (PSA), the FBI urges victims not to pay the ransom. According to the PSA, the payment of the ransom does not guarantee that the exfiltrated data will be returned, as shown in this research. The FBI also warns that paying might embolden attackers to target other victims.

Another trend is the decline in the belief that their organizations are targeted (54 percent of respondents in 2024 vs. 68 percent of respondents in 2021). A little more than half of respondents continue to say prevention of ransomware is a high priority.

To read the rest of this study, visit Illumio’s website. 

 

Is the Great Atlantic Data Firewall going up after all?

Bob Sullivan

Are European companies on the brink of another potentially crippling data border dispute with the U.S.? I’ve spent a lot of time in Ireland recently, so I’m acutely sensitive to the possibility.

As tech companies here try to position themselves for Trump 2.0, downstream impacts from the new president’s flurry of executive orders and sackings are quickly being digested. But one issue stands out: the ability of US firms to operate with EU data is, once again, threatened. At worst, the issue could cause EU schools and businesses to stop working immediately with US cloud providers like Google and Amazon, with potentially catastrophic results.

As history shows, that worst-case scenario is likely to be avoided, but yet again, the tenuous nature of international privacy agreements between the U.S. and its largest trading partner has been laid bare.

To review, E.U. citizens enjoy fundamental privacy rights not granted to U.S. citizens, in part because Congress has yet to pass a federal privacy law. Back in 1998, the EU mandated that data on its citizens cannot be exported outside the bloc unless it is treated with EU-level care and its citizens are guaranteed EU-level privacy protections. This seemingly impossible stalemate has never really been permanently resolved, but it has been papered over several times by “agreements.” The first such deal was called “Safe Harbor,” back in 2000. It was declared invalid by an EU court in 2015 and replaced by “Privacy Shield,” which was itself declared invalid in 2020. That was replaced two years later by the Transatlantic Data Privacy Framework, which stands today. Maybe.

This week, new President Donald Trump required all Democratic members of an organization called the Privacy and Civil Liberties Oversight Board to resign, a not-unexpected step. But that leaves the board with only one member, rendering it essentially non-functional. That’s important because the Transatlantic Data Privacy Framework rests on the ability of this “independent” civil liberties board to deal with complaints by EU citizens about data mistreatment. Legal scholars worry the board’s demise could mean the demise of this latest data-sharing agreement.

In reality, the “court” established to hear such EU citizens’ disputes has yet to adjudicate a single case, according to one of its lawyers. So the Great Atlantic Data Firewall is likely not as imminent as some suggest; we’ve been on this brink many times before.

However, the executive order which President Biden signed initiating the entire Transatlantic Data Privacy Framework is due to be reviewed by the Trump administration within 45 days and it’s easy to see that baby being tossed with the bath water.  Then, real questions about a potential data-sharing wall arising over the Atlantic Ocean could be raised.

Perhaps, as Max Schrems suggests, it’s time to find a more permanent solution to this thorny problem? The best way to understand all that’s going on is to head over to NOYB.eu and read Schrems’ thoughts on the situation.

 

Certificate Lifecycle Management, PKI and Software Supply Chain Security in Financial Services

The purpose of this research is to determine how effective the financial services industry is in managing the certificate lifecycle, PKI and securing the software supply chain. As shown in this research, 62 percent of respondents say their organizations experienced one or more outages or security incidents due to an issue with digital certificates that resulted in diminished service quality or availability. Forty-eight percent of respondents say their organizations have been impacted by one or more software supply chain attacks or exploits in the past year. Some of the adverse consequences included putting customers at risk due to a system compromise and prolonged disruption to operations.

Sponsored by DigiCert, Ponemon Institute surveyed 2,546 IT and IT security practitioners in the United States (507 respondents), the United Kingdom (295 respondents), Canada (272 respondents), DACH (Germany and Switzerland, 363 respondents), France (361 respondents), Australia (237 respondents), Japan (252 respondents) and Singapore (259 respondents). Forty-eight percent of respondents work in banking and 52 percent are in the insurance industry. All respondents are familiar with their organization’s PKI and involved in certificate lifecycle management (CLM). Ninety-six percent of respondents either have responsibility (47 percent) or share responsibility with others (49 percent) in setting and/or implementing their organizations’ software supply chain security strategy.

Conducting inventories to identify every certificate is critical for crypto-agility and becoming quantum-ready. A key takeaway from the research is that more than half of respondents (51 percent) say their organizations are not taking an inventory to identify every certificate within the organization. Similarly, 51 percent of respondents do not know how many digital certificates, including private root or privately signed, their organizations have. Thirty-six percent of respondents agree that the most important feature of a CLM solution is the continuous discovery of public and internal certificates. Another 36 percent of respondents rank lifecycle automation using standard and proprietary interfaces among the top two most important features.

The following research findings describe the current state of CLM, PKI and software supply chain security.

  • Most organizations are in the dark about their certificate inventory and the kind of certificates they have. As discussed above, a key takeaway from the research is that more than half of respondents (51 percent) say their organizations are not taking an inventory to identify every certificate within the organization. Similarly, 51 percent of respondents do not know how many digital certificates, including private root or privately signed, their organizations have. Without this visibility, organizations are at risk because of unsecured certificates within their organization.
  • A CLM solution must support multiple CAs to allow for redundancy and to accommodate the decentralized nature of PKI within enterprises. Thirty-three percent of respondents say support for multiple CAs is one of the most important features when choosing a CLM solution.
  • Certificate outages are common mostly due to expirations or revocations, which can be solved by a CLM solution. Sixty-two percent of respondents say their organizations experienced one or more outages due to an issue with digital certificates. These outages were mainly due to expired certificates, revoked certificates and misconfigured certificates. These risks can be mitigated with an automated CLM system which streamlines the process of CLM through a variety of automated workflows done within a single platform.
  • The most important feature of PKI solutions is the ability to consolidate management of public CA and private CA certificates. According to respondents, the most important feature when choosing a PKI is a single vendor for public CA and private CA certificates (46 percent of respondents). Also important is scalability and performance (46 percent of respondents). The PKI technologies most often used are service provider/cloud provider managed private PKI (44 percent of respondents), internal private PKI (42 percent of respondents) and managed PKI service (e.g. SaaS PKI or PKI as a service) (29 percent of respondents).
  • Digital certificates, also known as public key certificates, are used to cryptographically verify the ownership of a public key and to share public keys for encryption and authentication. According to the research, the most important use case for digital certificates is user authentication for WiFi, VPN or other network access (59 percent of respondents). Authenticating cloud workloads (55 percent of respondents) indicates progress in modernizing digital certificate security. Another important use case is digital signatures for electronic documents (54 percent of respondents).
  • Software supply chain attacks are growing, primarily from security issues with open source software. Forty-eight percent of respondents say their organizations have been impacted by one or more software supply chain attacks in the past year. Most of these attacks were caused by malware, vulnerabilities or other threats in open source software. The two top consequences were customers at risk due to a system compromise and prolonged disruption to operations.
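One building block of the automated CLM workflows described above is flagging certificates before they expire. The sketch below is my own illustration, not DigiCert's product or any specific CLM tool; the helper names and the sample inventory are assumptions. It operates on the certificate dictionaries that Python's `ssl.SSLSocket.getpeercert()` returns.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(cert: dict, now: datetime) -> int:
    """Days remaining before a certificate (in getpeercert() form) expires."""
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc)
    return (expires - now).days

def flag_expiring(inventory: dict[str, dict], now: datetime,
                  threshold_days: int = 30) -> list[str]:
    """Hostnames whose certificates expire within threshold_days, i.e. the
    ones an automated workflow would renew or escalate first."""
    return sorted(host for host, cert in inventory.items()
                  if days_until_expiry(cert, now) <= threshold_days)

# Hypothetical certificate inventory keyed by hostname.
inventory = {
    "api.example.com": {"notAfter": "Jan 20 00:00:00 2026 GMT"},
    "www.example.com": {"notAfter": "Dec 31 00:00:00 2026 GMT"},
}
now = datetime(2026, 1, 1, tzinfo=timezone.utc)
print(flag_expiring(inventory, now))  # ['api.example.com']
```

A production CLM system would also discover certificates continuously and handle revocations and misconfigurations, but expiry checks against a complete inventory are where the outage statistics cited above suggest many organizations fall short.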

To read the full findings of this report, visit Digicert’s website.

The Second Annual Global Study on the Growing API Security Crisis

Application Programming Interfaces (APIs) benefit organizations by connecting systems and data, driving innovation in the creation of new products and services and enabling personalized products and services. As organizations realize the benefits of APIs, they are aware of how vulnerable APIs are to exploitation. In fact, 61 percent of participants in this research believe the API risk will significantly increase (21 percent) or increase (40 percent) in the next 12 to 24 months.

As defined in the research, an API is a set of defined rules that enables different applications to communicate with each other. Organizations are increasingly using APIs to connect services and to transfer data, including sensitive medical, financial and personal data. As a result, the API attack surface has grown dramatically.

Sponsored by Traceable, the purpose of this research is to understand organizations’ awareness and approach to reducing API security risks. In this year’s study, Ponemon Institute surveyed 1,548 IT and IT security practitioners in the United States (649), the United Kingdom (451) and EMEA (448) who are knowledgeable about their organizations’ approach to API security.

Why APIs continue to be vulnerable to exploitation. This year, 54 percent of respondents say APIs are a security risk because they expand the attack surface across all layers of the technology stack; APIs are now considered organizations’ largest attack surface. Because APIs expand the attack surface across all vectors, it is possible to exploit a single API and obtain access to sensitive data without having to defeat the other solutions in the security stack. Before APIs, hackers had to learn how to attack each technology they were trying to get through, mastering different attacks for different technologies at each layer of the stack.

Some 53 percent of respondents say traditional security solutions are not effective in distinguishing legitimate from fraudulent activity at the API layer. The increasing number and complexity of APIs makes it difficult to track how many APIs exist, where they are located and what they are doing. As a result, 55 percent of respondents this year (vs. 56 percent last year) say the volume of APIs makes it difficult to prevent attacks.

The following findings illustrate the growing API crisis and what steps should be taken to improve API security.

  • Organizations have experienced multiple data breaches caused by API exploitation in the past two years, resulting in financial and IP losses. These data breaches are likely to occur because, on average, only 38 percent of APIs are continually tested for vulnerabilities. As a result, organizations are confident in preventing only an average of 24 percent of attacks. To prevent API compromises, APIs should be monitored for risky traffic, performance and errors.
  • Targeted DDoS attacks continue to be the primary root cause of the data breaches caused by an API exploitation. Another root cause is fraud, abuse and misuse. When asked to rate the seriousness of fraud attacks, almost half of respondents (47 percent) say these attacks are very or highly serious. 
  • Organizations have a very difficult time discovering and inventorying all APIs and as a result they do not know the extent of risks to their APIs. Many APIs are being created and updated so organizations can quickly lose control of the numerous types of APIs used and provided. Once all APIs are discovered it is important to have an inventory that provides visibility into the nature and behavior of those APIs.
  • According to the research, the areas that are most challenging to securing APIs and should be made a focus of any security strategy are preventing API sprawl, stopping the growth in API security vulnerabilities and prioritizing APIs for remediation.
  • Third-party APIs expose organizations to cybersecurity risks. In this year’s research, an average of 131 third parties are connected to organizations’ APIs. Recommendations to mitigate third-party API risk include creating an inventory of third-party APIs, performing risk assessments and due diligence and establishing ongoing monitoring and testing. Third-party APIs should also be continuously analyzed for misconfiguration and vulnerabilities.
  • To prevent API exploitations, organizations need to make identifying API endpoints that handle sensitive data without appropriate authentication more of a priority. An API endpoint is a specific location within an API that accepts requests and sends back responses. It’s a way for different systems and applications to communicate with each other, by sending and receiving information and instructions via the endpoint.
  • Bad bots impact the security of APIs. A bot is a software program that operates on the Internet and performs repetitive tasks. While some bot traffic is from good bots, bad bots can have a huge negative impact on APIs. Fifty-three percent of respondents say their organizations experienced one or more bot attacks involving APIs. The security solutions most often used to reduce the risk from bot attacks are web application firewalls, content delivery network deployment and active traffic monitoring on an API endpoint.
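The prioritization described above, identifying API endpoints that handle sensitive data without appropriate authentication, can be sketched as a scan over an API inventory. This is illustrative only; the `Endpoint` fields and sample data are my assumptions, not part of the study or of any particular API security product.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Endpoint:
    path: str
    handles_sensitive_data: bool
    auth_scheme: Optional[str]  # e.g. "oauth2", "mtls", or None

def unprotected_sensitive(endpoints: list[Endpoint]) -> list[str]:
    """Paths that handle sensitive data but require no authentication;
    these belong at the top of the remediation queue."""
    return [e.path for e in endpoints
            if e.handles_sensitive_data and e.auth_scheme is None]

inventory = [
    Endpoint("/v1/patients", True, None),    # sensitive, unauthenticated
    Endpoint("/v1/payments", True, "oauth2"),
    Endpoint("/v1/status", False, None),     # public health check, fine
]
print(unprotected_sensitive(inventory))  # ['/v1/patients']
```

The hard part in practice, per the findings above, is building the inventory itself: discovery has to run continuously so that newly created or updated endpoints appear in this kind of scan at all.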

Generative AI and API security 

  • Generative artificial intelligence is being adopted by many organizations for its many benefits, such as in business intelligence, content development and coding. In this research, 67 percent of respondents say their organizations have adopted generative AI (21 percent), are in the process of adopting it (30 percent) or plan to adopt it in the next year (16 percent). As organizations embrace generative AI, they should also be aware of the security risks that negatively affect APIs.
  • The top concerns about how generative AI applications affect API security are the increased attack surface due to additional API integrations, unauthorized access to sensitive data, potential data leakage through API calls to generative AI services and difficulty in monitoring and analyzing traffic to and from generative AI APIs.
  • The main challenges in securing APIs used by generative AI applications are the rapid pace of generative AI technology development, lack of in-house expertise in generative AI and API security and the lack of established best practices for securing generative AI API.
  • The top priorities for securing APIs used by generative AI applications are real-time monitoring and analysis of traffic to and from generative AI APIs, implementing strong authentication and authorization for generative AI API calls and comprehensive discovery and cataloging of generative AI API integrations.
  • Organizations invest in API security for generative AI-based applications and APIs to identify and block sensitive data flows to generative AI APIs and safeguard critical data assets, to improve the efficiency of technologies and staff, and to enable real-time monitoring and analysis of traffic to and from LLM APIs so emerging threats can be quickly detected and addressed.

To read key findings in this report, visit the Traceable.com website.

 

Facebook acknowledges it’s in a global fight to stop scams, and might not be winning

Bob Sullivan

Facebook publicly acknowledged recently that it’s engaged in a massive struggle with online crime gangs who abuse the service and steal from consumers worldwide. In a blog post, the firm said it had removed two million accounts just this year that had been linked to crime gangs, and was fighting on fronts across the world, including places like Myanmar, Laos, Cambodia, the United Arab Emirates and the Philippines. But in a nod to how difficult the fight is, the firm acknowledged it needs help.

“We know that these are extremely persistent and well-resourced criminal organizations working to evolve their tactics and evade detection, including by law enforcement,” the firm wrote. “We’ve seen them operate across borders with little deterrence and across many internet platforms in an attempt to ensure that any one company or country has only a narrow view into the full picture of scam operations. This makes collaboration within industries and countries even more critical.”

I’ve been writing about the size and scope of scam operations for years, but lately, I’ve tried to ring the alarm bell about just how massive these crime gangs have become (see “They’re finding dead bodies outside call centers”). If you haven’t heard about a tragic victim in your circle of friends recently, I’m afraid you will soon. There will be millions of victims and perhaps $1 trillion in losses by the time we count them all, and behind each one, you’ll find a shattered life.

Facebook’s post focused on a crime that is commonly called “pig butchering” — a term I shun and will not use again because it is so demeaning to victims. Often, the crime involves the long-term seduction of a victim, followed by an eventual invitation to invest in a made-up asset like cryptocurrency.  The scams are so elaborate that they include real-sounding firms, with real-looking account statements. They can stretch well into a year or two.  Behind the scenes, an army of criminals works together to keep up the relationship and to manufacture these realistic elements. As I’ve described elsewhere, hundreds of thousands of these criminals are themselves victims, conscripted into scam compounds via some form of human trafficking.

Many victims don’t find out what’s going on until they’ve sent much of their retirement savings to the crime gang.

“Today, for the first time, we are sharing our approach to countering the cross-border criminal organizations behind forced-labor scam compounds under our Dangerous Organizations and Individuals (DOI) and safety policies,” Facebook said. “We hope that sharing our insights will help inform our industry’s defenses so we can collectively help protect people from criminal scammers.”

It’s a great development that Facebook is sharing its behind-the-scenes work to combat this crime. But the firm can and must do more. Its private message service is often a critical tool criminals use to ensnare victims; its platform full of “friendly” strangers in affinity groups is essential for victim grooming. It would be unfair to say Facebook is to blame for these crimes; but I also know no one works there who wants to go home at night thinking the tool they’ve built is being used to ruin thousands of lives.

How could Facebook do more? One version of the scam begins with the hijacking of a legitimate account that already enjoys trust relationships.  In one typical fact pattern, a good-looking soldier’s account is stolen, and then used to flirt with users.  The pictures and service records are often a powerful asset for criminals trying to seduce victims.

Victims who’ve had their accounts hijacked say it can take months to recover their accounts, or to even get the service to take down their profiles being used for scams. As I’ve written before, when a victim tells Facebook that an account is actively being used to steal from its members, it’s hard to understand why the firm would be slow to investigate.  Poor customer service is our most serious cyber vulnerability.

In another blog post from last month, Facebook said it has begun testing better ways to restore hijacked accounts.  That’s good, too. But I’m here to tell you the new method the firm says it’s using — uploaded video selfies — has been in use for at least two years.  You might remember my experience using it. So, what’s the holdup? If we are in the middle of an international conflict with crime gangs stealing hundreds of millions of dollars, you’d think such a tool would be farther along by now.

Still, I take the publication of today’s post — in which Facebook acknowledges the problem — as a very positive first step.  I’d hope other tech companies will follow suit, and will also cooperate with the firm’s ideas around information sharing.  Meta, Facebook’s parent, is uniquely positioned to stop online crime gangs; its ample resources should be a match even for these massive crime gangs.