Optimizing What Matters Most: The State of Mission-Critical Work

In this study, mission-critical work refers to the tasks, systems or processes within an organization that are essential to its operations and survival. If they fail or are disrupted, operations could be significantly impaired or even brought to a complete halt. It is the most critical work, which must be done without interruption to maintain functionality. The difference between mission-critical and business-critical systems is that mission-critical systems are vital to the core mission or primary functions of the organization, while business-critical systems are crucial to business operations and support the organization’s core processes.

In addition to the impact on the sustainability of organizations, mission-critical failures and disruptions can have a ripple effect with significant economic consequences for government and industry sectors. This is particularly the case when failures and disruptions involve critical infrastructure, which encompasses systems and assets essential for the functioning of a society and its economy. These failures can disrupt supply chains, impact business productivity, and lead to economic losses.

To reduce disruptions and failures, organizations need to assess their ability to manage the risks to tasks, systems and/or processes as well as to protect and secure sensitive and confidential data in mission-critical workflows. However, as shown in this research, confidence in understanding the risk, security and privacy vulnerabilities of mission-critical workflows is low.

Respondents were asked to rate their confidence in the privacy and security of their organization’s mission-critical workflows, and in their ability to understand those workflows’ risk profile, on a scale of 1 = no confidence to 10 = highly confident. Only 47 percent of respondents say they are very or highly confident in their understanding of the risk profile of mission-critical workflows. Slightly more than half of respondents (52 percent) are very or highly confident in the privacy and security of mission-critical workflows.

The importance of optimizing mission-critical workflows

In the past 12 months, 64 percent of organizations report they experienced an average of six disruptions or failures in executing mission-critical workflows. Respondents say cyberattacks are the number one reason mission-critical failures and disruptions occur. To prevent these incidents, 61 percent of organizations in this research believe a strong security posture is critical.

The disruption or failure of mission-critical workflows can result in the loss of high-value information assets. This is followed by data center downtime, which not only prevents mission-critical work from being completed but can have severe financial consequences. Sixty-three percent of respondents say the number one metric used to measure the cost of a disruption or failure is the cost of downtime of critical operations. According to a study conducted by Ponemon Institute in 2020, the average cost of a single data center outage was approximately $1 million. Forty-six percent of respondents say their organization’s survivability was affected by a complete halt to operations.

A strong security posture and knowledgeable mission-critical staff are the most important factors in preventing mission-critical disruptions and failures. Organizations need to secure mission-critical workflows to avoid disruptions or failures (61 percent of respondents), supported by a knowledgeable mission-critical staff (57 percent of respondents). Also important is an enterprise-wide incident response plan (51 percent of respondents).

Few organizations have risk mitigation strategies in place as part of their mission-critical collaboration tools. According to 47 percent of respondents, their organizations use mission-critical collaboration tools. However, only 39 percent of respondents have risk mitigation strategies in place. Of these respondents, 59 percent say they have backup procedures to prevent data loss, and 54 percent say they have contingency plans to handle unexpected events.

Cyberattacks and system glitches were the primary causes of the disruption or failure. To reduce the likelihood of a disruption or failure, organizations need to ensure the security of their mission-critical workflows. Fifty percent of respondents cite cyberattacks as the cause of disruption or failure followed by 49 percent who say it was a system glitch. Sixty-one percent say a strong security posture is the most important step to preventing disruptions and failures.

Measuring the financial consequences of a disruption or failure can help organizations prioritize the resources needed to secure mission-critical workflows. Fifty-three percent of respondents say their organizations measured the cost of the disruption or failure in executing mission-critical workflows. The metric most often used is the cost of downtime of critical operations (63 percent of respondents), which is also the number two consequence of a disruption or failure. Other metrics are the cost to recover the organization’s reputation (51 percent of respondents) and the cost to detect, identify and remediate the incident (50 percent of respondents).

Organizations should consider increasing the role of IT and IT security functions in assessing cyber risks that threaten workflow reliability. Despite the threat of a cyberattack targeting mission-critical workflows, only 16 percent of respondents say the CISO and only 10 percent of respondents say the CIO are most responsible for executing mission-critical workflows securely. The function most responsible is the business unit leader, according to 26 percent of respondents.

A dedicated team supports the optimization of mission-critical workflows. Fifty-six percent of respondents say their organizations have a team dedicated to managing mission-critical workflows. The 44 percent of organizations without a dedicated team say it is very or highly difficult to accomplish the goals of mission-critical workflows. According to the research presented in this report, a dedicated team gives organizations the following advantages.

  • Increased effectiveness in prioritizing critical communications among team members
  • More likely to be able to prevent disruptions and failures in executing mission-critical workflows
  • More likely to measure the costs of a disruption or failure to improve the execution of mission-critical workflows
  • Improved efficiency of mission-critical workflow management and effectiveness in streamlining mission-critical workflows
  • More likely to use mission-critical collaboration tools

Mission-critical workflows require setting clear objectives, understanding the requirements, mapping workflows and managing risks. The two activities most often used to manage mission-critical workflows are analyzing current workflow processes (47 percent of respondents) and training mission-critical employees (44 percent of respondents). Only 34 percent of respondents say their organizations are very or highly effective in prioritizing critical communication among team members.

Mission-critical workflows can be overly complex and inefficient. Taking steps to automate repetitive tasks where possible and to regularly review and update workflows are only used by 38 percent and 36 percent of organizations, respectively. Only 46 percent of respondents say their organizations are very or highly effective in streamlining mission-critical workflows to improve their efficiency and very or highly efficient in managing mission-critical workflows.

Ineffective communication about the execution of mission-critical workflows can put organizations’ critical operations at risk. Sixty percent of respondents say it is the lack of real-time information sharing and 58 percent of respondents say it is the lack of secure information sharing that are barriers to effectively executing mission-critical workflows.

Enterprise-wide incident response plans should be implemented to reduce the time to respond, contain and remediate security incidents that compromise mission-critical workflows. Fifty-one percent of respondents say an enterprise-wide incident response plan is critical to the prevention of disruption and failures. Fifty-nine percent of organizations measure effectiveness based on how quickly compromises to mission-critical workflows are addressed. Organizations also measure their ability to prevent and detect cyberattacks against mission-critical workflows.

Organizations are adopting AI to improve the management of mission-critical workflows. However, organizations need to consider the potential AI security risks to mission-critical workflows. Fifty-one percent of respondents say their organizations have deployed AI. Most often AI is used to automate repetitive tasks (60 percent) and secure data used and data harvested by Large Language Models (LLMs). The top AI security risks according to respondents are potential leakage or theft of confidential and sensitive data (53 percent) and potential backdoor attacks on their AI infrastructure such as sabotage or malicious code injection (48 percent).

Mission-critical collaboration tools are considered very or highly effective, but adoption is slow. Only 47 percent of respondents use mission-critical collaboration tools. However, 54 percent of respondents say these tools are very or highly effective in making workflows efficient with minimum disruption to critical operations. The features considered most important are data encryption (61 percent of respondents), data loss prevention (56 percent of respondents) and the ability to securely enable real-time communication between teams (56 percent of respondents).

To read the rest of this report, including key findings, please visit Mattermost.com

US (finally) issues warning about crypto ATMs

Bob Sullivan

Finally, crypto ATMs are getting a bit of the attention that they deserve.

As host of AARP’s The Perfect Scam podcast, I talk to crime victims every week.  A few years ago, a majority had their money stolen via bogus gift card transactions. Today, it feels like almost every person is the victim of a cryptocurrency scam, and many have their money stolen through crypto ATMs.

I’m sure you’ve seen these curious machines in convenience stores and gas stations, which are also known as convertible virtual currency (CVC) kiosks.  Put cash in, and you can send or receive crypto around the world.

Crypto ATMs, in theory, democratize crypto. Someone who wouldn’t feel comfortable buying crypto online can do so in a familiar way, using a machine that works just like the ones we’ve used to get cash for many years.  Perhaps you won’t be surprised to hear that crypto ATMs are a bad deal.  Set aside crypto volatility and high transaction fees for a moment: No one who feels uncomfortable opening an online crypto account should be buying or transmitting crypto. Period.

And yet, these crypto ATMs are sprouting up like weeds, at a time when old-fashioned ATMs are disappearing. There were roughly 4,000 crypto ATMs in 2019, and there were more than 37,000 by January of this year.

I know that because the U.S. Treasury’s Financial Crimes Enforcement Network — FinCEN — published a notice Aug. 4 warning financial institutions about crypto ATMs and their connection to crime. The agency also said many of these devices are being put into service without registering as money service businesses with FinCEN, and their operators are sometimes failing to report suspicious activity.

As I mentioned, there really isn’t a use case for these fast-proliferating devices. Well, there’s one. When a criminal has a victim confused and manipulated, the fastest way to steal their money is to persuade them to drive to the nearest crypto ATM and feed the machine with $100 bills. I’ve talked to countless victims who’ve told me harrowing, tragic tales of crouching in the dark corner of a gas station, shoving money into one of these machines, terrified they are being watched. In fact, they aren’t. Employees are told not to get involved. So victims drive away, their money stolen in the fastest way possible. The transfer is nearly instant, faster than a wire transfer, and irrevocable.

That means it’s the perfect gadget for criminals like the Jalisco Cartel in Mexico to steal cash from Americans. Particularly elderly Americans, FinCEN says. According to FTC data, people aged 60 and over were more than three times as likely as younger adults to report a loss using a crypto ATM.

“These kiosks have increasingly facilitated elder fraud, especially among tech/customer supports scams, government impersonation, confidence/romance scams, emergency/person-in-need scams, and lottery/sweepstakes scams,” FinCEN said. And the losses are huge. “In 2024, the FBI’s IC3 received more than 10,956 complaints reporting the use of CVC kiosks, with reported victim losses of approximately $246.7 million. This represents a 99 percent increase in the number of complaints and a 31 percent increase in reported victim losses from 2023.”

In other words, we have a five-alarm fire on our hands.  One that’s been blazing in broad daylight for at least a year and yet…every week, I continue to interview victims who crouched near a crypto ATM for days on end, stuffing bills into these machines, thinking they were doing the right thing.

Banks and kiosk operators should do much more. The current daily limits on transactions aren’t low enough; victims are just instructed to drive all over town, or make daily deposits for weeks on end, so criminals can steal hundreds of thousands of dollars this way. Regulators should do more, too. If the majority of transactions flowing through a certain kiosk can be traced to fraud, the machine should be removed immediately. It’s not impossible. The UK recently ordered all crypto ATMs shut down.

Tech can enhance our lives; it can also be weaponized. And when it is, we shouldn’t stand idly by and act as if we are powerless to stop the pain it is causing our most vulnerable people.

The State of Identity and Access Management (IAM) Maturity

Larry Ponemon

Identity Management Maturity (IDM) refers to the extent to which an organization effectively manages user identities and access across its systems and applications. It’s a measure of how well an organization is implementing and managing Identity and Access Management (IAM) practices. A mature IDM program ensures that only authorized users have access to the resources they need, enhancing security, reducing risks and improving overall efficiency.

Most organizations remain in the early to mid-stages of Identity and Access Management (IAM) maturity, leaving them vulnerable to identity-based threats. This new study of 626 IT professionals by the Ponemon Institute, sponsored by GuidePoint Security, highlights that despite growing awareness of insider threats and identity breaches, IAM is under-prioritized compared to other IT security investments. All participants in this research are involved in their organizations’ IAM programs.

Key Insights:

  • IAM is underfunded and underdeveloped.

Only 50 percent of organizations rate their IAM tools as very or highly effective, and even fewer (44 percent) express high confidence in their ability to prevent identity-based incidents. According to 47 percent of organizations, investments in IAM technologies trail behind other security investment priorities.

  • Manual processes are stalling progress.

 Many organizations still rely on spreadsheets, scripts and other manual efforts for tasks like access reviews, deprovisioning and privileged access management—introducing risk and inefficiencies.

  • High performers show the way forward.

High performers in this research are those organizations that self-report that their IAM technologies and investments are highly effective (23 percent). As a result, they report fewer security incidents and stronger identity controls. These organizations also lead the other organizations represented in this research in adopting biometric authentication, identity threat detection and integrated governance platforms.

  • Technology and expertise gaps persist.

 A lack of tools, skilled personnel and resources is preventing broader progress. Many IAM implementations are driven by user experience goals rather than security or compliance needs.

Bottom Line:

Achieving IAM maturity requires a strategic shift—moving from reactive, manual processes to integrated, automated identity security. Organizations that treat IAM as foundational to cybersecurity, not just IT operations, are best positioned to reduce risk, streamline access and build trust in a dynamic threat landscape.

Part 2. Introduction: Including a Peek at High Performer Trends

The purpose of an Identity and Access Management (IAM) program is to manage user identities and access across systems and applications. A mature IAM program ensures that only authorized users have access to the resources they need, enhancing security, reducing risks and improving overall efficiency.

This survey, sponsored by GuidePoint Security, was designed to understand how effective organizations are in achieving IAM maturity and which tools and practices are critical components of their identity and access management programs. A key takeaway from the research is that organizations’ continued dependency on manual processes as part of their IAM programs is a barrier to achieving maturity and reducing insider threats. Such a lack of maturity can lead to data breaches and security incidents caused by negligent or malicious insiders.

Recent examples of such events include former Tesla employees in 2023 who leaked sensitive data about 75,000 current and former employees to a foreign media outlet. In August 2022, Microsoft experienced an insider data breach where employees inadvertently shared login credentials for GitHub infrastructure, potentially exposing Azure servers and other internal systems to attackers.

According to the research, investments in IT security technologies are prioritized over IAM technologies. Without the necessary investments in IAM, organizations lack confidence in their ability to prevent identity-based security incidents. Respondents were asked to rate the effectiveness of their organizations’ tools and investments in combating modern identity threats on a scale from 1 = not effective to 10 = highly effective; their confidence in their ability to prevent identity-based security incidents from 1 = not confident to 10 = highly confident; and the priority of investing in IAM technologies compared to other security technologies from 1 = not a priority to 10 = high priority.

Only half (50 percent of respondents) believe their tools and investments are very effective and only 44 percent of respondents are very or highly confident in their ability to prevent identity-based security incidents. Less than half of the organizations (47 percent of respondents) say investing in IAM technologies compared to other IT security technologies is a high priority.

Best practices in achieving a strong identity security posture

To identify best practices in achieving a strong identity security posture, we analyzed the responses of the 23 percent of IT professionals who rated the effectiveness of their tools and investments in combating modern identity threats as highly effective (9+ on a scale from 1 = low effectiveness to 10 = high effectiveness). We refer to these respondents and their organizations as high performers. Seventy-seven percent of respondents rated their effectiveness between 1 and 8; we refer to this group as “other” in the report.

Organizations that have more effective tools and investments to combat modern identity threats are less likely to experience an identity-based security incident. Only 39 percent of high performers had an identity-based security incident.

High performers are outpacing other organizations in the adoption of automation and advanced identity security technologies.  

  • Sixty-four percent of high performers vs. 37 percent of other respondents have adopted biometric authentication.
  • Fifty-nine percent of high performers vs. 34 percent of other respondents use automated mechanisms that check for compromised passwords.
  • Fifty-six percent of high performers vs. 23 percent of other respondents have a dedicated PAM platform.
  • Fifty-three percent of high performers vs. 31 percent of other respondents use IAM platforms and/or processes to manage machine, service and other non-human accounts or identities.

 High performers are significantly more likely to assign privileged access to a primary account (55 percent vs. 30 percent). Only 25 percent of high performers vs. 33 percent of other respondents use manual or scripted processes to temporarily assign privileged accounts.

 High performers are leading in the adoption of ITDR, ISPM and IGA platforms. 

  • Thirty-seven percent of high performers vs. 12 percent of other respondents have adopted ITDR.
  • Thirty-five percent of high performers vs. 15 percent of other respondents have adopted ISPM.
  • Thirty-one percent of high performers vs. 9 percent of other respondents have adopted IGA platforms.

Following are highlights from organizations represented in this research.

Identity verification solutions are systems that confirm the authenticity of a person’s identity, typically in digital contexts such as online transactions or applications. These solutions use various methods to verify a person’s identity and ensure only authorized users have access to the resources they need.

Few organizations use identity verification solutions and services to confirm a person’s claimed identity. Only 39 percent of respondents say their organizations use identity verification solutions and services. If they do use identity verification solutions and services, they are mainly for employee and contractor onboarding (37 percent of respondents). Thirty-three percent of respondents say it is part of customer registration and vetting, and 30 percent of respondents say it is used for both employee/contractor and customer.

Reliance on manual processes stalls organizations’ ability to achieve maturity. Less than half of organizations (47 percent) have an automated mechanism that checks for compromised passwords. If they do automate checks for compromised passwords, 37 percent of respondents say it is for both customer and workforce accounts, 34 percent only automate checks for customer accounts, and 29 percent only automate checks for workforce accounts.
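The study does not describe how respondents implement these automated checks, but one widely used approach is the k-anonymity range lookup popularized by the Have I Been Pwned Pwned Passwords service: only the first five characters of the password’s SHA-1 hash are sent to the service, and the match against known-compromised hashes happens locally. A minimal Python sketch of the two local steps (the HTTPS fetch of `https://api.pwnedpasswords.com/range/<prefix>` is omitted for brevity):

```python
import hashlib


def sha1_prefix_suffix(password: str) -> tuple[str, str]:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    that is sent to the range API and the 35-character suffix that is
    matched locally, so the full hash never leaves the organization."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]


def suffix_found(range_response: str, suffix: str) -> bool:
    """Scan a range-API response body (lines of '<suffix>:<count>')
    for the locally computed hash suffix."""
    for line in range_response.splitlines():
        candidate, _, _count = line.partition(":")
        if candidate.strip() == suffix:
            return True
    return False
```

Because the service only ever sees a five-character hash prefix, this kind of check can be automated for both customer and workforce accounts without disclosing the passwords themselves.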

 To close the identity security gap, organizations need technologies, in-house expertise and resources. However, as discussed previously, more resources are allocated to investments in IT security. Fifty-four percent of respondents say there is a lack of technologies. Fifty-two percent say there is a lack of in-house expertise, and 45 percent say it is a lack of resources.

Security is not a priority when making IAM investment decisions. Despite many high-profile examples of insider security breaches, 45 percent of respondents say the number one priority for investing in IAM is to improve user experience. Only 34 percent of respondents say investments are prioritized based on the increase in the number of regulations or industry mandates, and 31 percent cite the constant turnover of employees, contractors, consultants and partners.

To achieve greater maturity, organizations need to improve the ability of IAM platforms to authenticate and authorize user identities and access rights. Respondents were asked to rate the effectiveness of their IAM platform in the user access provisioning lifecycle from onboarding through termination, and its effectiveness in authenticating and authorizing, on a scale of 1 = not effective to 10 = highly effective. Only 46 percent of respondents say their IAM platform is very or highly effective for authentication and authorization. Fifty percent of respondents rate the effectiveness of their IAM platforms’ user access provisioning lifecycle from onboarding through termination as very or highly effective.

Policies and processes are rarely integrated with IAM platforms in the management of machine, service and other non-human accounts or identities. Forty-four percent of respondents say their IAM platform and/or processes are used to manage machine, service and other non-human accounts or identities. Thirty-nine percent of respondents say their organizations are in the adoption stage of using their IAM platform and/or processes to manage machine, service and other non-human accounts. Of these 83 percent of respondents (44 percent + 39 percent), 39 percent say the use of the IAM platform to manage machine, service and other non-human accounts or identities is ad hoc. Only 28 percent of these respondents say management is governed with policy and/or processes and integrated with the IAM platform.

IAM platforms and/or processes are used to perform periodic access review, attestation and certification of user accounts and entitlements, but the work is mostly manual. While most organizations conduct periodic access review, attestation and certification of user accounts and entitlements, 34 percent of respondents say it is manual with spreadsheets, and 36 percent say their organizations use custom in-house built workflows. Only 17 percent of respondents say it is executed through the IAM identity governance platform. Only 41 percent of respondents say access to internal applications and resources is granted based on users’ roles and needs in order to streamline onboarding, offboarding and access management. An average of 38 percent of internal applications are managed by their organizations’ IAM platforms.

Deprovisioning non-human identities, also known as non-human identity management (NHIM), focuses on removing or disabling access for digital entities like service accounts, APIs, and IoT devices when they are no longer needed. This process is crucial for security, as it helps prevent the misuse of credentials by automated systems that could lead to data breaches or system compromises.

Deprovisioning user access is mostly manual. Forty-one percent of respondents say their organizations include non-human identities in deprovisioning user access. Of those respondents, 40 percent say NHI deprovisioning is mostly a manual process. Twenty-seven percent of respondents say the process is automated with a custom script and 26 percent say it is automated with a SaaS tool or third-party solution.

Few organizations are integrating privileged access with other IAM systems, and if they do, the integration is not effective. Forty-two percent of respondents say PAM is run on a dedicated platform. Twenty-seven percent say privileged access is integrated with other IAM systems, and 31 percent of respondents say privileged access is managed manually. Of these 27 percent of respondents, only 45 percent rate the effectiveness of their organizations’ IAM platforms for PAM as very or highly effective.

To read the full findings of this report, visit GuidePoint’s website.

Minnesota assassin used data brokers as a deadly weapon

I’ve called Amy Boyer the first person killed by the Internet…with only a hint of a stretch in my assertion.  She was stalked and murdered by someone who tracked her down using a data broker …in 1999.  I told her story in a documentary podcast called “No Place to Hide” published five years ago, on the 20th anniversary of her death.

The dark events that took place in Minnesota last month show we’ve learned just about nothing, a solid 25 years after Amy’s unnecessary death.

Bob Sullivan

When alleged assassin Vance Boelter left his home on June 13, he had a list of 45 state politicians in his car, and enough ammunition to kill dozens of them. He also had a notebook full of their personal information, including home addresses. That notebook also had detailed information on 11 different Internet data brokers — how long their free trials were, how he could get home addresses. Most of them have names you’ve probably seen in online ads — I’ve redacted them in the image above to avoid giving them any unnecessary publicity.

Boelter stalked his victims digitally. He ultimately killed Rep. Melissa Hortman and her husband, and shot a second state legislator and his wife, before his rampage ended. The horrific attack could have been even worse — and it was fueled, in part, by data brokers.

As stories of political violence mount in the U.S., a fresh spotlight is being shined on security for public officials — politicians, judges, government bureaucrats, even corporate executives. But America has failed for decades to take even basic steps to protect our privacy, failing again and again to pass a federal privacy law, even failing to do much about the hundreds of data brokers that profit off of selling our personal information.

What was the role of data brokers in this horrific crime and what more could be done to protect elected officials — protect all of us — going forward? I host a podcast for Duke University called Debugger, and in this recent episode, I talk with David Hoffman, a professor at Duke University and director of the Duke Initiative for Science and Society.

Would Boelter have found his victims without data brokers? Perhaps, perhaps not. We’ll never know.  But why do we seem to be making things so easy for stalkers, for murderers? Why do we pretend to be helpless bystanders when there are simple steps our society can take to make things harder for stalkers?

————Partial transcript————–

(lightly edited for clarity)

David Hoffman: We’ve known for quite a while that people have been actually been getting killed because of the accessibility of data from data brokers. These people search websites and people search data brokers are really the bottom feeders of the Internet economy. What we haven’t seen is something of such high profile as this particular instance, and it’s my hope that it’s going to serve as a catalyst for us to take some of the very reasonable policy actions that we could do to address this and make sure something like this doesn’t happen in the future.

Bob: It’s not just elected officials or CEOs of companies that are at risk for this, right? Who else might be at risk from digital stalking and from the information that can be gleaned from a data broker?

David Hoffman: I think some of the cases that we’ve seen have been, for instance, victims of domestic violence and stalking. But it can be just about anyone who, for one reason or another, has someone to fear … who can find out who they are, where they live, and other personal information about them and their family.

Bob: I know Duke has done some research on data brokers and their impact on national security and other issues. What kind of research have you done and what have you found?

David Hoffman: We’ve actually led a program on data broker research for six years now, and what we have done is shown the value that people are providing for the data so that… it actually has economic value, people are paying for it, and that they are creating the kinds of lists and selling them that are horrific.

Let me give you an example. We have found that there are entities out there that are collecting lists for sale of personal information about veterans and members of the military. We have found that there are people out there creating lists about people who are in the early stages of Alzheimer’s and dementia, and those people are selling those lists to scam artists, particularly because those people are at risk.

So we have actually done research where we’ve gone out and we have purchased this data from data brokers, and then we have analyzed what we have received and we see a tremendous amount of sensitive information, including information about sexual orientation of individuals and healthcare information.

Bob: Are there national security risks as well from the sale of information at data brokers?

David Hoffman: You can imagine for the list that I described for members of the military and veterans …. not just information about them, but understanding information about their families, the issues that there could be for blackmail and for people trying to compromise people’s security clearances and get access to information.

Bob: I know there’s long been this perception that, you know, “You have no privacy, get over it.” There’s this helpless feeling many of us have that our information is out there. It’s hard for me to imagine sitting here…How could I make sure no one could find my home address, for example? Is there anything that Congress could do or policymakers could do to make this situation any better?

David Hoffman: Absolutely. I think there’s a number of things that people could do. So first of all, we have to take a look at where these entities are getting a lot of this information.

You know, for decades we have had public records that actually store people’s addresses, but before those records were digitized and made available on the internet, you would have to go to a clerk’s office in an individual county or city, have to know what you’re looking for, and be able to file an access request to get that information.

Now what we have done all across the United States is provide ready and open access to all of that information so that these nuts can access it en masse, process it and then further sell it. We need privacy laws that include the protection of public records, because we include personal information when we purchase real estate, make small business filings, or become involved in a court case. Yes, those produce public records, but we never intended those to be readily available to everyone at a moment’s notice on their computer, or to automated bots that will go and collect them and then be able to provide that information to anybody willing to pay a relatively small amount of money, usually under $20.

Bob: And in many cases free. One of the chilling elements of the affidavit I read … in the Minnesota case … he’s got a list of…how long the free trials are, what information you can get from each site… so you often don’t have to pay anything to get this kind of information, right?

David Hoffman: That’s absolutely right. And this just demonstrates once again, how important it should be that we have a comprehensive privacy law in the United States like they have in almost every other developed country around the world that would provide … protection for this kind of information. This isn’t something that’s going to chill innovation. This is not the kind of innovation that we need…people to actually create sort of spy-on-your-neighbor websites where you can learn all of this about anyone at that point in time.

We can still have innovation. We can still drive social progress with the use of data while providing much stronger protections for it.

The 2025 Study on the State of AI in Cybersecurity

INTRODUCTION: The overall threat from China and other adversaries has only increased over time, accelerated and exacerbated by technological innovation and their access to AI. In an article from January 2025, Jen Easterly, former Director of the Cybersecurity and Infrastructure Security Agency (CISA), lays out some of the risks to US critical infrastructure. CISA defines critical infrastructure as encompassing 16 sectors, from utilities to government agencies to banks and the entire IT industry. Outages happen consistently across all sectors and vulnerabilities are everywhere. So, the key for all cyber programs is continuing to improve upon early detection and early response.

After the CrowdStrike outage in 2024 that affected thousands of hospitals, airports and businesses worldwide, Easterly said, “We are building resilience into our networks and our systems so that we can withstand a significant disruption or at least drive down the recovery time to be able to provide services, which is why I thought the CrowdStrike incident — which was a terrible incident — was a useful exercise, like a dress rehearsal, for what China may want to do to us in some way and how we react if something like that happens. We have to be able to respond very rapidly and recover very rapidly in a world where [an issue] is not reversible.”

What will organizations do to combat persistent threats and cyberattacks from increasingly sophisticated adversaries? A goal of this MixMode-sponsored research is to provide information on how industry can leverage AI in its cybersecurity plans to detect attacks earlier (be predictive) and improve its ability to recover from attacks more quickly.


Organizations are in a race to adopt artificial intelligence (AI) technologies to strengthen their ability to stop the constant threats from cyber criminals. This is the second annual study sponsored by MixMode on this topic. The purpose of this research is to understand how organizations are leveraging AI to effectively detect and respond to cyberattacks.

Ponemon Institute surveyed 685 US IT and IT security practitioners in organizations that have adopted AI in some form. These respondents are familiar with their organization’s use of AI for cybersecurity and have responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.

Since last year’s study, organizations have not made progress in their ability to integrate AI security technologies with legacy systems and streamline their security architecture to increase AI’s value. More respondents believe it is difficult to integrate AI-based security technologies with legacy systems, an increase from 65 percent to 70 percent of respondents. Sixty-seven percent of respondents, a slight increase from 64 percent of respondents, say their organizations need to simplify and streamline their security architecture to obtain maximum value from AI. Most organizations continue to use AI to detect attacks across the cloud, on-premises and hybrid environments.

The following research findings reveal the benefits and challenges of AI.

How organizations are using AI to improve their security posture

In just one year since the research was first conducted, organizations are reporting that their security posture has significantly improved because of AI.  The biggest changes are improving the ability to prioritize threats and vulnerabilities (an increase from 50 percent to 56 percent of respondents), increasing the efficiency of the SOC team (from 43 percent to 51 percent) and increasing the speed of analyzing threats (from 36 percent to 43 percent).

Since 2024, the maturity of AI programs has increased. Fifty-three percent of organizations have reached the full adoption stage (31 percent of respondents) or mature stage (22 percent of respondents). This is an increase from 2024, when 47 percent of respondents said they had reached the full adoption stage (29 percent of respondents) or mature stage (18 percent of respondents).

AI-based security technologies increase productivity and job satisfaction. Seventy percent of respondents say AI increases the productivity of IT security personnel, an increase from 66 percent in 2024. Fifty-one percent of respondents say AI improves the efficiency of junior analysts so that senior analysts can focus on critical threats and strategic projects. Sixty-nine percent of respondents say since the adoption of AI, job satisfaction has improved because of the elimination of tedious tasks, an increase from 64 percent.

Forty-four percent of respondents are using AI-powered cybersecurity tools or solutions. By leveraging advanced algorithms and machine learning techniques, AI-powered systems analyze vast amounts of data, identify patterns and adapt their behavior to improve performance over time.

Forty-three percent of respondents are using pre-emptive security tools to stay ahead of cybercriminals. Pre-emptive security tools apply AI-based data analysis to cybersecurity so organizations can anticipate and prevent future attacks. The benefits include the ability to preemptively deter threats and minimize damages, prioritize tasks effectively and address the most important business risks first. Pre-emptive security data can guide response teams and offer insights into an attack’s objectives, potential targets and more. The result is continuous improvement to ensure more accurate forecasts and reduce the costs associated with handling attacks.

Respondents say pre-emptive security is used to identify patterns that signal impending threats (60 percent), assess risks to identify emerging threats and their potential impact (57 percent) and harness vast amounts of online metadata from various sources as an input to predictive analytics (52 percent).

Pre-emptive security will decrease the ability of cybercriminals to direct targeted attacks. Fifty-two percent of respondents in organizations that use pre-emptive security say that without it cybercriminals will become more successful at directing targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. Forty-nine percent say investments are being made in pre-emptive AI to stop AI-driven cybercrimes.

Fifty-eight percent of respondents say their SOCs use AI technologies. The primary benefit of an AI-powered SOC is that alerts are resolved faster, according to 57 percent of respondents. In addition to faster resolution of alerts, 55 percent of respondents say it frees up analyst bandwidth to focus on urgent incidents and strategic projects. Fifty percent of respondents say it applies real-time intelligence to identify patterns and detect emerging threats.

An AI-powered SOC is effective in reducing threats. Human analysts are effective as the final line of defense in the AI-powered SOC. Fifty-seven percent of respondents say AI in the SOC is very or highly effective in reducing threats and 50 percent of respondents say their human analysts are very or highly effective as the final line of defense in the AI-powered SOC.

More organizations are creating one unified approach to managing both AI security and privacy risks, an increase from 37 percent to 52 percent of respondents. In addition, 58 percent of respondents say their organizations identify vulnerabilities and what can be done to eliminate them.

The barriers and challenges to maximizing the value from AI

While an insufficient budget to invest in AI technologies continues to be the primary governance challenge, more organizations say an increase in internal expertise is needed to validate vendors’ claims. The lack of internal expertise to validate vendors’ claims increased significantly from 53 percent to 59 percent of respondents. One of the key takeaways from the research is that 63 percent of respondents say the decision to invest in AI technologies is based on the extensiveness of the vendors’ expertise.

As the number of cyberattacks increases, especially malicious insider incidents, organizations lack confidence in their ability to prevent risks and threats. Fifty-one percent of respondents say their organizations had at least one cyberattack in the past 12 months, an increase from 45 percent of respondents in 2024.

Only 42 percent say their organizations are very or highly effective in mitigating risks, vulnerabilities and attacks across the enterprise. The attacks that increased since 2024 are malicious insiders (53 percent vs. 45 percent), compromised/stolen devices (40 percent vs. 35 percent) and credential theft (49 percent vs. 53 percent). The primary types of attacks in 2024 and 2025 are phishing/social engineering and web-based attacks.

The effectiveness of AI technologies is diminished by interoperability issues and an increasingly heavy reliance on legacy IT environments. The barriers to AI-based security technologies’ effectiveness are interoperability issues (63 percent, an increase from 60 percent of respondents), the inability to apply AI-based controls that span the entire enterprise (59 percent vs. 61 percent of respondents) and the inability to create a unified view of AI users across the enterprise (56 percent vs. 58 percent of respondents). The most significant trend is the growing reliance on legacy IT environments, an increase from 36 percent to 45 percent of respondents.

Complexity challenges the preparedness of cybersecurity teams to work with AI-powered tools. Only 42 percent of respondents say their cybersecurity teams are highly prepared to work with AI-powered tools. Fifty-five percent of respondents say AI-powered solutions are highly complex.

AI continues to make it difficult to comply with privacy and security mandates and to safeguard confidential and personal data in AI. Forty-eight percent of respondents say it is highly difficult to achieve compliance and 53 percent of respondents say it is highly difficult to safeguard confidential and personal data in AI.

To read key findings and the rest of this report, visit MixMode’s website.

Trying to avoid a heart attack, Instagram attacked me with ads

This confronted me in the waiting room.

Bob Sullivan

It’s almost like they want you to be sick…

I went for a relatively routine cardiovascular screening the other day, and I’ve never hated social media more. Before I even walked into the exam room, I was pounded by ads telling me the test I was taking wasn’t good enough (But there is this other test…); the drugs I might be taking wouldn’t work (But here’s a rare supplement pill …); and the things I was doing wouldn’t make me healthier (you know what will? Dropping all social media).

The ad cluster above is a tiny fraction of the digital attack I suffered in the days around my doctor visit. I didn’t capture them all; I was busy trying to get healthy. My smartphone did not help. The digital surveillance I was subjected to just made me furious, and I know my case is tame. There are endless stories about cruel baby ads that follow women around after a miscarriage, for example.

The thing is, I’m a technology reporter who specializes in privacy. I’m a visiting scholar at Duke University, where I work on research around privacy. I use social media for work; I believe I have to. And I’ve tried my best to defend my accounts against just this kind of attack without completely locking them down (at which point, they are no longer social media). I just checked — I have unchecked “share my information with our partners” and every other such option wherever it can be found. I’ve even banned weight loss ads from my feed, which is a fascinating stand-alone option. So is Meta’s tsk-tsking of my prudish choices – “No, don’t make my ads more relevant…your ads will use less of your information and be more likely to apply to other people.”

Sounds like a dream.

Here’s why I am so repulsed by this. Meta/Facebook/Instagram knew exactly where I was and what I was doing.  But it didn’t just show me relevant ads.  It knew I’d be vulnerable. It knew I might get bad news. And then it targeted me with crazy, untested products that would probably make me sicker. It’s vile and it needs to stop.  This isn’t capitalism and it isn’t free speech. It’s using technology to attack people when they are nearly defenseless.

How did Instagram know what I was doing? Well, in theory, it could have just made an educated guess based on my age. More likely it had noted some Internet searches I’d made recently, perhaps had access to some kind of “de-identified” information in my email or my calendar tool, and perhaps it made some deductions from my smartphone’s location information. I don’t know. What I do know is that I spent a good long while in the waiting room clicking through menus trying to get those ads to disappear. And I failed.

Yes, I didn’t have to look at Instagram while in the waiting room, but I would have been the only one in the waiting room not staring at a smartphone. What else do you do when waiting for a health test? Those rooms are already crammed with anxious energy unlike anything humans ever experience elsewhere — souls of all ages facing everything from routine tests to a final exam which quite literally might mean life or death to them. It is no place to practice surveillance capitalism.

Targeted health advertising is dangerous, social media companies. Turning it off should be easy.  Congress, hurry up and make them do that with a federal privacy law.

 

2025 Cybersecurity Threat and Risk Management Report

As the intensity and frequency of cybersecurity incidents increase, companies are mobilizing their defenses against potential threats. As shown in this research, to improve their security posture organizations are allocating more resources to arm their IT security teams with artificial intelligence (AI), machine learning (ML), Secure Access Service Edge (SASE) or Security Service Edge (SSE) and Security Orchestration Automation and Response (SOAR).

The purpose of this research, sponsored by Optiv but conducted independently by Ponemon Institute, is to learn the extent of the cybersecurity threats facing organizations and the steps being taken to manage the risks of potential data breaches and cyberattacks. Ponemon Institute surveyed 620 IT and IT cybersecurity practitioners in the U.S. who are knowledgeable about their organizations’ approach to threat and risk management practices.

Most organizations are increasing their cybersecurity budgets. In this year’s study, 79 percent of respondents say their organizations are making changes to their cybersecurity budget. Of these respondents, 71 percent say cybersecurity budgets are increasing, with the average budget at $24 million. Only 29 percent of respondents say budgets will decrease. The budget increase correlates with the heightened volume of threat vectors with 66 percent of respondents reporting cybersecurity incidents have increased significantly or increased in the past year, up from 61 percent in 2024.

Cybersecurity budgets are most often based on assessments of threats and risks facing the organization. The use of risk and threat assessments increased significantly from 53 percent of respondents in 2024 to 67 percent of respondents in 2025. Effectiveness in reducing security incidents is the second most often used factor to decide on budget allocation (56 percent of respondents in 2025 and 61 percent of respondents in 2024).

Best practices in achieving a strong cybersecurity posture

Fifty-eight percent of respondents rate their organizations as highly effective in reducing cybersecurity threats. These respondents are referred to as high performers and their best practices are shown below.

High performers are more likely to have a Cybersecurity Incident Response Plan (CSIRP) that is applied consistently across the entire enterprise. Sixty percent of high performers have an enterprise-wide CSIRP vs. 45 percent of other respondents. High performers also rate the effectiveness of their organizations’ CSIRP higher (80 percent of respondents vs. 49 percent of respondents).

High performers are briefing C-level executives and/or board members more often than other respondents. Regular briefings to leadership are important to ensuring IT and IT security functions have the necessary resources and support to reduce cybersecurity risks and threats. Seventy-two percent of high performers report on the state of the cybersecurity risk management program to C-level executives monthly (40 percent) or quarterly (32 percent). Only 16 percent of the other respondents brief leadership monthly and 19 percent say they provide briefings quarterly.

High performers are ahead of other organizations in implementing a SASE or SSE. Forty-six percent of high performers have fully implemented a SASE or SSE vs. only 16 percent of other respondents.

High performers are more likely to say they have the right number of separate cybersecurity tools. Only 33 percent of high performers have too many cybersecurity tools owned by their organizations vs. 48 percent of other respondents. High performers also are significant users of SOAR. Fifty-three percent of high performers use SOAR significantly vs. 25 percent of other respondents.

Effective monitoring and observing of AI usage and threats requires visibility into AI systems. Sixty-four percent of high performers have this visibility vs. only 42 percent of other respondents.

The following findings suggest progress in managing cybersecurity risks and threats.

Cybersecurity incidents continue to increase. In 2025, 66 percent of respondents say cybersecurity incidents increased significantly (31 percent of respondents) or increased (35 percent of respondents), a slight increase from 61 percent in 2024. Fifty-eight percent of respondents in the 2025 study say their organizations had a data breach or cybersecurity incident in the past two years. Fifty-four percent of organizations represented in this research had four or more data breaches or cybersecurity incidents in the past two years.

Organizations plan to increase investments in assessments of their security processes and governance practices. The most important investment in the coming year is an internal assessment of their organizations’ security processes and governance practices (63 percent in 2025 and 60 percent in 2024). Other top areas planned for investment are more cybersecurity tools (56 percent in 2025 and 51 percent in 2024) and cloud security (46 percent in 2025 and 42 percent in 2024).

Cybersecurity Incident Response Plans (CSIRPs) are considered effective in reducing risks and threats. A Cybersecurity Incident Response Plan (CSIRP) is a documented strategy that outlines how an organization will respond to and manage cybersecurity incidents, like data breaches or ransomware attacks, to minimize damage and restore operations quickly.

In 2025, 51 percent of respondents say their organizations have a CSIRP that is applied consistently across the entire enterprise, an increase from 46 percent in 2024. The frequency of CSIRP reviews has increased to 61 percent of respondents (each quarter, 25 percent, or twice per year, 36 percent) from 52 percent of respondents in 2024 (each quarter, 23 percent, or twice per year, 29 percent). More organizations are also providing a formal report of the CSIRP to C-level executives and the board of directors (45 percent in 2025 vs. 39 percent in 2024).

CSIRPs are becoming more effective in minimizing the consequences of a cybersecurity incident, an increase from 50 percent of respondents in 2024 to 57 percent of respondents in 2025. Since 2024, the effectiveness of the CSIRP in mitigating cyber risks has also increased significantly, from 50 percent to 58 percent in 2025.

Since 2024, more organizations measure the effectiveness of their cybersecurity risk management program based on reduction in the time to patch software application vulnerabilities. Faster patching of vulnerabilities is considered critical to an effective cybersecurity risk program. Forty-four percent of respondents say they are using this metric, an increase from 37 percent of respondents.

The other most used metric is the time to detect a data breach or other security incident (44 percent of respondents in 2025 vs. 47 percent of respondents in 2024). Assessment of supply chain security increased from 30 percent to 36 percent of respondents. The time to recover from a data breach or other security incident decreased in importance from 41 percent to 36 percent of respondents.

Organizations are adopting SASE and SOAR to better manage cybersecurity risks and threats. Sixty-six percent of respondents say their organization has fully implemented (31 percent) or partially implemented (35 percent) SASE. Only 15 percent of respondents say there are no plans to implement SASE. The significant and moderate use of SOAR continues to be an important part of organizations’ efforts to reduce cybersecurity threats (73 percent of respondents in 2024 and 72 percent of respondents in 2025).

The number of cybersecurity tools is just right. Only 44 percent of respondents say their organizations have too many cybersecurity tools to achieve a strong cybersecurity posture. The average number of separate cybersecurity technologies has not changed in the past year. In 2025, respondents say their organizations have an average of 55 and last year the average was 54.

Recommendations for improvement as cybersecurity incidents continue to increase

A lack of visibility into the existence and location of vulnerabilities puts organizations at risk. The biggest challenge to having an effective vulnerability management plan is the lack of understanding of every potential source of vulnerability, including laptops, desktops, servers, firewalls, networking devices and printers, according to 74 percent of respondents. Only periodically scanning, analyzing, reporting and responding to vulnerabilities reduces effectiveness, according to 67 percent of respondents.

Automation successfully reduces the time to respond to vulnerabilities. Thirty-four percent of respondents say automation has significantly shortened the time to respond to vulnerabilities and 23 percent of respondents say it has slightly shortened the time to respond.

Visibility and control of assets helps organizations identify potential security gaps and address vulnerabilities before they are exploited. Asset inventory management programs monitor and maintain an organization’s assets. However, only 42 percent of respondents say their organizations include an asset inventory program as part of managing risks created by vulnerabilities. Thirty-nine percent of respondents say their organizations both assign owners to the assets in their inventory and rank assets by criticality.

 To read the full report, including key findings, visit Optiv.com

Can the new pope give Big Tech and AI a conscience?

Bob Sullivan

The AI Pope? The Pope for AI times, anyway.

Just over a century ago, Pope Leo XIII stood for workers and humanity over the creeping inhumanity of the Industrial Revolution. It seems Leo XIV is spoiling for the same kind of fight, and boy do we need that.

It’s been almost ten years since I wrote a story titled, “A billion useless people…but not one seems very concerned.” I’ve worried about it almost every day since then.  Long before the moniker AI was on every tech publicist’s lips,  smart people around the world were already predicting that robots would soon eliminate whole classes of work. Oxford University ranked 700 jobs at risk of “computerization” and…well, most people will be surprised how high their career ranks on the list (I’m looking at you, lawyers).

I now realize “a billion” was probably optimistic. What will the world look like when there are no jobs for most people? On a parallel track, I’ve long been worried about the land hoarding problem — there are no homes for young people today, and there are no properties for small business owners, either.

In one sense, I am comfortable saying that promises of a coming AI world are wildly exaggerated, in a dot-com-bubble kind of way. Computers are good at repetition and bad at exceptions. Real life is full of exceptional circumstances that will foil AI for years to come. Just watch what happens when a self-driving car meets an urban parking situation, for example.

But AI will do what Big Tech always wants new tech to do — help corporations cut costs. Customer service, already hanging on by a thread, will soon be doomed forever to the land of chatbots. But that’s just a symptom. In the next few years, you’re going to see Wall Street cheer every time a company lays off workers and credits enterprise-wide AI implementation. Some hired economist will blah-blah-blah about retraining workers for even better jobs. Tell that to all the 50-something single moms out there who must find new careers to get health insurance for their kids.

This “progress” feels pretty inevitable. That’s why it’s so important that a world figure like the new pope said he was ready to take on this challenge. In his very first speech to the College of Cardinals, he warned that AI is a threat to “human dignity, justice and labor.” That’s quite a tech-savvy statement for a 69-year-old missionary.  The man holds a mathematics degree, so Big Tech would be unwise to underestimate him. This New York Times story offers a bit more insight into the complex tech debate the new pope has waded into.

As always, the issue is even more complex, and more fundamental, than AI. And that’s why the name Leo XIV matters. The name Pope Leo XIII doesn’t fall trippingly off the tongue, even for Catholics, but his papacy came at a time of similar upheaval in world economics — the late Industrial Revolution.

Leo XIII’s signature publication is called Rerum Novarum — strictly speaking, “Of New Things.”  A refreshingly simple name for a momentous topic.  I know what you’re thinking: why should any economist care what a pope says? Read it for yourself and you’ll see why: it’s remarkably balanced and thoughtful, with conclusions that are still highly relevant today.

At the time, capitalists were running roughshod over workers by forming gigantic, all-powerful trusts — think Standard Oil and the Rockefellers. Naturally, worker revolts were increasingly common, and worker anger helped fuel the rise of communism and socialism.

Rerum Novarum, published in 1891 — sometimes referred to as “Rights and Duties of Capital and Labor” — called out for worker dignity and fair pay.  One example: “Some opportune remedy must be found quickly for the misery and wretchedness pressing so unjustly on the majority of the working class.”

On the other hand, it also described the importance of private property. In the same breath, the document rejects socialism and property redistribution outright, saying it would give governments an outsized role in controlling individual lives. It says, “Their contentions are so clearly powerless to end the controversy that were they carried into effect the working man himself would be among the first to suffer. They are, moreover, emphatically unjust, for they would rob the lawful possessor, distort the functions of the State, and create utter confusion in the community.”

On the other, other hand, the encyclical cautions against what I have come to call property hoarding. While individuals should have the right to own property — to possess that which they have invested themselves in — property should ultimately be used for the common good. Rerum Novarum laid the groundwork for a later encyclical written by Pope Pius XI during the Depression warning about the “twin rocks of shipwreck” — individualism on one side and collectivism on the other. Quite a poetic turn of phrase, but also quite a pragmatic, dualistic view of the world. The kind of balance our Blue vs Red world is sorely lacking today.

I’m sure you are thinking that Leo XIII did not manage to stop the Russian Revolution, or the American Communist movement, or even the excessive individualism that has led to American property hoarding today. And you are right.  I hold no fantasy that the new pope can ward off the scary future that artificial intelligence might bring.  On the other hand, who else will stick up for worker rights, and individual property rights, in our time? You’ll have to wait a long time before Big Tech companies prioritize a healthy middle class.

The Catholic church has many problems of its own to address, and I hope the new pope faces them head on.  But I am thrilled that concerns about artificial intelligence were among the first words out of the new pontiff’s mouth. We can only hope more world leaders join him.

Deepfake Deception: How AI Harms the Fortunes and Reputations of Executives and Corporations

The fortunes and reputations of executives and corporations are at great risk because of the ability of cybercriminals to target vulnerable executives with artificial images or videos for the purposes of extortion and physical harm. As more evidence of the reality and likelihood of deepfake attacks emerges, awareness of the need to take action to prevent these threats is growing. More than half of IT security practitioners (54 percent) surveyed in this research say deepfakes are among the most worrying uses of artificial intelligence (AI).


Click here to download the full report


The purpose of the research – sponsored by BlackCloak Inc. but conducted independently by the Ponemon Institute —  is to learn important information about how organizations view the deepfake risk against board members and executives and how these attacks can be prevented.  According to the research, executives were targeted by a fake image or video an average of three times. Another serious threat covered in this research for the second year is the risk to executives’ digital assets and their personal safety. In this year’s study, attacks by cybercriminals against executives and their families increased from 42 percent to 51 percent of organizations represented in the research.

It is not if, but when your executives and board members will be a target of a deepfake attack, and it is likely they will not even know it.  Respondents were asked to rate the likelihood of a deepfake attack, the difficulty in detecting it and the confidence in the executives’ ability to know that they are being targeted. Respondents said an attack is highly likely (66 percent), it is very difficult to detect (59 percent) and there is no confidence that executives would recognize an attack (37 percent).

The following findings illustrate the severity of deepfake and digital asset attacks:

  • Is the person calling your company’s CEO a trusted colleague or a criminal? Forty-two percent of respondents say their organizations’ executives and board members have been targeted an average of three times by a fake image. Or worse, 18 percent are unsure if such an attack occurred. Of those targeted, 28 percent of respondents say it was by impersonating a trusted entity such as a colleague, executive, family member or known organization. Twenty-one percent of respondents say executives and board members received urgent messages such as the requirement of immediate payment or information about a security breach detected.
  • It is difficult to detect imposters seeking to do harm. Executives must understand that a zero-trust mindset is essential to not becoming a deepfake victim because 56 percent of respondents say it is essential to distinguish between what is authentic and what is fake in messages. For example, imposter accounts are social media profiles engineered for malicious activities, such as deepfake attacks. The two types of deepfakes of greatest concern are social imposters (53 percent of respondents) and financial fraudsters (37 percent of respondents).
  • Executives need training and a dedicated team to respond to deepfake attacks. Despite the threat from deepfake cybercriminals, 50 percent of respondents say their organizations do not plan to train executives on how to recognize an attack. Only 11 percent of respondents currently train executives to recognize a deepfake and only 14 percent have an incident response plan with a dedicated team when a deepfake occurs.
  • Threatening activities may go undetected because of a lack of visibility into erroneous activities. Only 34 percent of respondents say their organizations have high visibility into the erroneous activity happening within their organization to prevent deepfake threats. Fifty-two percent of respondents say it is highly likely that their organization will evaluate technologies that can reduce the risks from deepfakes targeting executives. Fifty-three percent of respondents say technologies that enable executives to verify the identity and authentication of messages they receive are highly important.
  • The financial consequences of deepfake attacks are not often measured and therefore not known. Only 36 percent of respondents say their organizations measure how much a deepfake attack can cost. If they do, the top two metrics used are the cost to detect, identify and remediate the breach and the cost of staff time to respond to the attack.
  • Organizations are in the dark about the severity of the financial consequences from a cyberattack involving digital assets. Forty-three percent of respondents measure the potential consequences of a cyberattack against their executives; in 2023, only 39 percent of respondents said they had metrics in place. Forty percent of respondents say their organizations measure the financial consequences against the business due to a cyberattack against the personal lives of executives and digital assets, a slight decrease from 2023.
  • Metrics used to determine the financial consequences of a digital cyberattack against executives remain the same since 2023. The top two metrics for cyberattacks against executives are the cost of staff time (62 percent of respondents) and the cost to detect, identify and remediate the breach (51 percent of respondents).
  • Despite the vulnerability of executives’ digital assets, most training occurs following an attack. Most training is done after the damage is done, according to 38 percent of respondents in 2023 and 2024.
  • Attacks against executives and family members increase. Organizations need to assess the physical and digital asset risks to executives and their families. In 2023, 42 percent of respondents said there were attacks against executives and family members. This increased to 51 percent in 2025.
  • Online impersonations increased significantly since 2023. The most prevalent attacks continue to be malware on personal or family devices (58 percent of respondents in 2024 and 56 percent of respondents in 2023) and exposure of home address, personal cell and personal email (50 percent of respondents, down from 57 percent of respondents in 2023). However, online impersonations increased significantly, from 34 percent of respondents in 2023 to 41 percent of respondents in 2024.
  • While still a low number, more organizations are increasing budgets and other resources because of the need to protect executives and their digital assets. Since 2023, 48 percent of respondents say their organizations incorporate the risk of cyberthreats against executives in their personal lives, especially high-profile individuals, in their cyber, IT and physical security strategies and budgets, an increase from 42 percent of respondents. More organizations have a team dedicated to preventing and/or responding to cyber or privacy attacks against executives and their families, an increase from 38 percent to 44 percent of respondents.
  • More cybercriminals are targeting IP and executives’ home networks. Organizations should be concerned that their company information, including IP and executives’ home networks, has become more vulnerable since 2023. The theft of intellectual property and improper access to executives’ home networks have increased from 36 percent of respondents to 45 percent of respondents and from 35 percent of respondents to 41 percent of respondents, respectively. Significant consequences were the theft of financial data (48 percent of respondents) and loss of important business partners (40 percent of respondents).
  • The likelihood of physical attacks and attacks against executives’ digital assets has not decreased in the past year. Sixty-two percent of respondents in 2023 and 2024 say it is highly likely a cybersecurity attack will be made against executives’ digital assets, and 50 percent in both years say there will be a physical threat against executives. As discussed previously, organizations are slow to train executives on how to avoid a successful attack against their digital assets. Sixty-eight percent of respondents say it is highly likely that an executive would unknowingly reuse a compromised password from their personal accounts inside the company, and 52 percent of respondents say an executive’s significant other or child would click on an unsolicited email that takes them to a third-party website.
  • More organizations are providing self-defense training. Self-defense training has increased since 2023 from 53 percent of respondents to 63 percent of respondents in 2025. Slightly more organizations are assessing the physical risk to executives and their families from 41 percent to 46 percent of respondents. Forty-one percent assess the risk to executives’ digital assets when working at home.
  • Why is it difficult to protect executives’ digital assets? The top two challenges are due to remote working and not making protection of digital assets a priority when executives work outside the office, 53 percent and 51 percent of respondents, respectively. As a consequence of not training executives to protect their digital assets, only 38 percent of respondents say their executives and families understand the threat to their personal digital assets and only 32 percent of executives take personal responsibility for the security and safety of their digital assets.
  • Confidence in CEOs’ and executives’ ability to do the right thing to stop cyberattacks continues to be low. While there is an increase in confidence in the CEO or executive knowing how to protect their personal computer from viruses (32 percent of respondents, an increase from 26 percent of respondents in 2023), it is still too low. Also, there is a significant decrease in executives knowing how to determine if an email is phishing (23 percent of respondents, down from 28 percent in 2023). Organizations lack confidence in their executives knowing how to set up their home network security (25 percent of respondents and 26 percent of respondents in 2023) and knowing if their email or social media accounts are protected with dual factor authentication (20 percent of respondents and 16 percent of respondents in 2023).
  • Difficulty in stopping cyberattacks against executives and their digital assets remains high. It continues to be highly difficult to have sufficient visibility into cyberattacks on executives’ home networks (63 percent of respondents), executives’ personal devices (66 percent of respondents), executives’ personal email accounts (67 percent of respondents), executives’ password hygiene (60 percent of respondents) and executives’ privacy footprint (65 percent of respondents).

To read the rest of this report, visit BlackCloak’s website

Follow a sextortion scam as it unfolds and ends in unimaginable tragedy

Jordan: Please, bro….It’s over. You win, bro.
Dani: Okay. Goodbye.
Jordan: I am KMS RN….which stands for, I’m killing myself right now….because of you.
Dani: Good. Do that fast or I will make you do it. I swear to God.
And that message … was the last one that he sent to Dani.

Bob Sullivan

Some stories are much harder to write than others. I’m not sure I’ve had a more difficult project than this week’s episode of The Perfect Scam. For this story, I speak with Jordan DeMay’s parents, and the detective who investigated Jordan’s sextortion and suicide.

You’ve probably heard about Jordan’s case, or other cases like it. Sextortion is extortion fine-tuned for the social media age.  A criminal pretends to be an attractive person and approaches a victim with a simple private message or text, then slowly escalates the conversation until the victim shares explicit photos. Then the criminal threatens to share these photos with friends and family unless the victim pays a “ransom” — and even then, criminals continue to apply pressure, issuing more and more demands.  While high-profile cases of sextortion often involve teenagers, anyone can be the target of a sextortion scam. They are powerful; the pressure criminals exert can be immense. Most times, criminals are working off fine-tuned scripts, learned from YouTube or purchased from a criminal service. Or they are trained as “employees” of a large criminal enterprise.

We are all saturated with unwelcome texts right now, many appearing as accidental wrong-number connections.  My cell phone number begins with a Seattle area code, so I get messages with vague requests like, “I’ll be in Seattle for a couple of days. Where should I go to dinner?”  Unsolicited private messages on Instagram and Facebook often begin the same way. Many of those messages are attempted sextortion scams.

That’s why it’s so important to understand how they work. And this week’s episode gives a rare, blow-by-blow account of a sextortion in progress.  At times, it’s hard to hear. It was certainly hard to talk about. But this is the kind of story you shouldn’t turn away from. John DeMay and Jennifer Buta are incredibly brave and compassionate, despite enduring pain no human was meant to experience. And Detective Lowell Larson dispenses deep wisdom that only arises from years of very serious, meaningful work.

The scale of the sextortion problem is probably wider than you think, and might be even wider than law enforcement knows.  John DeMay has identified more than 40 sextortion victims who’ve committed suicide, and many people believe the real number is much higher.  Jordan deleted all his social media content before taking his own life. Only subsequent investigation revealed the truth about the attack he suffered.  It’s possible many suicide stories end without someone like Detective Larson completing a thorough investigation.

That’s why it’s important for everyone to understand how sextortion works. I hope you’ll listen to this episode — Jordan’s parents have so many powerful things to say, and how they say it is just as powerful —  but if you aren’t into podcasts, there’s a partial transcript below. It includes a text version of the dialog between Jordan and his attackers.  But perhaps more important, it also includes Detective Larson’s advice about what parents can do to help their children navigate this increasingly complex and threatening digital world — and it includes some of the wisdom that John and Jennifer have to share.

But I’d like to begin with the end of this story, because Jordan’s 17 years of life add up to much, much more than those years would suggest. Here’s his mom:

“Jordan was this larger than life person, and I don’t think he knew it. And so for this to happen to him and be this … landmark case and have this media attention. Sometimes I just sit back and I’m like, of course. Of course this happened with you because you were this bright light and the center of attention. Here you are in the afterlife still holding that. It’s just that it’s no longer your voice. It’s my voice with your story.”

—————Partial transcript————-

The below transcript includes in-depth discussion of suicide. If you are in crisis, call or text 988 and get help right now.

[00:05:11] John DeMay: It was a Thursday night. He was at his mother’s house for that week, but he had had to come to our house a little bit earlier. We were getting ready to leave on vacation for two weeks. The next morning on Friday, we were heading down to Florida for, uh, for a beach vacation that we do every year, and he was really excited about that. And it was one of our, one of our favorite trips that we do every year. So we were packed and ready to go, and he came strolling into the house. I saw him for the last time at around 10, 10 15 that night. And I just had passed him outside on the, on the patio and he was rolling his bag in, coming from his girlfriend’s house. And I had just told him goodnight. I’m, I’m heading to bed and cut you in the morning. And that’s what I did. I went to sleep. My wife was up finishing laundry, getting the, getting our other two kids bags packed. Jordan was downstairs in his room getting his bags packed. He was doing some laundry.

[00:06:01] Bob: doing laundry packing, saying goodbye to his girlfriend before trip. Normal teenage stuff. And then Jordan gets a private message, a one-word message from someone he doesn’t know. In fact, I’ll bet you’ve received one just like it, probably more than once. The message says it’s from a woman named Dani Robertts. It comes at 10:19 PM.

[00:06:25] Lowell Larson: So the very first conversation that occurred between Dani Robertts’ profile and Jordan DeMay started out with Dani asking, “Hey.”

Jordan – “Who is you?”

Dani – “I’m Dani from Texas, but in Georgia at the moment.”

Jordan – “Nice.”

Dani – “Yeah. Hope I didn’t invade your privacy. Just bored.”

Jordan – “Nah, you good.”

[00:06:49] Bob: The conversation bounces back and forth with simple chat like that for about an hour. You can imagine Jordan stuffing clean clothes into a suitcase while chatting, and then at 11:29 PM…

[00:07:02] Lowell Larson: Dani – “What do you do for fun?”

Jordan  – “Lift, play sports and listen to music. What about you?”

Dani  –  “Sound fun. Well, I like hanging out with friends and playing sexy games. Sorry, that came out wrong. My bad,”

Dani – “Sorry if I got upset. Just bored, to be honest. I thought you might want to do something fun. It is actually a sneak pic exchange. No screenshots. You’re down? It’s just for fun though. Nothing else. It’s actually a live mirror pic exchange showing your sexy body, no screenshots. You get what I mean?”

Dani – “Yeah. And it’s set up view once after viewing it disappears. Of course. I can go first if you like, but you’re home, right?”

Jordan – “Yeah.”

Dani – “Cool. Can you just take a mirror snap showing you’re ready when I’m. Then I will go first with the sexy pic.”

[00:08:02] Bob: The game goes on for another hour, two hours. The pictures are innocent enough at first, Detective Larson says. Within three hours after that first, “Hey,” Jordan sends a revealing picture of himself and Dani pounces instantly.

[00:08:20] Lowell Larson: After Jordan had sent an unclothed picture at 1:23 AM, Dani Robertts’ account sends three photo collages with a message, “I have screenshot all your followers and tags and can send this nudes to everyone and also send your nudes to your family and friends until it goes viral. All you have to do is cooperate with me and I won’t expose you.”

Jordan – “What I gotta do?”

Dani – “Just pay me right now and I won’t expose you.”

Jordan – “How much?”

Dani – “$1,000. Deal or no deal?”

Jordan – “I don’t have a grand.”

[00:09:02] Bob: Whoever is on the other side of the keyboard is now extorting Jordan, and he doesn’t know what to do.

[00:09:11] Lowell Larson: And this goes back and forth for a time and then is basically negotiated down where Jordan agrees to pay $300. And Dani agrees to accept that and not expose him. So he sends $300 via Apple Pay. Dani tells Jordan that she’s deleting everything,

[00:09:33] Bob: But Dani doesn’t. The cruelty and the pressure continue,

[00:09:39] Lowell Larson: Dani comes back and says that she wants more money to delete his images off of a different platform. And so they go back and forth and they start negotiating again. And basically Dani is looking to obtain another $800 to delete the images off of Google.

[00:09:59] Bob: And the demands continue. Jordan tries desperately to figure out what to do. The person making these demands exerts maximum pressure.

[00:10:10] Lowell Larson: You know, a troubling thing is the Dani Robertts account would, she’s asking for more and more money, would start giving it a countdown. Next message would be 14. Next one 13, 12. You know, and so every message coming in was the countdown, which is, uh, kind of very powerful for someone that’s very scared.

[00:10:32] Bob: Scared, and from his messages, feeling out of options.

[00:10:37] Lowell Larson: And basically Jordan tells her that he doesn’t need him to have $500, and eventually Jordan agrees to send the remaining money that he has, and that’s $55.

[00:10:49] Bob: Jordan sends every last dollar he can cobble together, but the cruelty gets so much worse. Jordan begins to express how desperate he feels that he doesn’t want to go on and five hours into this nightmarish encounter…

[00:11:07] Lowell Larson: And at one point Dani Robertts at 3:28 AM says, “Okay, then I will watch you die a miserable death.” And Jordan says, “Please, bro.” Later on at about 3:43 AM Jordan says, “It’s over. You win, bro.”

Dani – “Okay. Goodbye.”

Jordan – “I am KMS RN”

[00:11:30] Lowell Larson: Which stands for, “I’m killing myself right now.” And then he says, “Because of you.”

Dani – “Good. Do that fast or I will make you do it. I swear to God.”

[00:11:41] Lowell Larson: And the message that I read to you was the last one that he sent to Dani Robertts.

[00:11:54] Bob: Morning breaks and John DeMay gets up and thinks about final preparations for that family beach trip they will go on after Jordan gets home from school that day, but a text from Jordan’s mom causes immediate alarm.

[00:12:07] John DeMay: Jennifer had texted me and asked me if Jordan was at school that day, and I said, “Well, that’s kind of interesting.” So I got up, my wife and my, my two girls were up already getting ready for school. And I looked out the kitchen window and I saw Jordan’s car still parked in the driveway at 7:30 and that was really odd. He’s usually long gone by 7:10, 7:15, and frankly, he never, ever misses school and you know, so I didn’t know if he slept in or, or what was going on. So I went downstairs into his bedroom and I opened up the door and I found him. He had shot himself in his bed.

—-Later in the episode—

[00:27:54] Bob: So what change needs to happen? Jordan’s death raises a whole wide set of complex issues. Recall that horrible night began with a simple one-word message, the kind many of us receive on a regular basis.

[00:28:09] John DeMay: At this point I’ve been speaking all over the world really and traveling and presentations and parent nights and law enforcement conferences and in Washington DC and, and what I’m finding is, especially from the law enforcement community, that the sextortion stuff in the last couple of years has gotten so rampant that most feel that it’s really not even a, an if you’re a teenager and get exposed to this, it’s when you are going to get exposed to it. To some level, and we all, we all get these random messages from random different people, from different parts of the world and, and friend requests and things and, and oftentimes those are the very beginnings of what could be a sextortion scheme. There’s a lot of groups and, and individuals that are doing this at a very high volume. It’s a numbers game.

[00:28:55] Bob: These text messages that we’re all getting right now where it, it could be just something like, “How are you?” Or it could be, “Hey, I’m in Seattle” ’cause I have a Seattle area code.

[00:29:02] John DeMay: Right.

[00:29:03] Bob: “Where should I go to dinner?” or whatnot. But, and, and behind that might be someone starting a sextortion scheme.

[00:29:08] John DeMay: That’s correct. Yeah. And you almost have to assume that at this point. Um, and when I talk to teenagers and parents, I tell them that’s what it is, you know, because it. It’s probably not anything else. People do reach out and there are people that have good intentions of meeting other people on the planet, but you know when, when some really amazingly beautiful woman is just reaching out to you randomly and then wants to, you know, now we’re into your conversation and wants to talk about sex with you, there’s probably a pretty good indicator that this isn’t what it seems.

[00:29:35] Bob: And I know your son’s story shows it can happen, it can escalate very, very quickly, right?

[00:29:40] John DeMay: Oh, absolutely. I mean, I, if you looked at every single sextortion suicide that’s happened, it’s happened, you know, under six hours for sure. Most of ’em, there are a few that have drug out over time, but a lot of ’em are literally within 30 minutes to two hours.

[00:29:54] Bob: They’ve tested these scripts, I’m sure, and then they can manipulate really, really anybody, right?

[00:30:00] John DeMay: Yeah, a hundred percent. And in our case, uh, particularly, and, and, and probably a lot of others, fast forwarding, you know, with the information that we all have now, the perpetrators, um, from Jordan’s case were, were they were educated and trained by a online group called the Yahoo Boys. And the Yahoo Boys was basically a, a loosely organized group that put together basically a training manual they had, you could go right on YouTube. The, the video’s up for were up for years. You could learn about sextortion, learn how to do it, how to get your victims could purchase scripts from them. Uh, they taught you how to get hacked accounts and buy materials, everything. So everything you need to know to learn how to do this particular crime was right on YouTube for anybody to see. And our group of suspects used that organization and were trained how to, to how to do it. It shows the professionalism in this industry and in this type of crime that young people don’t understand and parents don’t either. And I, I tell young people that it’s not your fault, right? I mean, this is, this is a crime. And these people are professionals. They know exactly what to do and what to say and how to say it. They know how you’re gonna act. They know what you’re going to say. These are all things that they’ve done time and time and time again. So they’re very well read in, in what happens. And, um, I try to stress that to them. And, uh, that’s, I think that’s the biggest piece. So when they understand, Hey, this isn’t, you know, I made a mistake, but this isn’t my fault, you know, it really is not

[00:31:31] Bob: so warning parents and teenagers, really anyone with a cell phone that they will be targeted by a extortion scam. That’s the first thing John wants, but he wants more change. He wants tech companies to do more.

[00:31:45] John DeMay: At the end of the day, the, the social media companies are, are the one that are creating the atmosphere for all this to happen. It’s really unfortunate as I meet, uh, more whistleblowers, um, from these companies and meet politicians and major players in the game, it’s, it gets scarier and scarier and scarier.

[00:32:02] Bob: Jennifer also wants criminals around the world to know that thanks to the successful prosecution of our son’s attackers, well, criminals shouldn’t feel safe just because they are far away.

[00:32:14] Jennifer Buta: I think that’s a huge message, that it doesn’t matter that you’re in a different country. You can be found, you can be arrested, and you can be held accountable for what you’ve done. I hope that it’s a deterrent for people. I know in Nigeria, you know, for this crime, the punishment is not harsh at all, and so that’s not really a deterrent for them there. And knowing that they can be brought here and face our justice system. Hopefully that prevents them from, you know, taking it to this level where they’re telling children to take their lives.

[00:32:52] Bob: Both John and Jennifer spend a lot of time talking about Jordan’s death now, hoping they can do as much as possible to prevent other tragedies

[00:33:01] Bob: What kind of reactions do you get when you, when you talk to people about this?

[00:33:06] Jennifer Buta: I mean, there’s several, it depends on, you know. The day for me. Sometimes, sometimes I feel better in talking about his story because if I tell someone I’ve just educated them and hopefully they’ll tell someone, and that gives me hope that another family won’t have to go through what I’ve gone through. And sometimes it’s really difficult to talk about it because you’re constantly reliving the nightmare of what did happen to Jordan. I get an overload of messages through my social media, of parents saying, thank you for sharing this story. Because I told my kid about it and it happened to them and they knew what to do. They remembered Jordan’s story. Wow. And they came to me for help. I also have parents that have reached out to me and said, this happened to my child. I don’t know what to do. I don’t know what to do. I don’t know where to go with the law enforcement things. Can you help me with this? And even yesterday, I received a message from actually our local government offices, someone contacted them trying to reach me because they were going through that situation and they wanted to talk to me.

[00:34:20] Bob: That’s an amazing thing that you’re doing, but gosh, that also feels like such a, a burden to be picking through all these emails is you have to be customer service to the world. That sounds like a lot.

[00:34:29] Jennifer Buta: It is. Sometimes it gets heavy and I can’t get to everyone. Um, at one point it was just, it was too overwhelming to respond to everyone.

[00:34:40] Bob: One point that Jennifer wants to make sure parents and law enforcement hear is that without Jordan’s girlfriend coming forward to report the sextortion message she received, Jordan’s parents probably would never have learned the truth.

[00:34:55] Jennifer Buta: Absolutely. That was when I found out what happened to Jordan, that was one of the first thoughts in my head was, how many kids has this happened to? And their parents think that they just took their own life, but don’t know that there was actually something else behind it because they didn’t think to check their social media. Or maybe it was deleted from their social media like it was in Jordan’s case. And you know, one of the things that I think we’ve learned through Jordan’s case is this has taught law enforcement about financial sextortion, and that when they come upon a case where someone has taken their life, maybe take that extra step and check if it was something like financial extortion.

[00:35:37] Bob: The most important message they want to share is to reach a child, a teen who might find themselves in what feels like a desperate situation. And make sure they know that help is available. Make sure they know to reach out. John takes such calls and messages all the time.

[00:35:54] John DeMay: Well, I, I can tell you there’s, there’s hundreds of stories, hundreds of them, and it, it really, It gives me the fuel to, to keep going on the awareness side for sure. And it, and it, and it provides me with purpose to push for change legislatively and, and the other things that I’m doing. But just last week, last, it was last Thursday night. Last Thursday night, I did a presentation at our local middle school for sixth grade, seventh grade, and eighth grade. We did three individual presentations on sextortion for each of those grades. And that was about eight weeks ago, 10 weeks ago. I did that right when, right before, uh, Christmas break. I think it was, and last Thursday night at about eight o’clock at night, I was sick as a dog. I had the flu covid, something was happening. I was out for days and I was riding my couch and I had a Facebook messenger pop up, and it was a 14-year-old student at that school that said, John, I need help. I know you came to the school and talked, but I honestly don’t remember what you said, but I need, I need some help. And as I, you know, I got right on it and I started chatting and he was just extorted 15 minutes before that, 20 minutes before that sent an image. Wow. And he was freaking out. So I was able to talk him off the ledge, and I messaged with him for about an hour. You know, every, every scenario is a little bit different. This, I wanted to try to get on the phone with this. Young guy and he just wasn’t interested in talking. He just wanted to message and that was totally fine. I’m like, yep, we can totally, we can message, it’s totally fine. Whatever you’re comfortable with. And I just kept engaging with him. It’s like a hostage negotiation. You just want to keep them communicating, keep him talking. You know, I, at one point I even, I even sent him a picture of Jordan and said, Hey, this is my son. He’s gone today. 
I wish every day that he would’ve taken the two minutes to walk upstairs at three o’clock in the morning and come and get me. And I don’t get that. And I don’t want this to be you. I want you to go tell your dad. He’s going to appreciate you for this. So we were just trying to keep it positive and really make him understand that it’s not his fault, that he’s really the victim of a really heinous crime right now, and he needs to treat it that way. I talked to this kid, told him about Jordan, told him how strong he was for reaching out, that this is the right thing to do and it’s not your fault. And we went through everything. By the end of the night he had messaged me back and said, "Hey, I really appreciate the help. The cops are here. I told my dad, and I really can’t thank you enough." And it just all worked out really well. I’ve been following up with him in the mornings before school just to make sure he’s good. And that’s the stuff that makes a difference, because when someone asks, "Hey, can you come to the school and talk?", it’s like, well, yeah, I guess so, let’s do it. And then you get stuff like that happening, and then the answer is, "Of course I will." Right?

[00:38:33] Bob: We began this episode explaining that Detective Larson gives plenty of talks about sextortion now. He has a lot of important things to share with parents and kids. He often shows the dialogue we had him read earlier between Jordan and Dani.

[00:38:48] Bob: When you show this to groups, uh, what kind of reaction do you get?

[00:38:51] Lowell Larson: I mean, you can hear a pin drop in the room. It’s very chilling.

[00:38:57] Bob: Have you ever had somebody come up to you after a talk, or maybe a day later, and say, "Hey, can we talk? This is happening to me," or, "Someone I know, this has happened to"?

[00:39:07] Lowell Larson: Yeah, it hasn’t occurred right after a presentation, but for years now we’ve been public about this case, and I’ve received numerous calls, even from friends and family, saying that they know of someone this happened to. In fact, one person I heard about went and talked to his parents and said, you know that thing that happened to that kid in Marquette? That’s happening to me right now. And that’s exactly the message we’re trying to put out there. Obviously we tell people, don’t send anything out online that you wouldn’t want on the front page, and don’t send naked images. But the reality is it’s gonna happen. So the message that we wanna send to people is, please don’t do it, but if it does occur, please tell your family or friends and reach out for help. Obviously, don’t do what Jordan did. There are programs, there are things that we can do to help minimize this. In fact, there is a program run by the National Center for Missing and Exploited Children, which we often call NCMEC, off of the acronym, and they have a program called Take It Down. In that program, NCMEC works with the family of a minor who has some type of sexually explicit image out on the internet, and they will do everything they can to try to remove that image from the internet. So that’s a tremendous resource that’s out there. We also tell everyone, if this happens to them, to stop communications with the person regardless of whatever threats they make; to disable their account, but not delete it, because if we need evidence off the account, we don’t want them to delete it; to screenshot anything they can; and to contact their local authorities, along with, obviously, telling friends or family.

[00:41:04] Bob: I can’t help but ask this question. This story is incredibly hard to hear, honestly. But you do this every day. How do you handle working with these sorts of horrible crimes?

[00:41:16] Lowell Larson: I guess it comes down to someone’s gotta do it. Obviously we’re dealing with a horrible, horrible situation. We can’t bring Jordan back, but what can we do to go after the people who did this to him and then prevent it from occurring to other people? That’s where I find my strength.

[00:41:38] Bob: What does Detective Larson want people to learn from what happened to Jordan?

[00:41:43] Lowell Larson: Well, I think it’s not just about sextortion; it’s what your podcast is about, scams. So don’t just think, well, this happened to a young male and I’m not a young male, or whatever. Think about the basics of the scam: when you put a cell phone in your hand or use a computer, you are now potentially a victim of anyone in the world, and you need to be very careful with what you do on that device. You’ve gotta be very careful of people contacting you on that device, and verify who you are talking to. What we see very commonly in these scams is an unsolicited contact, which we have here. You have the rapport building, you have the pressure that’s put on someone. For instance, in the scam where someone impersonates a law enforcement agency and says you have a warrant for your arrest, they put that stress on you about having to do something right now, or you’re gonna be arrested. In Jordan’s case it was, you have to pay me right now, or you’re gonna be exposed. So the themes are the same, and that pressure often causes people to do stuff that later on they look back at and think, why did I do that? So you just gotta slow down. You gotta verify who you’re talking to. For instance, like I said earlier, with the law enforcement scam, if someone says they’re contacting you from the Marquette County Sheriff’s Office and they have a warrant for your arrest, there’s nothing wrong with saying, well, I just need to verify who you are; I will contact the sheriff’s office, and who should I ask for? Then you independently determine what the correct phone number is, and you call, and you try to verify that. In Jordan’s case, it was trusting what someone told them, who they said they were, trusting a profile. And as you know, you can be anyone or anything on the internet.
So there was that trust. And then the other cautionary tale I have is trust of the technology. For instance, in this case, they were using a feature of Instagram that allows the picture to disappear after a certain amount of time and doesn’t allow a screen capture. But that technology was simply overcome by taking another device and taking a picture of the original device. Like Snapchat: if it doesn’t allow you to screenshot, because of how the platform is set up, or if it sends a notification when you screenshot, you just take another cell phone and take a picture of your cell phone with the image on it, and you’ve overcome that technology. So people are trusting of the technology, thinking you can’t do this, but there’s very often a way to defeat the technology that people put their trust in.

[00:44:38] Bob: Detective Larson also includes in his presentations a piece of advice for parents that I think is just so wise, and, perhaps for some of you, counterintuitive.

[00:44:49] Lowell Larson: I believe the most prevalent reason why kids don’t tell their parents when something bad happens online is that they’re afraid to lose their internet privileges. So if you’re a parent who has the attitude, well, if my kid shows me that, that’s it, they’re not gonna have a cell phone, they’re not allowed to use this anymore, I look at it a different way. I look at it as: thank your child for being mature enough to bring that information forward to you, so that you know you can trust them, because if something bad happens, you will be made aware of it. I think that’s the attitude shift we need to have as parents. If something bad happens online, we don’t want the nuclear option of taking away all their privileges. If they bring something forward to you, I think they need to be congratulated for having that maturity. Obviously, we can have a courageous conversation about what got them to that point, but we really need to harness that maturity they put forward.

[00:45:50] Bob: Keeping that open line of communication between parents and kids is absolutely essential, and so is having these sometimes difficult conversations about sex. But it’s so much easier said than done. Jennifer has some wisdom about that.

[00:46:05] Jennifer Buta: I think that parents need to have open conversations about sextortion, just like they do when warning their children about anything else as they’re growing up, whether that’s alcohol or substances or driving. I think it needs to be a normalized conversation with your family that this can happen, and if it does, even if you fall into it, you need to seek help from an adult, because it spirals so quickly that it’s hard to tell what to do. And the kids need to know that they are the victim in this, no matter what; they are being pursued for the wrong reasons. There is nothing worth taking your life over. There are so many people who want to help you if you find yourself in this situation. There is light on the other side.

[00:46:59] Bob: It must be so hard. I mean, as a parent, you have hard conversations all the time, right? A lot, especially once you have teenagers. But this conversation strikes me as particularly challenging. Do you have any suggestions as to how to even get started?

[00:47:13] Jennifer Buta: One of the things that I have suggested, and that schools have done, is to talk about Jordan’s story, or the story of another child this happened to. I think that’s a really good icebreaker to open it up, and then go into what to do if it does happen, because it is a real thing. That’s what parents need to realize: it’s a real thing, and your child isn’t an exception to it. Jordan was about to turn 18 years old. There was absolutely no reason for me not to trust him, or to take his phone away to investigate what he was doing on social media. He was all set to go to college, and that made him the perfect target for this crime, because it made him vulnerable to being exposed and judged when he had all of this going for him. So if it can happen to my son, it can happen to anyone. And having those conversations, that is our greatest asset right now to prevent things from happening to kids.