From homeless to helping North Korea’s weapons program; the vexing problem of laptop farms

Source: Department of Justice

Bob Sullivan

It’s a dark, cluttered room full of bookshelves, each shelf jam-packed with laptop computers. There are dozens of them humming away, lights flickering. And each one has a Post-It note attached with a single name on it. And there’s a pink purse just hanging off the side of one of those shelves. What is that purse? And what do those laptops have to do with funding North Korea’s weapons program? That purse belonged to a woman named Christina Chapman, and those laptops … well this is a rags to riches to rags story you might not believe.

Fortunately, the Wall Street Journal’s Bob McMillan recently spoke to me for an episode of The Perfect Scam to help explain all this.

“The North Koreans, if they have a superpower, it’s identifying people who will do almost anything, TaskRabbit-style, for them,” he told me. And that’s where Christina Chapman comes in.

When this story begins, Chapman is a down-on-her-luck 40-something woman — at times homeless, at times living in a building without working showers — who makes a Hail-Mary pass by enrolling in a computer coding school. That doesn’t work either, at first.  She chronicles her troubles in a series of TikTok videos where she shares her increasing frustration, even desperation.

“I need some help and I don’t know really how to do this. Um, I’m classified as homeless in Minnesota,” she says in one. “I live in a travel trailer. I don’t have running water. I don’t have a working bathroom. And now I don’t have heat. Um, I don’t know if anybody out there is willing to help…”

But then a company reaches out and offers her a job working as the “North American representative” for their international firm.  Her job is to manage a series of remote workers.  The opportunity seems like a godsend.  Soon, she’s able to move into a real home and eventually go on some dream vacations.   At one point, she goes to Drunken Shakespeare and gets to be Queen for a day. For a night, anyway.

But underneath it all, she knows something is wrong. The job requires her to receive laptop computers for “new hires” and set them up on her home network. That’s why there are all those racks and all those Post-it notes. The home office appears in some of her TikTok videos, and it looks a bit like something out of The Matrix. Every computer represents an employee. And many of them work at various U.S. companies… hundreds of companies. And instead of logging directly into their networks, they log into Chapman’s network, and she relays their traffic to the companies they work for.
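To picture what “relaying traffic” means in practice, here is a minimal sketch of a generic TCP forwarder, the kind of building block a laptop farm abuses so an employer sees a U.S. residential IP address instead of the worker’s real location. The addresses and ports are hypothetical placeholders; this illustrates the general technique, not the actual software used in the case.

```python
# Minimal TCP relay sketch: traffic arriving at LISTEN_PORT is forwarded to a
# remote host, so the remote end sees the relay's IP, not the true origin.
# Hosts and ports below are hypothetical placeholders.
import socket
import threading

LISTEN_HOST, LISTEN_PORT = "0.0.0.0", 9000      # the relay's own address
TARGET_HOST, TARGET_PORT = "example.com", 443   # where traffic is forwarded

def pipe(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until the connection closes."""
    try:
        while data := src.recv(4096):
            dst.sendall(data)
    except OSError:
        pass
    finally:
        src.close()
        dst.close()

def handle(client: socket.socket) -> None:
    upstream = socket.create_connection((TARGET_HOST, TARGET_PORT))
    # Shuttle bytes in both directions on separate threads.
    threading.Thread(target=pipe, args=(client, upstream), daemon=True).start()
    threading.Thread(target=pipe, args=(upstream, client), daemon=True).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind((LISTEN_HOST, LISTEN_PORT))
server.listen()
while True:
    conn, _ = server.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()
```

The point is how little machinery is required: a listening socket and two byte-copying threads are enough to make traffic appear to originate from the relay’s location.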

That’s not the only suspicious thing about Chapman’s job. Each new employee must be set up with a new identity. She files I-9 eligibility forms for each one, and oftentimes accepts paychecks on their behalf.

Eventually, Chapman comes to understand that she’s being deceptive and breaking the law. Clearly, she’s helping people who are ineligible to work in the U.S. evade workplace checks. In a private email at the time, she frets about going to prison over these deceptions.

What she doesn’t seem to know is where these ineligible workers come from. They’re all from North Korea. And the hundreds of companies employing Chapman’s remote workers are ultimately sending money to the Hermit Kingdom.

“And that is, at this point, bringing in hundreds of millions of dollars to the regime according to the Feds,” McMillan told me. “And … they like to remind us that’s being used to fund their weapons program. Which is pretty scary.”

Chapman is running what’s come to be known as a laptop farm. And while the details about her situation, revealed in McMillan’s Wall Street Journal story, are incredible, laptop farms are not unusual. Fake remote workers are a rampant problem.

“It seems basically if you work for a Fortune 500 company, I would be shocked if you haven’t had a North Korean at least apply for a job there. And many of them have hired people,” he said.

Eventually, one of Chapman’s clients spots something suspicious, and the company complains to the FBI. The bureau’s investigation reveals hundreds of laptop computers humming away in Chapman’s home, essentially downloading millions of dollars from U.S. companies and funneling it to North Korea, evading U.S. sanctions. She’s arrested, ultimately pleads guilty and is sentenced to eight years in prison.

“My impression is that when she initially started out, it was to receive a higher-paying job,” said FBI agent Joe Hooper. “She got wrapped up in actually getting paid for what she was doing, and she knew she was doing something wrong, but was looking the other way.”

Ultimately, prosecutors say Chapman helped get North Koreans paying jobs at 300 U.S. companies. They included a top-5 major television network, a Silicon Valley technology company, an aerospace manufacturer, an American carmaker, a luxury retail store, and a U.S. media and entertainment company. Collectively, they paid Chapman’s laptop farm workers $17 million. Over a three-year period, she made about $150,000. So, she wasn’t really living like that queen from Drunken Shakespeare.

“They target the vulnerable and she definitely was vulnerable,” McMillan said. “She was, I think, a well-intentioned person who was just, just desperate and you do feel sad for her watching the videos because she didn’t make a ton of money, she didn’t appear to be, have any animus toward the United States. There’s no evidence really that I’ve seen that she actually knew she was working for North Korea, but at a certain point, like it was clear, it was clearly, she clearly knew she was working on a scam.”

Clark Flynt-Barr, now government affairs director for AARP (owner and producer of The Perfect Scam), used to work for Chainalysis, which conducts cryptocurrency investigations. She told me that some North Korean remote workers hang onto their jobs for months, or even years. Some are even good employees, and their employers have no idea they’ve hired a pawn in North Korea’s effort to evade sanctions.

“They’re good at their job and they’re, in some cases, quite shocked to learn that they’re a criminal who has infiltrated the company,” she said.

It’s hard for me to imagine that companies can have remote workers they know so little about — don’t they ever ask how the spouse and kids are? — but McMillan said the arrangement works well for many software developers.

“I think there are a lot of companies where software development is not necessarily their core competency, but they have to have some software…and so they hire these people who are pretty used to offshoring coding to other countries,” he said. “Basically, all they care about is, ‘Just make the software work. Do the magic, spread, spread the magic, software pixie dust and just get this done.’ ”

The remote work scam grew out of long-running efforts by North Korean hackers to steal cryptocurrency, McMillan said. Many were working to get hired by crypto firms so they could pull inside jobs, and then realized there was money to be made in simply collecting paychecks.

The good news is that laptop farms are now squarely in the FBI’s sights. A DOJ press release from June indicates that search warrants were executed on 29 different laptop farms around the country, and there was actually a guilty plea in Massachusetts.

There’s a side note to the story that’s pretty amusing: cybersecurity researchers have come to learn that many North Korean workers go by the name “Kevin” because they are fans of the Despicable Me movie franchise. You can hear more about that, and much more from Christina Chapman’s TikTok account, if you listen to this episode of The Perfect Scam. But in case podcasts aren’t your thing, some crucial advice: Don’t tell the online world you are desperate; that makes you a target. If you are hiring, make sure you know who you are hiring and where they live. Ask about the family! And if you are looking for a job, know that there are many criminals out there who can make almost anything sound legitimate.

And one other note that’s hardly amusing: there’s another set of victims in this story, people whose identities are used to facilitate the remote worker deception. Some of these people don’t find out about it until they get a bill from the IRS for failure to pay taxes on income earned by the criminal. That’s why it’s important to check your credit and your Social Security earnings statement often.

Click here, or click the play button below, to listen to this episode.

New Study Reveals Insider Threats and AI Complexities Are Driving File Security Risks to Record Highs, Costing Companies Millions

Larry Ponemon

As threats continue to accelerate and increase in cost, cyber resilience has shifted from being a technical priority to being a strategic, fiscal imperative. Executives must take ownership by investing in technology that reduces risk and cost while enabling organizations to keep pace with an ever-evolving AI landscape.

The purpose of this research is to learn what organizations are doing to achieve an effective file security management program. Sponsored by OPSWAT, Ponemon Institute surveyed 612 IT and IT security practitioners in the United States who are knowledgeable about their organizations’ approach to file security.

“A multi-layered defense that combines zero-trust file handling with advanced prevention tools is no longer optional but is the standard for organizations looking to build resilient, scalable security in the AI era,” added George Prichici, VP of Products at OPSWAT. “Leveraging a unified platform approach allows file security architectures to adapt to new threats and defend modern workflows and complex file ecosystems inside and outside the perimeter.”

File security refers to the methods and techniques used to protect files and data from unauthorized access, theft, modification or deletion. It involves using various security measures to ensure that only authorized users can access sensitive files and to protect files from security threats. As shown in this research, the most serious risks to file security are data leakage caused by negligent and/or malicious insiders, and a lack of visibility into who is accessing files and the ability to control that access.

Attacks on sensitive data in files are frequent and costly and indicate the need to invest in technologies and practices to reduce the threat. Sixty-one percent of respondents say their organizations have had an average of eight data breaches or security incidents due to unauthorized access to sensitive and confidential data in files in the past two years.

Fifty-four percent of respondents say these breaches and incidents had financial consequences. The average cost of incidents for organizations in the past two years was $2.7 million. Sixty-six percent of respondents say the cost of all incidents in the past two years ranged from $500,000 to more than $10 million.

Organizations’ bottom lines are hit by the loss of customer data and diminished employee and workplace productivity, the most common consequences of these security incidents.

Insights into the state of file security

 Insiders pose the greatest threat to file security. The most serious risk is caused by malicious and negligent insiders who leak data (45 percent of respondents). Other top risks are file access visibility and control (39 percent of respondents) and vendors providing malicious files and/or applications (33 percent of respondents). Only 40 percent of respondents say their organizations can detect and respond to file-based threats within a day (25 percent) or within a week (15 percent).

Files are most vulnerable when they are shared, uploaded and transferred. Only 39 percent of respondents are confident that files are secure when transferring files to and from third parties, and only 42 percent are confident that files are secure during the file upload stage. The Open Web Application Security Project (OWASP) has released principles for securing file uploads. The principle most often used, or planned for use, is to store files on a different server (40 percent of respondents). Thirty-one percent of respondents say they only allow authorized users to upload files.
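As an illustration of those two principles, here is a minimal sketch of an upload handler written with the Flask web framework. The token check, upload directory and extension allowlist are assumptions chosen for the example, not prescriptions from the OWASP guidance.

```python
# Minimal sketch (Flask; hypothetical paths and token) of two upload principles:
# only authorized users may upload, and files are stored outside the web root,
# ideally on a separate server or volume.
import os
import uuid
from flask import Flask, request, abort
from werkzeug.utils import secure_filename

app = Flask(__name__)
UPLOAD_DIR = "/srv/uploads"                 # outside the web root (assumption)
ALLOWED_EXTENSIONS = {"pdf", "png", "csv"}  # allowlist, never a denylist
os.makedirs(UPLOAD_DIR, exist_ok=True)

def is_authorized(req) -> bool:
    # Placeholder authorization check; swap in real session/token validation.
    return req.headers.get("X-Api-Token") == os.environ.get("UPLOAD_TOKEN")

@app.post("/upload")
def upload():
    if not is_authorized(request):
        abort(401)
    f = request.files.get("file")
    if f is None or "." not in f.filename:
        abort(400)
    ext = f.filename.rsplit(".", 1)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        abort(400)
    # Discard the user-supplied name entirely; generate a random one.
    stored_name = f"{uuid.uuid4().hex}.{ext}"
    f.save(os.path.join(UPLOAD_DIR, secure_filename(stored_name)))
    return {"stored_as": stored_name}, 201
```

Generating a random stored filename sidesteps a whole class of path-traversal and overwrite tricks that come with trusting the uploader’s filename.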

The file-based environment that poses the most risk is file storage such as on-premises, NAS and SharePoint, according to 42 percent of respondents. Forty percent of respondents say web file uploads such as public portals and web forms are a security risk.

Macro-based malware and zero-day or unknown malware are the types of malicious content of greatest concern to file security. Organizations have encountered many kinds of malicious content, but are most concerned about macro-based malware (44 percent of respondents) and zero-day or unknown malware (43 percent of respondents).

The effectiveness of file management practices is primarily measured by how productive IT security employees are, according to 52 percent of respondents. Other metrics include the assessment of the security of sensitive and confidential data in files (49 percent of respondents) and fines due to compliance failures (46 percent of respondents). Only about half (51 percent of respondents) say their organizations are very or highly effective in complying with the various industry and government regulations that require the protection of sensitive and confidential information.

Country-of-origin checks and DLP are the practices most likely to be in use, or planned, to improve file security management. Country of origin is mainly used to neutralize zero-day or unknown threats (51 percent of respondents). The main reasons to use DLP are to prevent leaks of sensitive data and to control file sharing and access (both 44 percent of respondents).

Most companies are also using or planning to use content disarm and reconstruction (66 percent of respondents), software bill of materials (65 percent of respondents), multiscanning (64 percent of respondents), sandboxing (62 percent of respondents), file vulnerability assessment (61 percent of respondents) and the use of threat intelligence (57 percent of respondents).

AI is being used to mitigate file security risks and reduce the costs to secure files. Thirty-three percent of respondents say their organizations have made AI part of their file security strategy, and 29 percent plan to add AI in 2026. To secure sensitive corporate files in AI workloads, organizations primarily use prompt security tools (41 percent of respondents) and mask sensitive information (38 percent of respondents).
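Masking sensitive information before it reaches a model can be as simple as typed placeholder substitution. The sketch below is a minimal, regex-based illustration; the patterns are far from exhaustive, and production systems would layer real DLP classifiers on top.

```python
# Minimal sketch of masking sensitive information before a prompt is sent to
# an external model. Patterns are illustrative, not exhaustive.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask(text: str) -> str:
    """Replace matches with typed placeholders, e.g. [EMAIL]."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarize the dispute: jane.doe@example.com, SSN 123-45-6789."
print(mask(prompt))
# -> "Summarize the dispute: [EMAIL], SSN [SSN]."
```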

Twenty-five percent of organizations have adopted a formal Generative AI (GenAI) policy and 27 percent of respondents say their organizations have an ad hoc approach. Twenty-nine percent of respondents say GenAI is banned.

The security of data files is most vulnerable when transferring files to and from third parties. Only 39 percent of respondents say their organizations have high confidence in the security of files when transferring them to and from third parties.

Only 42 percent of respondents have high confidence in the security of files during the file upload stage (internal/external) and when sharing files via email or links. Forty-four percent of respondents say their organizations are highly confident in the security of files when downloading them from unknown sources. Organizations have more confidence when storing files in the cloud, on-premises or hybrid (54 percent of respondents) or in the security of backups (53 percent of respondents).

To read the key findings from this research, download the full report at OPSWAT.COM

 

The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI

Larry Ponemon

The Ponemon Institute and OpenText recently released a new global report, “The Challenges to Ensuring Information Is Secure, Compliant and Ready for AI,” revealing that while enterprise IT leaders recognize the transformative potential of AI, a gap in information readiness is causing their organizations to struggle in securing, governing, and aligning AI initiatives across businesses.

The purpose of this research is to drive important insight into how IT and IT security leaders are ensuring the security of information without hindering business goals and innovation.

A key takeaway is that IT and IT security leaders are under pressure to ensure sensitive and confidential information is secure and compliant without making it difficult for organizations to innovate and pursue opportunities to grow the business.

“This research confirms what we’re hearing from CIOs every day. AI is mission-critical, but most organizations aren’t ready to support it,” said Shannon Bell, Chief Digital Officer, OpenText. “Without trusted, well-governed information, AI can’t deliver on its promise.”

The research also reveals what needs to be done to achieve AI readiness, based on the experiences of the 50 percent of organizations that have invested in AI. These steps include preventing the exposure of sensitive information, strengthening encryption practices and reducing the risk of poor or misconfigured systems caused by over-reliance on AI for cyber risk management. When deploying, organizations should develop an AI data security program, use tools to validate AI prompts and their responses, train teams to spot AI-generated behavior patterns or threat actors, practice data cleansing and governance, and identify and mitigate bias in AI models for safe and responsible use.

Using metrics to demonstrate the value of the IT security program to the business is the top priority for the next 12 months. Some 47 percent of respondents plan to use metrics to show the value IT security brings to the organization. This is followed by acceleration of digital transformation and automation of business processes (both 44 percent of respondents). Forty percent of respondents say a top-three priority is the identification and prioritization of threats affecting business operations.

Organizations recognize the need to make AI part of their security strategy, but difficulties in adoption exist.

 Fifty percent of respondents say their organizations are using AI as part of their security strategy, but 57 percent of respondents rate the adoption of AI as very difficult to extremely difficult and 53 percent of respondents say it is very difficult or extremely difficult to reduce potential AI security and legal risks. Foundational to success is to ensure AI is secure, compliant and governed.

AI deployment has the support of senior leaders. Compared to other IT initiatives, 57 percent of respondents say AI initiatives have a high or very high priority. Fifty-five percent of respondents say their CEOs and Boards of Directors consider the use of AI as part of their IT and security programs as very or extremely important. A possible reason for such support is that 54 percent of respondents are confident or very confident of their organizations’ ability to demonstrate ROI from AI initiatives.

 CEOs, CIOs and CISOs are most likely to have authority for setting AI strategy. Fifteen percent of CEOs, 14 percent of CIOs and 12 percent of CISOs have final authority for such AI initiatives as technology investment decisions and the priorities and timelines for deployment.

 Despite leadership’s support for AI, IT/IT security and business goals may not be in alignment. Less than half (47 percent of respondents) say IT/IT security and business goals are in alignment with those who are responsible for AI initiatives. Fifty percent of respondents say their organizations have hired or are considering hiring a chief AI officer or a chief digital officer to lead AI strategy. Such an appointment of someone dedicated to managing the organization’s AI strategy may help bridge gaps between the goals and objectives of IT/IT security with those who have final authority over AI strategy.

Concerns about privacy can cause delays in AI adoption. The inadvertent infringement of privacy rights is considered the top risk caused by AI. Forty-four percent of respondents say their biggest concern is making sure risks to privacy are mitigated. Other concerns are weak or no encryption (42 percent of respondents) and poor or misconfigured systems due to over-reliance on AI for cyber risk management.

Developing a data security program and practice is considered the most important step to reduce risks from AI. Fifty-three percent of respondents say it is very difficult or extremely difficult to reduce potential AI security and legal risks. To address data security risks in AI, 46 percent of respondents say they are developing a data security program and practice. Other steps are using tools to validate AI prompts and their responses (39 percent of respondents), training teams to spot AI-generated behavior patterns or threat actors (39 percent of respondents), using data cleansing and governance (38 percent of respondents) and identifying and mitigating bias in AI models for safe and responsible use (38 percent of respondents).

Despite being a priority, the top governance challenge is insufficient budget for investments in AI technologies. Thirty-one percent of respondents say there is insufficient budget for AI-based technologies. This is followed by 29 percent of respondents who say there is not enough time to integrate AI-based technologies into security workflows, 28 percent who say IT and IT security functions are not aligned with the organization’s AI strategy and 28 percent who say their organizations can’t recruit personnel experienced in AI-based technologies.

 The adoption of GenAI and Agentic AI

 GenAI is considered very or highly important to organizations’ IT and overall business strategy because it improves operational efficiency and worker productivity. Of the 50 percent of organizations that have adopted AI, 32 percent have adopted GenAI as part of their IT or overall business strategy and 26 percent will adopt GenAI in the next six months. Fifty-eight percent of these respondents say GenAI is important to highly important to their organizations’ IT and overall business strategy.

 GenAI supports security operations and employee productivity. The most important GenAI use cases are supporting security operations (e.g. analyzing alerts, generating playbooks) (39 percent of respondents), improving employee productivity (e.g. drafting documents, summarizing content) (36 percent of respondents), assisting with software development (e.g. code generation or debugging) (34 percent of respondents) and accelerating threat detection or incident response (34 percent of respondents).

Copyright and other legal risks are the biggest challenges to an effective GenAI program. Forty-three percent of respondents say copyright and other legal risks are the top challenge. Thirty-seven percent cite a lack of in-house expertise, and 36 percent say regulatory uncertainty and changes are barriers to an effective GenAI program.

 Organizations are slow to adopt Agentic AI as part of their overall IT and business strategy. While 32 percent of respondents who are using AI have adopted GenAI, only 19 percent have adopted Agentic AI. Only 31 percent of the organizations that have adopted Agentic AI say it is very or extremely important to their organizations’ IT and business strategy.

Organizations’ approaches to securing data and supporting business innovation

 Ensuring the high availability of IT services supports business innovation. Respondents were asked what is most critical to supporting business innovation. Forty-seven percent of respondents say it is ensuring high availability of IT services and 43 percent of respondents say it is recruiting and retaining qualified personnel. Another important step, according to 39 percent of respondents, is to reduce security complexity by integrating disparate security technologies.

 Business innovation is dependent upon IT’s agility in supporting frequent shifts in strategy. Fifty-three percent of respondents say it is very difficult to support business goals and transformation. To support innovation the most important digital assets to secure are source code (44 percent of respondents), custom data (44 percent of respondents), contracts and legal documents (42 percent of respondents) and intellectual property (42 percent of respondents).

The importance of proving the business value of technology investments

Only 43 percent of respondents say their organizations are very or highly confident in the ability to measure the ROI of investments related to securing and managing information assets. The biggest challenge in demonstrating ROI for information management and security technologies is the inability to track downstream business impacts (52 percent of respondents).

The ROI of downstream business impacts involves understanding the indirect benefits and costs that ripple outwards from an initiative, activity or technology investment. Examples to measure include reduced errors and rework, increased efficiency and productivity and reduced compliance risks. Other challenges are the difficulty in quantifying intangible benefits (51 percent of respondents) and competing priorities (47 percent of respondents).

 Organizations are eager to see the ROI from security technologies.  Calculating ROI is important to proving the business value of IT security investments. It is helpful in making informed decisions about IT security strategies and investments, evaluating performance and calculating profitability. ROI from investments is expected to be shown within six months to one year according to 55 percent of respondents. Forty-five percent of respondents say the timeline is one year to two years (21 percent) or no required timeframe (24 percent).

 Security strategies and technology investments should address the risks of ransomware and malicious insiders.  Fifty-three percent of respondents say their organizations had a data breach or cybersecurity incident in the past two years. The average number of incidents was three. During this time, only 28 percent of respondents say cybersecurity incidents have decreased (18 percent) or decreased significantly (10 percent). Ransomware and malicious insiders are the most likely cyberattacks, according to 40 percent and 37 percent of respondents, respectively. The data most vulnerable to insider risks are customer or client data (58 percent of respondents), financial records (46 percent of respondents) and source code (43 percent of respondents).

 Malicious insiders pose a significant risk to data security. Encryption for data in transit (39 percent of respondents), email data loss prevention (35 percent of respondents), and encryption for data at rest (35 percent of respondents) are primarily used to reduce the risk of negligent and malicious insiders.
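For a concrete picture of what one of those controls, encryption for data at rest, can look like at the application layer, here is a minimal sketch using the Python cryptography package’s Fernet recipe (AES-CBC with an HMAC integrity check). The record contents are invented for illustration, and in a real deployment the key would be fetched from a key management service rather than generated beside the data.

```python
# Minimal sketch of application-level encryption at rest using the
# `cryptography` package's Fernet recipe (AES-CBC with an HMAC check).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; production keys belong in a KMS/HSM
fernet = Fernet(key)

record = b"customer: Jane Doe, account: 00123"  # invented sample record
ciphertext = fernet.encrypt(record)             # this is what gets written to disk

# Authorized readers holding the key can round-trip the data.
assert fernet.decrypt(ciphertext) == record
```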

 Organizations find it difficult to reduce insider or malicious data loss incidents without jeopardizing trust. Fifty-one percent of respondents say their organizations are effective or very effective in their ability to monitor insider activity across hybrid and/or remote environments. Only 41 percent of respondents say their organizations are effective or very effective in creating trust while taking steps to reduce data loss incidents caused by negligent or malicious insiders.

 Reducing complexity in organizations’ IT security architecture is needed to have a strong security posture. Seventy-three percent of respondents say reducing complexity is essential (23 percent), very important (23 percent) and important (27 percent). Complexity increases because of new or emerging cyber threats (52 percent of respondents), the Internet of Things (46 percent of respondents) and the rapid growth of unstructured data (44 percent of respondents).

 Accountability for reducing complexity is essential. To reduce complexity, the most essential steps are to appoint one person to be accountable (59 percent of respondents), streamline security and data governance policies (56 percent of respondents) and reduce the number of overlapping tools and platforms (55 percent of respondents). On average, organizations have 15 separate cybersecurity technologies.

To read more key findings and download the entire report, click here. (PDF)

Yes, there is a 9-1-1 for scam victims. Get to know the guardian angels of the Internet — AARP’s Fraud Watch Network

Bob Sullivan

Many years ago, a very smart book editor I worked with (Jill Schwartzman, now at Dutton) gently admonished me because I failed to include resources for consumers in my tirades about the mistreatment of consumers. Thus were born concepts like “Red Tape Tips,” which I’d include at the end of my columns, and an appendix in each book listing consumer advocacy organizations. But that experience forced me to face a stark reality: Most of these organizations don’t really take phone calls. While there are plenty of well-meaning non-profit groups out there that try to fix broken policies favoring the Gotcha Capitalists and criminals, there are hardly any organizations set up to field calls from people who are hurting and need help right now.

There’s no 9-1-1 for a consumer who’s about to get ripped off.

Actually, there is. It’s AARP’s Fraud Watch Network helpline. And I’m proud to say that my work on AARP’s Perfect Scam podcast helps highlight the important work they do.

First, let me say I don’t fault the folks who created or work at various grassroots consumer organizations. They often toil away with skeleton staffs and meager funding, true Davids in a battle against billion-dollar Goliaths.  But it’s just not practical for them to take calls and offer customer support to individual victims or take on their cases.

And yes, if you are the victim of a crime, you can and should call 9-1-1 (or the non-emergency line) and report that to the police. Unfortunately, many in-progress scams are difficult to report — “what’s the crime?” — and local police aren’t always set up to offer on-the-spot advice or empathetic listening.

That’s why I’m happy to talk about the Fraud Watch Network. Its helpline is staffed Monday through Friday, from 9 a.m. to 9 p.m., mainly by trained volunteers, who reach out to every caller within 24 hours or so and offer both empathetic listening and practical advice. They’ve stopped millions of dollars in criminal transactions by giving people a place to turn when they’re in crisis.

Who are these guardian angels of this dangerous digital age? In this week’s Perfect Scam episode, I spotlight two volunteers who do this work. Like most helpline volunteers, Dee Johnson and Mike Alfred are both former victims who once called the helpline; now they are two of the 150 volunteers who give their time because they are called to help others.

At this link, you can find a partial transcript of the episode, in case podcasts aren’t your thing. I do hope you’ll listen, however. You’ll really like Mike and Dee. I want readers to see their kindness and empathy in action — those are in short supply these days, I fear.  But more than anything, I want readers to know that there is a 9-1-1 for scams.  If you or someone you love is caught up in an Internet crime right now, I urge you to call the AARP Fraud Watch Network Helpline at 877-908-3360 or visit the website. You’ll get near-immediate help from experts who really care.

You can also email me, of course, at the address on my contact page. Or you can email The Perfect Scam team at theperfectscampodcast@aarp.org.

AARP’s Helpline is part of AARP’s Fraud Watch Network. In addition to volunteers helping victims, the network has roughly a thousand trained volunteers working in their communities and online to spread the message of fraud prevention. To learn more, visit the AARP Fraud Watch Network website.

Optimizing What Matters Most: The State of Mission-Critical Work

In this study, mission-critical work refers to tasks, systems or processes within an organization that are essential to its operations and survival. If they fail or are disrupted, the organization’s entire operations could be significantly impacted or even brought to a complete halt. It is the most critical work that must be done without interruption to maintain functionality. The difference between mission-critical and business-critical systems is that mission-critical systems are vital to the core mission or primary functions of the organization, while business-critical systems are crucial to business operations and support the organization’s core processes.

In addition to the impact on the sustainability of organizations, mission-critical failures and disruptions can have a ripple effect with significant economic consequences for government and industry sectors. This is particularly the case when failures and disruptions involve critical infrastructure, which encompasses systems and assets essential for the functioning of a society and its economy. These failures can disrupt supply chains, impact business productivity, and lead to economic losses.

To reduce disruptions and failures, organizations need to assess their ability to manage the risks to tasks, systems and/or processes as well as to protect and secure sensitive and confidential data in mission-critical workflows. However, as shown in this research, confidence in understanding the risk, security and privacy vulnerabilities in mission-critical workflows is low.

Respondents were asked to rate their confidence in the privacy and security and their ability to understand the risk profile of their organization’s mission-critical workflows on a scale of 1 = no confidence to 10 = highly confident. Only 47 percent of respondents say they are very or highly confident in understanding the risk profile of mission-critical workflows. Slightly more than half of respondents (52 percent) are very or highly confident in the privacy and security of mission-critical workflows.

The importance of optimizing mission-critical workflows

In the past 12 months, 64 percent of organizations report they experienced an average of six disruptions or failures in executing mission-critical workflows. Respondents say cyberattacks are the number one reason mission-critical failures and disruptions occur. To prevent these incidents, 61 percent of organizations in this research believe a strong security posture is critical.

The disruption or failure of mission-critical workflows can result in the loss of high-value information assets. This is followed by data center downtime, which not only prevents mission-critical work from being completed but can have severe financial consequences. Sixty-three percent of respondents say the number one metric used to measure the cost of a disruption or failure is the cost of downtime of critical operations. According to a study conducted by Ponemon Institute in 2020, the average cost of a single data center outage was approximately $1 million. Forty-six percent of respondents say their organizations’ survivability was affected because of a complete halt to operations.

A strong security posture and knowledgeable mission-critical staff are the most important factors to prevent mission-critical disruption and failures. Organizations need to secure mission-critical workflows to avoid disruptions or failures (61 percent of respondents) supported by a knowledgeable mission-critical staff (57 percent of respondents). Also important is an enterprise-wide incident response plan (51 percent of respondents).

Few organizations have risk mitigation strategies in place as part of their mission-critical collaboration tools. According to 47 percent of respondents, their organizations use mission-critical collaboration tools. However, only 39 percent of respondents have risk mitigation strategies in place. Of these respondents, 59 percent say they have backup procedures to prevent data loss and 54 percent say they have contingency plans to handle unexpected events.

Cyberattacks and system glitches were the primary causes of the disruption or failure. To reduce the likelihood of a disruption or failure, organizations need to ensure the security of their mission-critical workflows. Fifty percent of respondents cite cyberattacks as the cause of disruption or failure followed by 49 percent who say it was a system glitch. Sixty-one percent say a strong security posture is the most important step to preventing disruptions and failures.

Measuring the financial consequences of a disruption or failure can help organizations prioritize the resources needed to secure mission-critical workflows. Fifty-three percent of respondents say their organizations measured the cost of the disruption or failure in executing mission-critical workflows. The metric most often used is the cost of downtime of critical operations (63 percent of respondents), which is also the number two consequence of a disruption or failure. Other metrics are the cost to recover the organization’s reputation (51 percent of respondents) and the cost to detect, identify and remediate the incident (50 percent of respondents).

Organizations should consider increasing the role of IT and IT security functions in assessing cyber risks that threaten workflows’ reliability. Despite the threat of a cyberattack targeting mission-critical workflows, only 16 percent of respondents say the CISO and only 10 percent say the CIO are most responsible for executing mission-critical workflows securely. The function most responsible is the business unit leader, according to 26 percent of respondents.

A dedicated team supports the optimization of mission-critical workflows. Fifty-six percent of respondents say their organizations have a team dedicated to managing mission-critical workflows. The 44 percent of organizations without a dedicated team say it is very or highly difficult to accomplish the goals of mission-critical workflows. According to the research presented in this report, a dedicated team gives organizations the following advantages.

  • Increased effectiveness in prioritizing critical communications among team members
  • More likely to be able to prevent disruptions and failures in executing mission-critical workflows
  • More likely to measure the costs of a disruption or failure to improve the execution of mission-critical workflows
  • Improved efficiency of mission-critical workflow management and effectiveness in streamlining mission-critical workflows
  • More likely to use mission-critical collaboration tools

Mission-critical workflows require setting clear objectives, understanding the requirements, mapping workflows and managing risks. The two most often used activities to manage mission-critical workflows are analyzing current workflow processes (47 percent of respondents) and training mission-critical employees (44 percent of respondents). Only 34 percent of respondents say their organizations are very or highly effective in prioritizing critical communication among team members.

Mission-critical workflows can be overly complex and inefficient. Taking steps to automate repetitive tasks where possible and to regularly review and update workflows are only used by 38 percent and 36 percent of organizations, respectively. Only 46 percent of respondents say their organizations are very or highly effective in streamlining mission-critical workflows to improve their efficiency and very or highly efficient in managing mission-critical workflows.

Ineffective communication about the execution of mission-critical workflows can put organizations’ critical operations at risk. Sixty percent of respondents cite the lack of real-time information sharing, and 58 percent cite the lack of secure information sharing, as barriers to effectively executing mission-critical workflows.

Enterprise-wide incident response plans should be implemented to reduce the time to respond, contain and remediate security incidents that compromise mission-critical workflows. Fifty-one percent of respondents say an enterprise-wide incident response plan is critical to the prevention of disruption and failures. Fifty-nine percent of organizations measure effectiveness based on how quickly compromises to mission-critical workflows are addressed. Organizations also measure their ability to prevent and detect cyberattacks against mission-critical workflows.

Organizations are adopting AI to improve the management of mission-critical workflows. However, organizations need to consider the potential AI security risks to mission-critical workflows. Fifty-one percent of respondents say their organizations have deployed AI. Most often, AI is used to automate repetitive tasks (60 percent) and to secure data used by and harvested from Large Language Models (LLMs). The top AI security risks, according to respondents, are the potential leakage or theft of confidential and sensitive data (53 percent) and potential backdoor attacks on their AI infrastructure, such as sabotage or malicious code injection (48 percent).

Mission-critical collaboration tools are considered very or highly effective, but adoption is slow. Only 47 percent of respondents use mission-critical collaboration tools. However, 54 percent of respondents say these tools are very or highly effective in making workflows efficient with minimum disruption to critical operations. The features considered most important are data encryption (61 percent of respondents), data loss prevention (56 percent of respondents) and the ability to securely enable real-time communication between teams (56 percent of respondents).

To read the rest of this report, including key findings, please visit Mattermost.com

US (finally) issues warning about crypto ATMs

Bob Sullivan

Finally, crypto ATMs are getting a bit of the attention that they deserve.

As host of AARP’s The Perfect Scam podcast, I talk to crime victims every week.  A few years ago, a majority had their money stolen via bogus gift card transactions. Today, it feels like almost every person is the victim of a cryptocurrency scam, and many have their money stolen through crypto ATMs.

I’m sure you’ve seen these curious machines, also known as convertible virtual currency (CVC) kiosks, in convenience stores and gas stations. Put cash in, and you can send or receive crypto around the world.

Crypto ATMs, in theory, democratize crypto. Someone who wouldn’t feel comfortable buying crypto online can do so in a familiar way, using a machine that works just like the ones we’ve used to get cash for many years.  Perhaps you won’t be surprised to hear that crypto ATMs are a bad deal.  Set aside crypto volatility and high transaction fees for a moment: No one who feels uncomfortable opening an online crypto account should be buying or transmitting crypto. Period.

And yet, these crypto ATMs are sprouting up like weeds, at a time when old-fashioned ATMs are disappearing. There were roughly 4,000 crypto ATMs in 2019, and there were more than 37,000 by January of this year.

I know that because the U.S. Treasury’s Financial Crimes Enforcement Network — FinCEN — published a notice Aug. 4 warning financial institutions about crypto ATMs and their connection to crime. The agency also said many of these devices are being put into service without registering as money service businesses with FinCEN, and their operators are sometimes failing to report suspicious activity.

As I mentioned, there really isn’t a use case for these fast-proliferating devices. Well, there’s one. When a criminal has a victim confused and manipulated, the fastest way to steal their money is to persuade them to drive to the nearest crypto ATM and feed the machine $100 bills. I’ve talked to countless victims who’ve told me harrowing, tragic tales of crouching in the dark corner of a gas station, shoving money into one of these machines, terrified they are being watched. In fact, they aren’t. Employees are told not to get involved. So victims drive away, their money stolen in the fastest way possible. The transfer is nearly instant, faster than a wire transfer, and irrevocable.

That means it’s the perfect gadget for criminals like the Jalisco Cartel in Mexico to steal cash from Americans. Particularly elderly Americans, FinCEN says. According to FTC data, people aged 60 and over were more than three times as likely as younger adults to report a loss using a crypto ATM.

“These kiosks have increasingly facilitated elder fraud, especially among tech/customer supports scams, government impersonation, confidence/romance scams, emergency/person-in-need scams, and lottery/sweepstakes scams,” FinCEN said. And the losses are huge. “In 2024, the FBI’s IC3 received more than 10,956 complaints reporting the use of CVC kiosks, with reported victim losses of approximately $246.7 million. This represents a 99 percent increase in the number of complaints and a 31 percent increase in reported victim losses from 2023.”

In other words, we have a five-alarm fire on our hands.  One that’s been blazing in broad daylight for at least a year and yet…every week, I continue to interview victims who crouched near a crypto ATM for days on end, stuffing bills into these machines, thinking they were doing the right thing.

Banks and kiosk operators should do much more. The current daily limits on transactions aren’t low enough; victims are simply instructed to drive all over town, or make daily deposits for weeks on end, so criminals can steal hundreds of thousands of dollars this way. Regulators should do more, too. If the majority of transactions flowing through a certain kiosk can be traced to fraud, the machine should be removed immediately. It’s not impossible: the UK recently ordered all crypto ATMs shut down.

Tech can enhance our lives; it can also be weaponized. And when it is, we shouldn’t stand idly by and act as if we are powerless to stop the pain it is causing our most vulnerable people.

The State of Identity and Access Management (IAM) Maturity

Larry Ponemon

Identity Management Maturity (IDM) refers to the extent to which an organization effectively manages user identities and access across its systems and applications. It’s a measure of how well an organization is implementing and managing Identity and Access Management (IAM) practices. A mature IDM program ensures that only authorized users have access to the resources they need, enhancing security, reducing risks and improving overall efficiency.

Most organizations remain in the early to mid-stages of Identity and Access Management (IAM) maturity, leaving them vulnerable to identity-based threats. This new study of 626 IT professionals by the Ponemon Institute, sponsored by GuidePoint Security, highlights that despite growing awareness of insider threats and identity breaches, IAM is under-prioritized compared to other IT security investments. All participants in this research are involved in their organizations’ IAM programs.

Key Insights:

  • IAM is underfunded and underdeveloped.

Only 50 percent of organizations rate their IAM tools as very or highly effective, and even fewer (44 percent) express high confidence in their ability to prevent identity-based incidents. According to 47 percent of organizations, investments in IAM technologies trail behind other security investment priorities.

  • Manual processes are stalling progress.

 Many organizations still rely on spreadsheets, scripts and other manual efforts for tasks like access reviews, deprovisioning and privileged access management—introducing risk and inefficiencies.

  • High performers show the way forward.

 High performers in this research are those organizations that self-report their IAM technologies and investments are highly effective (23 percent). They report fewer security incidents and stronger identity controls. These organizations also lead the other organizations represented in this research in adopting biometric authentication, identity threat detection and integrated governance platforms.

  • Technology and expertise gaps persist.

 A lack of tools, skilled personnel and resources is preventing broader progress. Many IAM implementations are driven by user experience goals rather than security or compliance needs.

Bottom Line:

Achieving IAM maturity requires a strategic shift—moving from reactive, manual processes to integrated, automated identity security. Organizations that treat IAM as foundational to cybersecurity, not just IT operations, are best positioned to reduce risk, streamline access and build trust in a dynamic threat landscape.

Part 2. Introduction: Including a Peek at High Performer Trends

The purpose of an Identity and Access Management (IAM) program is to manage user identities and access across systems and applications. A mature IAM program ensures that only authorized users have access to the resources they need, enhancing security, reducing risks and improving overall efficiency.

This survey, sponsored by GuidePoint Security, was designed to understand how effective organizations are in achieving IAM maturity and which tools and practices are critical components of their identity and access management programs. A key takeaway from the research is that organizations’ continued dependency on manual processes as part of their IAM programs is a barrier to achieving maturity and reducing insider threats. Such a lack of maturity can lead to data breaches and security incidents caused by negligent or malicious insiders.

Recent examples of such events include former Tesla employees in 2023 who leaked sensitive data about 75,000 current and former employees to a foreign media outlet. In August 2022, Microsoft experienced an insider data breach where employees inadvertently shared login credentials for GitHub infrastructure, potentially exposing Azure servers and other internal systems to attackers.

According to the research, investments in IT security technologies are prioritized over IAM technologies. Without the necessary investments in IAM, organizations lack confidence in their ability to prevent identity-based security incidents. Respondents were asked to rate, on 10-point scales: the effectiveness of their organizations’ tools and investments in combating modern identity threats (1 = not effective to 10 = highly effective); their confidence in the ability to prevent identity-based security incidents (1 = not confident to 10 = highly confident); and the priority of investing in IAM technologies compared to other security technologies (1 = not a priority to 10 = high priority).

Only half (50 percent of respondents) believe their tools and investments are very effective and only 44 percent of respondents are very or highly confident in their ability to prevent identity-based security incidents. Less than half of the organizations (47 percent of respondents) say investing in IAM technologies compared to other IT security technologies is a high priority.

Best practices in achieving a strong identity security posture

To identify best practices in achieving a strong identity security posture, we analyzed the responses of the 23 percent of IT professionals who rated the effectiveness of their tools and investments in combating modern identity threats as highly effective (9+ on a scale from 1 = low effectiveness to 10 = high effectiveness). We refer to these respondents and their organizations as high performers. Seventy-seven percent of respondents rated their effectiveness on a scale from 1 to 8. We refer to this group as “other” in the report.

Organizations that have more effective tools and investments to combat modern identity threats are less likely to experience an identity-based security incident. Only 39 percent of high performers had an identity-based security incident.

High performers are outpacing other organizations in the adoption of automation and advanced identity security technologies.  

  • Sixty-four percent of high performers vs. 37 percent of other respondents have adopted biometric authentication.
  • Fifty-nine percent of high performers vs. 34 percent of other respondents use automated mechanisms that check for compromised passwords.
  • Fifty-six percent of high performers vs. 23 percent of other respondents have a dedicated PAM platform.
  • Fifty-three percent of high performers vs. 31 percent of other respondents use IAM platforms and/or processes to manage machine, service and other non-human accounts or identities.

 High performers are significantly more likely to assign privileged access to a primary account (55 percent vs. 30 percent). Only 25 percent of high performers vs. 33 percent of other respondents use manual or scripted processes to temporarily assign privileged accounts.

 High performers are leading in the adoption of ITDR, ISPM and IGA platforms. 

  • Thirty-seven percent of high performers vs. 12 percent of other respondents have adopted ITDR.
  • Thirty-five percent of high performers vs. 15 percent of other respondents have adopted ISPM.
  • Thirty-one percent of high performers vs. 9 percent of other respondents have adopted IGA platforms.

 Following are highlights from organizations represented in this research.

 Identity verification solutions are systems that confirm the authenticity of a person’s identity, typically in digital contexts such as online transactions or applications. These solutions use various methods to verify a person’s identity and ensure only authorized users have access to the resources they need.

Few organizations use identity verification solutions and services to confirm a person’s claimed identity. Only 39 percent of respondents say their organizations use identity verification solutions and services. If they do use identity verification solutions and services, they are mainly for employee and contractor onboarding (37 percent of respondents). Thirty-three percent of respondents say it is part of customer registration and vetting, and 30 percent of respondents say it is used for both employee/contractor and customer.

Reliance on manual processes stalls organizations’ ability to achieve maturity. Less than half of organizations (47 percent) have an automated mechanism that checks for compromised passwords. If they do automate checks for compromised passwords, 37 percent of respondents say it is for both customer and workforce accounts, 34 percent only automate checks for customer accounts, and 29 percent only automate checks for workforce accounts.
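One common way to automate such a check is the Have I Been Pwned “Pwned Passwords” range API, which uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your network. Here is a minimal sketch using the requests library; the policy decision at the end is an illustrative assumption.

```python
# Minimal sketch of an automated compromised-password check against the
# Have I Been Pwned range API. Only the first 5 hex chars of the SHA-1
# hash are sent; matching is done locally against the returned suffixes.
import hashlib
import requests

def times_pwned(password: str) -> int:
    """Return how often a password appears in known breach corpora."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    resp = requests.get(f"https://api.pwnedpasswords.com/range/{prefix}",
                        timeout=10)
    resp.raise_for_status()
    for line in resp.text.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

# Illustrative policy: reject any password seen in a breach even once.
if times_pwned("Password123!") > 0:
    print("Reject: this password appears in known breaches.")
```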

 To close the identity security gap, organizations need technologies, in-house expertise and resources. However, as discussed previously, more resources are allocated to investments in IT security. Fifty-four percent of respondents say there is a lack of technologies. Fifty-two percent say there is a lack of in-house expertise, and 45 percent say it is a lack of resources.

 Security is not a priority when making IAM investment decisions. Despite many high-profile examples of insider security breaches, 45 percent of respondents say the number one priority for investing in IAM is to improve user experience. Only 34 percent of respondents say investments are prioritized based on the increasing number of regulations or industry mandates, and 31 percent cite the constant turnover of employees, contractors, consultants and partners.

To achieve greater maturity, organizations need to improve the ability of IAM platforms to authenticate and authorize user identities and access rights. Respondents were asked to rate the effectiveness of their IAM platform in the user access provisioning lifecycle, from onboarding through termination, and its effectiveness in authenticating and authorizing, on a scale of 1 = not effective to 10 = highly effective. Only 46 percent of respondents say their IAM platform is very or highly effective for authentication and authorization. Fifty percent rate the effectiveness of their IAM platforms’ user access provisioning lifecycle, from onboarding through termination, as very or highly effective.

Policies and processes are rarely integrated with IAM platforms in the management of machine, service and other non-human accounts or identities. Forty-four percent of respondents say their IAM platform and/or processes are used to manage machine, service and other non-human accounts or identities, and another 39 percent say their organizations are in the adoption stage. Of this combined 83 percent of respondents, 39 percent say the use of the IAM platform to manage these accounts is ad hoc. Only 28 percent say management is governed by policy and/or processes and integrated with the IAM platform.

IAM platforms and/or processes are used to perform periodic access review, attestation and certification of user accounts and entitlements, but the work is mostly manual. While most organizations conduct these reviews, 34 percent of respondents say they are done manually with spreadsheets, and 36 percent say their organizations use custom in-house built workflows. Only 17 percent of respondents say reviews are executed through the IAM identity governance platform. Only 41 percent of respondents say access to internal applications and resources is granted based on users’ roles and needs, which streamlines onboarding, offboarding and access management. On average, only 38 percent of internal applications are managed by their organizations’ IAM platforms.

Deprovisioning non-human identities, also known as non-human identity management (NHIM), focuses on removing or disabling access for digital entities like service accounts, APIs, and IoT devices when they are no longer needed. This process is crucial for security, as it helps prevent the misuse of credentials by automated systems that could lead to data breaches or system compromises.

Deprovisioning user access is mostly manual. Forty-one percent of respondents say their organizations include non-human identities in deprovisioning user access. Of those respondents, 40 percent say NHI deprovisioning is mostly a manual process. Twenty-seven percent of respondents say the process is automated with a custom script and 26 percent say it is automated with a SaaS tool or third-party solution.
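To illustrate what the "custom script" route might involve, here is a hedged sketch that deactivates stale service-account credentials. It assumes the non-human identities are AWS IAM accounts and uses a 90-day staleness window; the report names no particular platform or policy, so both are assumptions for the example.

```python
# A hypothetical sketch of the "custom script" approach to non-human
# identity deprovisioning, assuming the identities are AWS IAM service
# accounts; the 90-day staleness window is an assumption for this example.
from datetime import datetime, timedelta, timezone
import boto3  # third-party AWS SDK

STALE_AFTER = timedelta(days=90)

def deprovision_stale_keys() -> None:
    iam = boto3.client("iam")
    now = datetime.now(timezone.utc)
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            name = user["UserName"]
            keys = iam.list_access_keys(UserName=name)["AccessKeyMetadata"]
            for key in keys:
                last_used = iam.get_access_key_last_used(
                    AccessKeyId=key["AccessKeyId"]
                )["AccessKeyLastUsed"].get("LastUsedDate")
                if last_used is None or now - last_used > STALE_AFTER:
                    # Disable rather than delete, so access can be restored
                    # if the credential turns out to still be needed.
                    iam.update_access_key(
                        UserName=name,
                        AccessKeyId=key["AccessKeyId"],
                        Status="Inactive",
                    )

if __name__ == "__main__":
    deprovision_stale_keys()
```

Disabling rather than deleting is the conservative design choice here: a wrongly deactivated key can be switched back on, while a deleted one cannot.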

Few organizations integrate privileged access with other IAM systems, and when they do, the integration is not effective. Forty-two percent of respondents say privileged access management (PAM) runs on a dedicated platform. Twenty-seven percent say privileged access is integrated with other IAM systems, and 31 percent say privileged access is managed manually. Of the 27 percent who have integrated, only 45 percent rate the effectiveness of their organizations’ IAM platforms for PAM as very or highly effective.

To read the full findings of this report, visit Guidepoint’s website.

Minnesota assassin used data brokers as a deadly weapon

I’ve called Amy Boyer the first person killed by the Internet…with only a hint of a stretch in my assertion.  She was stalked and murdered by someone who tracked her down using a data broker …in 1999.  I told her story in a documentary podcast called “No Place to Hide” published five years ago, on the 20th anniversary of her death.

The dark events that took place in Minnesota last month show we’ve learned just about nothing, a solid 25 years after Amy’s unnecessary death.

Bob Sullivan

When alleged assassin Vance Boelter left his home on June 13, he had a list of 45 state politicians in his car, and enough ammunition to kill dozens of them. He also had a notebook full of their personal information, including home addresses. That notebook also had detailed information on 11 different Internet data brokers — how long their free trials were, how he could get home addresses. Most of them have names you’ve probably seen in online ads — I’ve redacted them in the image above to avoid giving them any unnecessary publicity.

Boelter stalked his victims digitally. He ultimately killed Rep. Melissa Hortman and her husband, and shot a second state legislator and his wife, before his rampage ended. The horrific attack could have been even worse — and it was fueled, in part, by data brokers.

As stories of political violence mount in the U.S., a fresh spotlight is being shined on security for public officials — politicians, judges, government bureaucrats, even corporate executives. But America has failed for decades to take even basic steps to protect our privacy, failing again and again to pass a federal privacy law, even failing to do much about the hundreds of data brokers that profit off of selling our personal information.

What was the role of data brokers in this horrific crime and what more could be done to protect elected officials — protect all of us — going forward? I host a podcast for Duke University called Debugger, and in this recent episode, I talk with David Hoffman, a professor at Duke University and director of the Duke Initiative for Science and Society.

Would Boelter have found his victims without data brokers? Perhaps, perhaps not. We’ll never know.  But why do we seem to be making things so easy for stalkers, for murderers? Why do we pretend to be helpless bystanders when there are simple steps our society can take to make things harder for stalkers?

————Partial transcript————–

(lightly edited for clarity)

David Hoffman: We’ve known for quite a while that people have actually been getting killed because of the accessibility of data from data brokers. These people-search websites and people-search data brokers are really the bottom feeders of the Internet economy. What we haven’t seen is something of such high profile as this particular instance, and it’s my hope that it’s going to serve as a catalyst for us to take some of the very reasonable policy actions that we could take to address this and make sure something like this doesn’t happen in the future.

Bob: It’s not just elected officials or CEOs of companies that are at risk for this, right? Who else might be at risk from digital stalking and from the information that can be gleaned from a data broker?

David Hoffman: I think some of the cases that we’ve seen have been, for instance, victims of domestic violence and stalking. But it can be just about anyone who, for one reason or another, has someone to fear … who can find out who they are, where they live, and other personal information about them and their family.

Bob: I know Duke has done some research on data brokers and their impact on national security and other issues. What kind of research have you done and what have you found?

David Hoffman: We’ve actually led a program on data broker research for six years now, and what we have done is show that this data has real economic value, that people are paying for it, and that brokers are creating and selling the kinds of lists that are horrific.

Let me give you an example. We have found that there are entities out there that are collecting lists for sale of personal information about veterans and members of the military. We have found that there are people out there creating lists about people who are in the early stages of Alzheimer’s and dementia, and those people are selling those lists to scam artists, particularly because those people are at risk.

So we have actually done research where we’ve gone out and we have purchased this data from data brokers, and then we have analyzed what we have received and we see a tremendous amount of sensitive information, including information about sexual orientation of individuals and healthcare information.

Bob: Are there national security risks as well from the sale of information at data brokers?

David Hoffman: You can imagine for the list that I described for members of the military and veterans …. not just information about them, but understanding information about their families, the issues that there could be for blackmail and for people trying to compromise people’s security clearances and get access to information.

Bob: I know there’s long been this perception that, you know, “You have no privacy, get over it.” There’s this helpless feeling many of us have that our information is out there. It’s hard for me to imagine sitting here…How could I make sure no one could find my home address, for example? Is there anything that Congress could do or policymakers could do to make this situation any better?

David Hoffman: Absolutely. I think there’s a number of things that people could do. So first of all, we have to take a look at where these entities are getting a lot of this information.

You know, for decades we have had public records that store people’s addresses, but before those records were digitized and made available on the internet, you would have to go to a clerk’s office in an individual county or city, know what you were looking for, and be able to file an access request to get that information.

Now what we have done all across the United States is provide ready and open access to all of that information, so that these nuts can access it en masse, process it and then further sell it. We need privacy laws that include the protection of public records, because we include personal information when we purchase real estate, make small business filings, or become involved in a court case. Yes, those produce public records, but we never intended them to be readily available to everyone at a moment’s notice, on a computer or via automated bots that collect them and provide the information to anybody willing to pay a relatively small amount of money, usually under $20.

Bob: And in many cases free. One of the chilling elements of the affidavit I read … in the Minnesota case … he’s got a list of…how long the free trials are, what information you can get from each site… so you often don’t have to pay anything to get this kind of information, right?

David Hoffman: That’s absolutely right. And this just demonstrates once again how important it is that we have a comprehensive privacy law in the United States, like they have in almost every other developed country around the world, that would provide protection for this kind of information. This isn’t something that’s going to chill innovation. We don’t need the kind of innovation where people create spy-on-your-neighbor websites where you can learn all of this about anyone at a moment’s notice.

We can still have innovation. We can still drive social progress with the use of data while providing much stronger protections for it.

The 2025 Study on the State of AI in Cybersecurity

INTRODUCTION: The overall threat from China and other adversaries has only increased over time, accelerated and exacerbated by technological innovation and adversaries’ access to AI. In an article from January 2025, Jen Easterly, former director of the Cybersecurity and Infrastructure Security Agency (CISA), lays out some of the risks to US critical infrastructure. CISA defines critical infrastructure as encompassing 16 sectors, from utilities to government agencies to banks and the entire IT industry. Outages happen consistently across all sectors and vulnerabilities are everywhere. So the key for all cyber programs is to continue improving early detection and early response.

After the CrowdStrike outage in 2024 that affected thousands of hospitals, airports and businesses worldwide, Easterly said: “We are building resilience into our networks and our systems so that we can withstand a significant disruption or at least drive down the recovery time to be able to provide services, which is why I thought the CrowdStrike incident — which was a terrible incident — was a useful exercise, like a dress rehearsal, for what China may want to do to us in some way and how we react if something like that happens. We have to be able to respond very rapidly and recover very rapidly in a world where [an issue] is not reversible.”

What will organizations do to combat persistent threats and cyberattacks from increasingly sophisticated adversaries? A goal of this MixMode-sponsored research is to provide information on how industry can leverage AI in its cybersecurity plans to detect attacks earlier (be predictive) and recover from attacks more quickly.


Organizations are in a race to adopt artificial intelligence (AI) technologies to strengthen their ability to stop the constant threats from cyber criminals. This is the second annual study sponsored by MixMode on this topic. The purpose of this research is to understand how organizations are leveraging AI to effectively detect and respond to cyberattacks.

Ponemon Institute surveyed 685 US IT and IT security practitioners in organizations that have adopted AI in some form. These respondents are familiar with their organization’s use of AI for cybersecurity and have responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.

Since last year’s study, organizations have not made progress in their ability to integrate AI security technologies with legacy systems and streamline their security architecture to increase AI’s value. More respondents believe it is difficult to integrate AI-based security technologies with legacy systems, an increase from 65 percent to 70 percent of respondents. Sixty-seven percent of respondents, a slight increase from 64 percent of respondents, say their organizations need to simplify and streamline their security architecture to obtain maximum value from AI. Most organizations continue to use AI to detect attacks across the cloud, on-premises and hybrid environments.

The following research findings reveal the benefits and challenges of AI.

How organizations are using AI to improve their security posture

In just one year since the research was first conducted, organizations are reporting that their security posture has significantly improved because of AI.  The biggest changes are improving the ability to prioritize threats and vulnerabilities (an increase from 50 percent to 56 percent of respondents), increasing the efficiency of the SOC team (from 43 percent to 51 percent) and increasing the speed of analyzing threats (from 36 percent to 43 percent).

Since 2024, the maturity of AI programs has increased. Fifty-three percent of organizations have achieved full adoption stage (31 percent of respondents) or mature stage (22 percent of respondents). This is an increase from 2024 when 47 percent of respondents said they had reached the full adoption stage (29 percent of respondents) or mature stage (18 percent of respondents).

AI-based security technologies increase productivity and job satisfaction. Seventy percent of respondents say AI increases the productivity of IT security personnel, an increase from 66 percent in 2024. Fifty-one percent of respondents say AI improves the efficiency of junior analysts so that senior analysts can focus on critical threats and strategic projects. Sixty-nine percent of respondents say since the adoption of AI, job satisfaction has improved because of the elimination of tedious tasks, an increase from 64 percent.

Forty-four percent of respondents are using AI-powered cybersecurity tools or solutions. By leveraging advanced algorithms and machine learning techniques, AI-powered systems analyze vast amounts of data, identify patterns and adapt their behavior to improve performance over time.
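For a sense of the underlying technique, here is a toy sketch of machine-learning anomaly detection of the sort these tools apply at much larger scale. Everything in it, from the feature set to the numbers, is invented for illustration; it is not any surveyed vendor’s implementation.

```python
# An illustrative toy, not any vendor's product: the kind of anomaly
# detection an AI-powered security tool might embed. The feature names
# and values here are invented for the example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Hypothetical per-session features: [bytes_sent, duration_sec, failed_logins]
normal_traffic = rng.normal(loc=[5000, 300, 0.2],
                            scale=[1500, 90, 0.4],
                            size=(1000, 3))
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# A session with a huge upload, a short duration and many failed logins
suspect = np.array([[250000, 30, 9]])
print(model.predict(suspect))  # -1 = flagged as anomalous, 1 = normal
```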

Forty-three percent of respondents are using pre-emptive security tools to stay ahead of cybercriminals. Pre-emptive security tools apply AI-based data analysis to cybersecurity so organizations can anticipate and prevent future attacks. The benefits include the ability to pre-emptively deter threats and minimize damage, prioritize tasks effectively and address the most important business risks first. Pre-emptive security data can guide response teams and offer insights into an attack’s objectives, potential targets and more. The result is continuous improvement that ensures more accurate forecasts and reduces the costs associated with handling attacks.

Respondents say pre-emptive security is used to identify patterns that signal impending threats (60 percent), assess risks to identify emerging threats and their potential impact (57 percent) and harness vast amounts of online metadata from various sources as an input to predictive analytics (52 percent).

Pre-emptive security will decrease the ability of cybercriminals to direct targeted attacks. Fifty-two percent of respondents in organizations that use pre-emptive security say that without it cybercriminals will become more successful at directing targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. Forty-nine percent say investments are being made in pre-emptive AI to stop AI-driven cybercrimes.

Fifty-eight percent of respondents say their SOCs use AI technologies. The primary benefit of an AI-powered SOC is that alerts are resolved faster, according to 57 percent of respondents. In addition to faster resolution of alerts, 55 percent of respondents say it frees up analyst bandwidth to focus on urgent incidents and strategic projects. Fifty percent of respondents say it applies real-time intelligence to identify patterns and detect emerging threats.

An AI-powered SOC is effective in reducing threats. Human analysts are effective as the final line of defense in the AI-powered SOC. Fifty-seven percent of respondents say AI in the SOC is very or highly effective in reducing threats and 50 percent of respondents say their human analysts are very or highly effective as the final line of defense in the AI-powered SOC.

More organizations are creating one unified approach to managing both AI and privacy security risks, an increase from 37 percent to 52 percent of respondents.  In addition, 58 percent of respondents say their organizations identify vulnerabilities and what can be done to eliminate them.

The barriers and challenges to maximizing the value from AI

While an insufficient budget to invest in AI technologies continues to be the primary governance challenge, more organizations say internal expertise is needed to validate vendors’ claims. The share of respondents citing a lack of internal expertise to validate vendors’ claims increased significantly, from 53 percent to 59 percent. One of the key takeaways from the research is that 63 percent of respondents say the decision to invest in AI technologies is based on the extensiveness of the vendor’s expertise.

As the number of cyberattacks increases, especially malicious insider incidents, organizations lack confidence in their ability to prevent risks and threats. Fifty-one percent of respondents say their organizations had at least one cyberattack in the past 12 months, an increase from 45 percent of respondents in 2024.

Only 42 percent say their organizations are very or highly effective in mitigating risks, vulnerabilities and attacks across the enterprise. The attacks that increased since 2024 are malicious insiders (53 percent vs. 45 percent), compromised/stolen devices (40 percent vs. 35 percent) and credential theft (53 percent vs. 49 percent). The primary types of attacks in both 2024 and 2025 were phishing/social engineering and web-based attacks.

The effectiveness of AI technologies is diminished by interoperability issues and an increasingly heavy reliance on legacy IT environments. The barriers to the effectiveness of AI-based security technologies are interoperability issues (63 percent, an increase from 60 percent of respondents), the inability to apply AI-based controls that span the entire enterprise (59 percent vs. 61 percent of respondents) and the inability to create a unified view of AI users across the enterprise (56 percent vs. 58 percent of respondents). The most significant trend is the heavier reliance on legacy IT environments, an increase from 36 percent to 45 percent of respondents.

Complexity challenges the preparedness of cybersecurity teams to work with AI-powered tools. Only 42 percent of respondents say their cybersecurity teams are highly prepared to work with AI-powered tools. Fifty-five percent of respondents say AI-powered solutions are highly complex.

AI continues to make it difficult to comply with privacy and security mandates and to safeguard confidential and personal data. Forty-eight percent of respondents say it is highly difficult to achieve compliance, and 53 percent say it is highly difficult to safeguard confidential and personal data in AI.

To read key findings and the rest of this report, visit MixMode’s website.

Trying to avoid a heart attack, Instagram attacked me with ads

This confronted me in the waiting room.

Bob Sullivan

It’s almost like they want you to be sick…

I went for a relatively routine cardiovascular screening the other day, and I’ve never hated social media more. Before I even walked into the exam room, I was pounded by ads telling me the test I was taking wasn’t good enough (But there is this other test…); the drugs I might be taking wouldn’t work (But here’s a rare supplement pill…); and the things I was doing wouldn’t make me healthier (You know what will? Dropping all social media).

The ad cluster above is a tiny fraction of the digital attack I suffered in the days around my doctor visit. I didn’t capture them all; I was busy trying to get healthy. My smartphone did not help. The digital surveillance I was subjected to just made me furious, and I know my case is tame. There are endless stories about cruel baby ads that follow women around after a miscarriage, for example.

The thing is, I’m a technology reporter who specializes in privacy. I’m a visiting scholar at Duke University, where I work on research around privacy. I use social media for work; I believe I have to. And I’ve tried my best to defend my accounts against just this kind of attack without completely locking them down (at which point, they are no longer social media). I just checked — I have unchecked “share my information with our partners” and every other such option wherever it can be found. I’ve even banned weight loss ads from my feed, which is a fascinating stand-alone option. So is Meta’s tsk-tsking of my prudish choices – “No, don’t make my ads more relevant… your ads will use less of your information and be more likely to apply to other people.”

Sounds like a dream.

Here’s why I am so repulsed by this. Meta/Facebook/Instagram knew exactly where I was and what I was doing.  But it didn’t just show me relevant ads.  It knew I’d be vulnerable. It knew I might get bad news. And then it targeted me with crazy, untested products that would probably make me sicker. It’s vile and it needs to stop.  This isn’t capitalism and it isn’t free speech. It’s using technology to attack people when they are nearly defenseless.

How did Instagram know what I was doing? Well, in theory, it could have just made an educated guess based on my age. More likely it had noted some Internet searches I’d made recently, perhaps had access to some kind of “de-identified” information in my email or my calendar tool, and perhaps it made some deductions from my smartphone’s location information. I don’t know. What I do know is that I spent a good long while in the waiting room clicking through menus trying to get those ads to disappear. And I failed.

Yes, I didn’t have to look at Instagram while in the waiting room, but I would have been the only one there not staring at a smartphone. What else do you do while waiting for a health test? Those rooms are already crammed with anxious energy unlike anything humans experience elsewhere — souls of all ages facing everything from routine tests to a final exam that quite literally might mean life or death. It is no place to practice surveillance capitalism.

Targeted health advertising is dangerous, social media companies. Turning it off should be easy.  Congress, hurry up and make them do that with a federal privacy law.