2024 Global Study on Securing the Organization with Zero Trust, Encryption, Credential Management & HSMs

To stave off never-ending security exploits, organizations are investing in advanced technologies and processes. The purpose of this report, sponsored by Entrust, is to provide important information about the use of zero trust, encryption trends, credential management and HSMs to prepare for and prevent cyberattacks. The research also reveals what organizations believe to be the most significant threats. The top three are hackers, system or process malfunction and unmanaged certificates.

A second report will present the research findings on PKI and IoT, as well as how organizations are preparing to transition to post-quantum cryptography in order to mitigate the quantum threat. For both reports, Ponemon Institute surveyed 4,052 IT and IT security practitioners who are familiar with the use of these technologies in their organizations.

“With the rise of costly breaches and AI-generated deepfakes, synthetic identity fraud, ransomware gangs, and cyber warfare, the threat landscape is intensifying at an alarming rate,” said Samantha Mabey, Director, Solutions Marketing at Entrust. “This means that implementing a Zero Trust security practice is an urgent business imperative – and the security of organizations’ and their customers’ data, networks, and identities depends on it.”

The countries in this research are the United States (908 respondents), United Kingdom (458 respondents), Canada (473 respondents), Germany (582 respondents), UAE (355 respondents), Australia/New Zealand (274 respondents), Japan (334 respondents), Singapore (367 respondents) and Middle East (301 respondents).

Organizations are adopting zero trust because of cyber risk concerns. Zero trust is defined in this research as an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets and resources. It assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location or based on asset ownership. Sixty-two percent of respondents say their organizations have adopted zero trust at some level. However, only 18 percent of respondents have implemented all zero-trust principles.

 In the survey, 67 percent of respondents say the most important drivers for implementing a zero-trust strategy are the risk of a data breach and/or other security incidents (37 percent) and the expanding attack surface (30 percent).
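
To make the deny-by-default principle of zero trust concrete, here is a minimal, hypothetical sketch (not drawn from the report) of an access-decision function that never grants access based on network location alone; the attribute names and policy rules are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_id: str
    mfa_verified: bool          # identity has been strongly authenticated
    device_compliant: bool      # endpoint posture check passed
    resource_sensitivity: str   # "low", "medium" or "high"
    network: str                # "corporate", "vpn", "internet" -- deliberately NOT trusted

def authorize(req: AccessRequest) -> bool:
    """Deny by default; never grant access just because the request
    originates from the corporate network."""
    if not req.mfa_verified:
        return False            # identity must always be verified
    if not req.device_compliant:
        return False            # unmanaged devices are never trusted
    if req.resource_sensitivity == "high" and req.network == "internet":
        return False            # example of an extra, context-aware rule
    return True

# A request from inside the corporate network is still denied
# if the device fails its posture check.
print(authorize(AccessRequest("alice", True, False, "medium", "corporate")))  # False
```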

Following are the most salient findings from this year’s research.

 The slow but growing adoption of zero trust

  • As evidence of the importance of zero trust to secure the organization, 57 percent of respondents that have or will implement zero trust say their organizations will include zero trust in their encryption plans or strategies. Sixty-two percent of respondents say their organizations have implemented all zero-trust principles (18 percent), implemented some zero-trust principles (12 percent), laid the foundation for a zero-trust strategy (14 percent) or started exploring various solutions to help implement their zero-trust strategy (18 percent). According to the research, a lack of in-house expertise is slowing adoption.
  • Senior leaders are supporting an enterprise-wide zero-trust strategy. Fifty-nine percent of respondents say their leadership has significant or very significant support for zero trust. As evidence of senior leadership’s support, only 37 percent of respondents say lack of leadership buy-in is a challenge. The biggest challenges when implementing zero trust are lack of in-house expertise (47 percent of respondents) and lack of budget (40 percent of respondents). 
  • Securing identities is the highest priority for a zero-trust strategy. Respondents were asked to select the one area that has the highest priority for their zero-trust strategy. The risk areas are identities, devices, networks, applications and data. Forty percent of respondents say identities and 24 percent of respondents say devices are the priorities. 
  • Best-of-breed solutions are most important for a successful zero-trust strategy (44 percent of respondents). This is followed by an integrated solution ecosystem from one to three vendors (22 percent of respondents). 

Trends in encryption and encryption in the public cloud: 2019 to 2024 

  • Hackers are becoming more of a threat to sensitive and confidential data. Organizations need to make the hacker threat an important part of their security strategies. Since the last report, the share of respondents citing hackers as the biggest concern for protecting sensitive and confidential information has increased significantly, from 29 percent to 46 percent. 
  • Management of keys and enforcement of policy continue to be the most important features in encryption solutions. Respondents were asked to rate the importance of certain features in encryption solutions. The most important features are management of keys, enforcement of policy and system performance and latency. 
  • Since 2019, organizations have been steadily transferring sensitive and confidential data to public clouds, whether or not it is encrypted or made unreadable via some other mechanism. In this year’s study, 80 percent of respondents say their organizations currently transfer such data (52 percent) or are likely to do so in the next 12 to 24 months (28 percent). 
  • Encryption performed on-premises prior to sending data to the cloud, using the organization’s own keys, has declined significantly since 2019; only 23 percent of respondents say encryption is performed this way (a rough sketch of the approach follows this list). The main methods for protecting data at rest in the cloud are now encryption using keys generated and managed by the cloud provider (39 percent of respondents) or encryption performed in the cloud using keys the organization generates and manages on-premises. 
  • There has been a significant decrease in organizations only using keys controlled by their organization (from 42 percent to 22 percent of respondents). Instead, the primary strategy for encrypting data at rest in the cloud is the use of a combination of keys controlled by their organization and by the cloud provider, with a preference for keys controlled by their organization, a significant increase from 19 percent of respondents to 32 percent of respondents in 2024. This is followed by only using keys controlled by the cloud provider (24 percent of respondents). 
  • The importance of privileged user access controls has increased significantly. Respondents were asked to rate the importance of cloud encryption features on a scale of 1 = not important to 5 = most important. Privileged user access controls increased from 3.23 in 2022 to 4.38 in 2024 on the 5-point scale. The importance of granular access controls and the ability to encrypt and rekey data while in use without downtime also increased significantly. 
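
As a rough illustration of the on-premises/bring-your-own-key approach referenced in the list above, the sketch below encrypts a file locally with a key the organization generates and keeps, so only ciphertext ever reaches the cloud provider. It assumes the Python `cryptography` package; the file names and the omitted upload step are placeholders, not a specific vendor API.

```python
from cryptography.fernet import Fernet

# The key is generated and stored on-premises (for example, in an internal
# KMS or HSM); it never leaves the organization's control.
key = Fernet.generate_key()
fernet = Fernet(key)

with open("customer_records.csv", "rb") as f:        # hypothetical local file
    ciphertext = fernet.encrypt(f.read())

with open("customer_records.csv.enc", "wb") as f:
    f.write(ciphertext)

# Only the encrypted blob is handed to the cloud provider's storage API
# (upload call omitted here); decryption requires the on-premises key.
```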

Trends in credential management and HSMs: 2019 to 2024 

  • Lack of skilled personnel and no clear ownership make the management of credentials painful. Fifty-nine percent of respondents say managing keys has a severe impact on their organizations. There are interesting trends in what has caused the pain since 2019. The lack of skilled personnel (50 percent of respondents) and no clear ownership (47 percent of respondents) continue to make credential management difficult, and insufficient personnel increased from 34 percent to 46 percent of respondents. Causing less pain are inadequate key management tools (from 52 percent to 32 percent) and isolated and fragmented systems (from 46 percent to 29 percent). 
  • Many types of keys are getting less painful to manage. Between 2019 and 2024, the following keys became less painful to manage: keys for external cloud or hosted services, including Bring Your Own Key (BYOK) (from 54 percent to 22 percent of respondents), SSH keys (from 57 percent to 27 percent of respondents) and signing keys (e.g., code signing and digital signatures) (from 52 percent to 25 percent of respondents). 
  • Management of credentials is challenging because it is harder to consistently apply security policies over credentials used across multi-cloud and cross-cloud environments. Fifty-five percent of respondents say the management of credentials is becoming more challenging in a multi-cloud and cross-cloud environment. Thirty-six percent of respondents attribute this to the difficulty of consistently applying security policies over credentials used across cloud services, followed by the difficulty of maintaining visibility over the credentials that protect and enable access to critical data and applications (33 percent of respondents). The applications that require the use of credential management across cloud-based deployments are mainly KMIP-compliant applications (44 percent of respondents) and databases, back-up and storage (43 percent of respondents). 
  • More organizations are using Hardware Security Modules (HSMs). An HSM is a dedicated crypto processor specifically designed to protect the crypto key lifecycle (a usage sketch follows this list). Since 2019, the use of HSMs has increased from 47 percent of respondents to 55 percent of respondents. 
  • Organizations value the use of HSMs. Since 2019, organizations have been increasing the use of HSMs as part of their encryption and credential management strategies. The use of HSMs for application-level encryption, database encryption and TLS/SSL has increased significantly. For the first time, respondents were asked where HSMs are deployed. Most are deployed as online root, offline root and issuing CAs.  
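
For readers unfamiliar with how an application talks to an HSM, the following minimal sketch uses the PKCS#11 interface via the python-pkcs11 package. The module path, token label and PIN are assumptions (a SoftHSM test token is shown), not a reference to any particular vendor's device, and key material stays inside the module.

```python
import pkcs11

# Load the vendor's PKCS#11 module (SoftHSM shown here for testing).
lib = pkcs11.lib("/usr/lib/softhsm/libsofthsm2.so")    # assumed module path
token = lib.get_token(token_label="demo-token")         # assumed token label

with token.open(user_pin="1234", rw=True) as session:   # assumed PIN
    # The AES key is generated inside the HSM and never leaves it.
    key = session.generate_key(pkcs11.KeyType.AES, 256, label="app-data-key")

    iv = session.generate_random(128)                    # 128-bit IV for AES
    ciphertext = key.encrypt(b"sensitive record", mechanism_param=iv)
    plaintext = key.decrypt(ciphertext, mechanism_param=iv)
```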

You can download a full copy of the report at Entrust’s website.

They’re finding dead bodies outside scam call centers; it’s time to sound the alarm on fraud

Bob Sullivan

“The cartel just very quickly, easily, and efficiently made an example of them by leaving their body parts in 48 bags outside of the city….They’re good at making high profile, gruesome examples of those who would defy them.”

I’ve spent many years writing about Internet crime, so I don’t spook easily.  After working on this week’s podcast, I’m spooked.

For the last year or two, I’ve had a gathering sense of doom about the computer crime landscape. I hear about scams constantly, but something has seemed different lately. The dollar figures seem higher, the criminals more relentless, the cover stories far more sophisticated. Thanks to fresh reporting and statistics, I am now fairly certain I’m not being paranoid. Increasingly, Internet scams are being run by organized crime groups that combine the dark side of street gangs with Fortune 500 sales tactics. I will share numbers in a moment, but stories are always needed to make a point this important, and that’s why we bring you “James’ ” harrowing tale this week. He wanted to sell an old, useless timeshare, but instead had $900,000 stolen from him — by the New Generation Jalisco Cartel in Mexico. That same group was blamed for murdering call center workers and spreading their body parts around Jalisco.

 This episode offers a rare chance to hear a scam in action. James recorded some of his calls with criminals and shared them with us.  ‘Show me, don’t tell me’ is the oldest advice in storytelling, and that’s why I really hope you’ll listen.  As you hear criminals who go by the names “Michael” and “Jesus” badger and manipulate James, your skin will crawl, as mine did. But I hope it will also place a memory deep in your limbic brain, so when you inevitably find yourself on the phone with such a criminal one day, an autonomic defensive reaction will kick in.

For this episode, I also spoke with a remarkable journalist named Steve Fisher. An American from rural Pennsylvania, he worked on farms as a youth, learned a lot about the plight of migrant workers, and that led him to take a post as an investigative journalist in Mexico City covering crime gangs. Fisher recently wrote about a victim like James who thought he was unloading an unwanted timeshare, but instead had $1.8 million stolen during a decade of interactions with the cartel. In that victim’s case, about 150 different cartel “workers” interacted with him. I can’t begin to stress how vast this conspiracy is — how detailed the cartel’s record-keeping must be — in order to carry on this kind of ongoing crime. As we point out in the story, timeshare scams have become so profitable that this dangerous Mexican cartel is trading in drug running operations for call centers. The gruesome methods of control remain, however.

This model is replicated around the world.  From India, there are tech support scams. From Jamaica, we get sweepstakes fraud. From Southeast Asia, cryptocurrency scams.  From Africa, romance scams.

(Of course, there are scams operated in the U.S., too, but those criminals don’t enjoy the natural protection that international boundaries and jurisdictional challenges provide).

I know most of us imagine scam criminals sitting in dark, smoke-filled boiler rooms placing hundreds of calls every day, desperately hunting for individual victims. That’s not how it works anymore. Scam call center “employees” work in cubicles (though in some cases, they are victims of human trafficking). They have fine-tuned software; they work from lead lists; they have well-researched sales scripts; they have formalized training. And they succeed, very often.

The numbers bear this out. Theft through fraud has surged over the last five years, with losses jumping from $2.4 billion in 2019 to more than $10 billion in 2023. Of course, many scam losses are never reported, so the real number could easily be four or five times that.

But I trust my own ears, and you should too.  Recently, I have heard a pile of stories from victims in my larger social circle.  Plenty of near misses — friends who tell me they got a call from a “sheriff” about an arrest summons that was so believable they were driving to a bitcoin ATM before something triggered skepticism.  Unfortunately, I hear plenty of heartbreaking stories too, of people who bought gift cards or sent crypto before that skepticism kicked in.

I want you to listen for these stories in your life. Look for them in your social feeds, ask for them at family parties. I bet you leave this exercise just as concerned as I am. Scams have become big businesses, operated by large, sophisticated crime gangs all over the world. It’s time to talk with your friends and family about this.

We can’t educate our way out of this problem. There’s a lot more that U.S. financial institutions, regulators, and law enforcement can do to slow the massive growth of fraud. But at the moment, you are the best defense for yourself and the people you love.

To that end, I would like to suggest you listen to this week’s episode and share it with people you care about. I can tell you that scams are up, and that criminals are so persuasive anyone can be vulnerable. But there is nothing like hearing it for yourself.

Be careful out there.

You can listen to part 1 of our series by clicking here. And part two is at this link.

The ‘protected health information’ crisis in healthcare

The PHI crisis in healthcare is putting patient safety and privacy at risk. Healthcare organizations represented in this research experienced an average of 74 cyberattacks in the past two years and almost half of respondents (47 percent) say these cyberattacks resulted in the loss, theft or data breach of PHI. Over the past two years, the cost to detect, respond and remediate PHI cyberattacks was $2.6 million and another $1.6 million was spent to hire staff, paralegals and technologies to determine the cost to patients.

Protected health information (PHI) is any information in the medical record or designated record set that can be used to identify an individual and was created, used, or disclosed when providing a health care service such as diagnosis or treatment.

The purpose of this research, sponsored by Tausight and independently conducted by the Ponemon Institute, is to understand the challenges healthcare organizations face in securing PHI data. Ponemon Institute surveyed 551 US IT and IT security practitioners who are in the following healthcare organizations: hospitals (37 percent of respondents), healthcare service providers (23 percent of respondents), clinics (21 percent of respondents) and healthcare systems (19 percent of respondents). The primary responsibilities of respondents are managing IT and IT security budgets, assessing cyber risks to PHI, setting IT or IT security priorities and selecting vendors and contractors.

Healthcare organizations’ ability to protect patient PHI is in critical condition. Organizations are losing control of the risk because of the lack of visibility into the enormous amount of PHI outside EHR. There are two serious root causes of the PHI crisis. According to 58 percent of respondents, their organizations are unable to determine how much PHI exists outside of EHR, where it is and how it is being accessed. Fifty-five percent of respondents also say their organizations are at risk because of the excessive presence of PHI across their data centers, endpoints and email accounts. On average, organizations have 30,030 network-connected devices.

Findings that illustrate the PHI crisis in healthcare 

  • Organizations lack the budget to invest in PHI protection technologies (52 percent of respondents) and the necessary expertise to manage PHI protection technologies (48 percent of respondents). 
  • Current legacy technologies have difficulty protecting the enormous amounts of PHI across organizations’ systems (66 percent of respondents) and identifying PHI on servers and endpoints to understand what to put in secure storage (69 percent of respondents).
  • Migration to the cloud and collaboration tools have increased risks to PHI (52 percent of respondents).
  • The level of security risk to PHI created by remote care and accessing or transmission of PHI outside the firewall is very high, according to 57 percent of respondents.
  • Current technologies are not improving visibility into PHI outside EHR. As a result, only 39 percent of respondents say their organizations have a high ability to detect and classify unstructured data and only 47 percent of respondents say their organizations have a high ability to detect and classify structured data wherever it exists throughout the expanding digital environment (see the sketch after this list for what such detection can involve).
  • Only 30 percent of respondents say their organizations have significant visibility into PHI located in the data center and endpoints where it is exchanged between doctors’ and patients’ systems or applications.
  • Most organizations say DLP and DSP software are not effective in improving visibility into PHI on endpoints, networks and in the cloud and providing visibility into data movement of PHI.
  • Once organizations have a PHI data breach, 71 percent of respondents say it is very difficult to assess how many patients were affected by the breach and almost half of respondents (47 percent) say their organizations are likely to overreport the number of patients affected because of the difficulty in determining the device or server that was compromised.
  • The negative consequences of a PHI data breach are exacerbated because it can take an average of more than two months to recover, remediate and assess the impact to PHI and to be able to disclose the breach and notify affected patients.
  • Insiders put PHI data at risk. The most frequent types of insider negligence are accessing PHI on uncontrolled devices and accessing hyper-connected endpoints on networks with varying IT security standards. Other frequent incidents are sending emails with unencrypted PHI and moving PHI to an unknown USB drive, where the data is then lost.
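
To illustrate what detecting and classifying PHI in unstructured data can involve (referenced in the visibility finding above), here is a heavily simplified, hypothetical sketch that scans text for a few identifier patterns. Real PHI discovery tools combine many more patterns, context and clinical terminology; the regexes and the MRN format below are illustrative assumptions only.

```python
import re

# Illustrative identifier patterns; real tools use far richer detection logic.
PHI_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "mrn": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),  # hypothetical MRN format
}

def scan_for_phi(text: str) -> dict:
    """Return a count of potential PHI identifiers found in a block of text."""
    return {name: len(pattern.findall(text)) for name, pattern in PHI_PATTERNS.items()}

sample = "Patient MRN: 00123456, call 555-867-5309, SSN 123-45-6789."
print(scan_for_phi(sample))   # {'ssn': 1, 'phone': 1, 'mrn': 1}
```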

Click here to watch a webinar about these findings with Larry Ponemon and David Ting — CTO and Co-Founder of Tausight, which helps healthcare organizations protect data.

When fraud turns fatal — Uber driver shot after ‘grandparent scam’ call

Bob Sullivan

When consumers and criminals interact, you just never know how combustible a situation can become. A recent story out of Ohio is a reminder that any scam can get very serious and lead to devastating consequences.

An Ohio man who had been communicating with criminals attempting to commit a “grandparent scam” shot and killed an Uber driver that he said he believed was part of the scam; he has been indicted for murder and pleaded not guilty.

Police say 81-year-old Michael Brock told them he had spent hours talking on the phone with someone who claimed that his nephew was in jail and needed bail money. Brock allegedly believed that Lo-Letha Hall, 61, had come to his house to pick up the money. He accused her of being part of the scam, and when she tried to leave, he fatally shot her.

Local news reports indicate Hall was an Uber driver simply picking up a package for what she thought was a normal delivery.

“Upon being contacted by Ms. Hall, Mr. Brock produced a gun and held her at gunpoint, making demands for identities of the subjects he had spoken with on the phone,” the sheriff’s office said, according to the Associated Press. Hall was unarmed and unthreatening, the sheriff’s office alleges in that story. A video posted on a local news site shows her walking away from Brock as he threatens her with a gun.

“I’m sure glad to see you guys out here because I’ve been on this phone for a couple hours with this guy trying to say to me I had a nephew in jail and had a wreck in Charleston and just kept hanging on and needing bond money,” Brock said to police, according to the Associated Press. “And this woman was supposed to get it.”

According to a memorial page set up for Hall, she was retired.

Whenever I speak in front of cybersecurity and fraud groups, I try to remind them how important their work is. There are plenty of reasons to take cybersecurity and financial fraud seriously — even crimes that might seem like common thefts can turn very serious, or be part of wider conspiracies. Even though it can feel exhausting and at times fruitless, all of us must continue the fight against scams and cybercrime.

Surprise! Here’s another attempt at a federal privacy law. Does it have a chance?

Bob Sullivan

Just when you think Congress isn’t going to be able to do anything constructive for the rest of this presidential election year, out comes a bipartisan proposal called the American Privacy Rights Act of 2024. Its name is delightfully simple; it arrives after decades of failed attempts to create federal protections for American privacy rights. Given that its authors, Rep. Cathy McMorris Rodgers, R-Wash., and Sen. Maria Cantwell, D-Wash., chair committees important to its passage, the proposed legislation seems to have a chance. It’ll get a hearing almost immediately: Rodgers has scheduled discussion for April 17 in the House Committee on Energy and Commerce.

The proposal attempts a tight-rope act on controversial elements that have stalled previous federal law efforts — namely state law pre-emption and the ability for consumers to sue bad actors, known as a private right of action.  One at a time:

Corporations have long demanded pre-emption, arguing that they shouldn’t have to deal with 50 different state standards when crafting policies. That argument has grown increasingly tenuous as time has passed and statehouse after statehouse has done the work to pass its own privacy law. (Check out this IAPP tracker.) The count sits at 15 now, with more coming soon. Those state lawmakers never look kindly on their hard work being discarded by Congress. As a compromise, APRA would pre-empt state laws except when it wouldn’t. There are exceptions for consumer protection rules, for example.

Corporations and some conservatives also dislike the private right of action previous laws have included. Here’s why. The Telephone Consumer Protection Act allows consumers who receive unwanted calls to sue for $500 a pop under certain circumstances. That’s a powerful incentive to prevent spam phone calls. Sometimes, it can be abused by class action lawyers. The alternative is to leave enforcement to state attorneys general or other government agencies, which aren’t always responsive to consumer complaints. APRA would create a TCPA-like right, and even set aside mandatory arbitration clauses for cases that involve minors or a “significant privacy harm.” You’d have to expect this provision will get major pushback. Without citing it specifically, Sen. Ted Cruz (R-Texas) issued a statement saying, in part, “I cannot support any data privacy bill that empowers trial lawyers.”

So, roadblocks remain. Still, the arrival of the American Privacy Rights Act is a welcome surprise and…who knows? Perhaps Congress will take a step into the 21st Century.

For more reading, I found “10 Things to Know about APRA” at Wiley.com very helpful, as was this analysis by IAPP.

CLICK HERE to read the American Privacy Rights Act discussion draft.

CLICK HERE to read the section-by-section of the discussion draft.

The 2024 Study on the State of AI in Cybersecurity

Sponsored by MixMode, the purpose of this research is to understand the value of artificial intelligence (AI) in strengthening an organization’s security posture and how organizations are integrating these technologies into their security infrastructure. A challenge faced by organizations is the ability to overcome AI’s tendency to increase the complexity of their security architecture.

Ponemon Institute surveyed 641 IT and security practitioners in organizations that are at some stage of AI adoption. All respondents are involved in detecting and responding to potentially malicious content or threats targeting their organization’s information systems or IT security infrastructure. They also have some level of responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.

AI improves IT security staff’s productivity and the ability to detect previously undetectable threats — 66 percent of respondents believe the deployment of AI-based security technologies will increase the productivity of IT security personnel. Given the oft-cited problem of a shortage of IT security expertise, this can be an important benefit. According to the research, an average of 34 security personnel are dedicated to the investigation and containment of cyber threats and exploits. About half of respondents (49 percent) say dedicated security personnel have specialized skills relating to the supervision of AI tools and technologies.

Sixty-six percent of respondents say their organizations are using AI to detect attacks across cloud, on-premises and hybrid environments. On average, organizations receive 22,111 security alerts per week; an average of 51 percent of these alerts can be handled by AI without human supervision and an average of 35 percent are investigated. An average of 9,854 false positives are generated by security tools in a typical week, and an average of 12,009 unknown threats go undetected. Seventy percent of respondents say AI is highly effective in detecting previously undetectable threats.
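
A quick back-of-the-envelope calculation, using only the averages reported above (no additional survey data), shows what those percentages imply for a typical week of alert volume.

```python
alerts_per_week = 22_111          # average weekly security alerts reported above
handled_by_ai = 0.51              # share handled by AI without human supervision
investigated = 0.35               # share investigated

print(round(alerts_per_week * handled_by_ai))   # ~11,277 alerts closed by AI alone
print(round(alerts_per_week * investigated))    # ~7,739 alerts investigated
```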

“AI is a game-changer for cybersecurity, as it can automate and augment the detection and response capabilities of security teams, as well as reduce the noise and complexity of security operations,” said John Keister, CEO of MixMode. “However, AI also poses new challenges and risks, such as the threat of AI being used for adversarial attacks and the need for specialized operator skills. MixMode understands the complexity of AI and delivers automated capabilities to revolutionize the cybersecurity landscape through our patented self-learning algorithm that can detect threats and anomalies in real-time at high speed and scale. This helps enterprises rapidly recognize new malware and insider risks to help strained security teams automate mundane tasks and focus on higher-level defenses.”

Following are the findings that describe the value of AI and the challenges when leveraging AI to detect and respond to cyberattacks.

 The value of AI

 AI is valuable when used in threat intelligence and for threat detection. In threat intelligence, AI is mainly used to track indicators of suspicious hostnames, IP addresses and file hashes (65 percent of respondents). In threat detection, it creates rules based on known patterns and indicators of cyber threats (67 percent of respondents). Other primary uses in threat intelligence are the results of cybercrime investigations and prosecutions (60 percent of respondents) and Tactics, Techniques & Procedures (TTP) reports. TTPs are used to analyze an APT’s operation or to profile a certain threat actor.
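
The "rules based on known patterns and indicators" use case is essentially indicator matching. The sketch below shows a bare-bones, hypothetical version of it; the indicator values and event format are made up for illustration, and AI-based detection layers statistical and behavioral models on top of this kind of baseline.

```python
# Hypothetical indicators of compromise (IOCs): hostnames, IPs, file hashes.
KNOWN_BAD = {
    "hostnames": {"malicious.example.net"},
    "ips": {"203.0.113.66"},
    "sha256": {"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"},
}

def match_event(event: dict) -> list[str]:
    """Return which IOC categories a single log event matches."""
    hits = []
    if event.get("hostname") in KNOWN_BAD["hostnames"]:
        hits.append("hostname")
    if event.get("dest_ip") in KNOWN_BAD["ips"]:
        hits.append("ip")
    if event.get("file_sha256") in KNOWN_BAD["sha256"]:
        hits.append("hash")
    return hits

event = {"hostname": "malicious.example.net", "dest_ip": "198.51.100.7"}
print(match_event(event))   # ['hostname']
```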

The benefits of AI include an improved ability to prioritize threats and vulnerabilities and the identification of application security vulnerabilities. Fifty percent of respondents say their security posture improves because of being better able to prioritize threats and vulnerabilities and 46 percent of respondents say AI identifies application security vulnerabilities.

Most AI adoption is at the early stage of being mature, but organizations are already reaping benefits. Fifty-three percent of respondents say their organizations’ use of AI is at the early stage of adoption. That means the AI strategy is defined and investments are planned and partially deployed. Only 18 percent of respondents say AI in cybersecurity activities is fully deployed and security risks are assessed. At that stage, effectiveness is measured with KPIs and C-level executives are regularly updated about how AI is preventing and reducing cyberattacks.

AI effectiveness is measured by the financial benefits these technologies deliver. Sixty-three percent of respondents say their organizations measure the decrease in the cost of cybersecurity operations, 55 percent of respondents say they measure increases in revenue and 52 percent of respondents say they measure productivity increases in the SOC team’s ability to detect and respond to threats.

Defensive AI is critical to protecting organizations from cyber criminals using AI. Fifty-eight percent of respondents say their organizations are investing in AI to stop AI-driven cybercrimes. Sixty-nine percent of respondents say defensive AI is important to stopping cybercriminals’ ability to direct targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. The basic rules model refers to a traditional approach where predefined rules and signatures are used to analyze and detect threats.

 The challenges of AI deployment

 Difficulty in applying AI-based controls that span across the entire enterprise and interoperability issues are the two main barriers to effectiveness. Sixty-one percent of respondents say a deterrent to AI effectiveness is the inability to apply AI-based controls that span across the entire enterprise and 60 percent of respondents say there are interoperability issues among AI technologies.

However, AI adoption is complicated by the difficulty of integrating AI-based security technologies with legacy systems (65 percent of respondents) and the need to simplify the security architecture to obtain maximum value from AI-based security technologies (64 percent of respondents). Seventy-one percent of respondents say digitization is a prerequisite and critical enabler for deriving value from AI.

Organizations are struggling to identify areas where AI would create the most value. Only 44 percent of respondents say they can accurately pinpoint where to deploy AI. However, 62 percent of respondents say their organizations have a high ability to identify where machine learning would add the most value. A machine learning (ML) model in cybersecurity is a computational algorithm that uses statistical techniques to analyze and interpret data to make predictions or decisions related to security.
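
As a concrete, deliberately simplified example of such a model, the sketch below trains an unsupervised anomaly detector on a few numeric features of network sessions. It assumes scikit-learn and synthetic data, and is not the approach of any vendor named in this report.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" sessions: [bytes_sent, bytes_received, duration_seconds]
normal = rng.normal(loc=[5_000, 20_000, 30], scale=[1_000, 5_000, 10], size=(1_000, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A session that sends far more data than usual should score as anomalous.
suspicious = np.array([[900_000, 1_000, 25]])
print(model.predict(suspicious))   # [-1] means flagged as an anomaly
print(model.predict(normal[:3]))   # mostly [1], i.e., treated as normal
```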

Eighty-one percent of respondents say generative AI is highly important. Generative AI refers to a category of AI algorithms that generates new outputs based on the large language models they have been trained on. Sixty-nine percent of respondents say it is highly important to integrate advanced analysis methods, including machine learning and/or behavioral analytics.

Outside expertise is needed to obtain value from AI. Fifty-four percent of respondents say their organizations need outside expertise to maximize the value of AI-based security technologies. Fifty percent of respondents say a reason to invest in AI is to make up for the shortage in cybersecurity expertise.

The lack of budget and internal expertise are barriers to getting value from AI. The top two reasons organizations are not leveraging AI are insufficient budget for AI-based technologies (56 percent of respondents) and the lack of internal expertise to validate vendors’ claims (53 percent of respondents). Currently, 60 percent of respondents say 25 percent or less of the average annual total IT security budget of $28 million is allocated to AI and ML investments. Forty-two percent of respondents say there is not enough time to integrate AI-based technologies into security workflows.

Many employees outside of IT and IT security distrust decisions made by AI. Fifty-six percent of respondents say it is very difficult to get rank-and-file employees to trust decisions made by AI. Slightly more than half (52 percent of respondents) say it is very difficult to safeguard confidential and personal data used in AI. Forty-six percent of respondents say it is very difficult to comply with privacy and security regulations and mandates. As the deployment of AI matures, organizations may become more confident in their ability to safeguard personal data and comply with privacy and security regulations and mandates.

Despite AI using personal data, few organizations have privacy policies applied to AI. Sixty-five percent of respondents say confidential consumer data is used by their organizations’ AI. However, only 38 percent of respondents say their organizations have privacy policies specifically for the use of AI. Of those that do have policies, 44 percent of respondents say they conduct regular privacy impact assessments and 41 percent of respondents say their organizations appoint a privacy officer to oversee the governance of AI privacy policies. Only 25 percent of respondents say they work with vendors to ensure “privacy by design” in the use of AI technologies.

Organizations are at risk because consumer data is often used by organizations’ AI. According to the research, organizations are having difficulties in safeguarding confidential data. However, 65 percent of respondents say consumer data is being used by their organizations’ AI. Without having the needed safeguards in place, consumer data is at risk of being breached. Seventy percent of respondents say analytics are used by AI.

Organizations should take a unified approach with an organizational task force to manage risk. According to the findings, few organizations have an enterprise-wide strategy for understanding where AI adds value. Less than half of respondents (49 percent) have an organizational task force to manage AI risk and only 37 percent of respondents have one unified approach to managing both AI and privacy security risks.

To read the rest of this study, visit MixMode.com

 

Ill-conceived TikTok ban is a missed opportunity; a real privacy law would be far superior

Bob Sullivan

In the age where the U.S. House of Representatives can’t really do anything, it has voted to ban TikTok in its current ownership structure. Color me unimpressed, and more than a bit concerned. I’d say the fact that Congress’ lower house was able to pass the legislation should give everyone pause.

Of course TikTok poses a national risk. So do other platforms, which is why it’s time for Congress to pass comprehensive privacy and national security legislation that forces all platforms to handle personal data with care. But that’s…hard.  Grandstanding that you’ve been tough on TikTok is easy, so that’s what we’re getting.

TikTok’s owner, ByteDance, has been given the choice to divest itself from the popular social media service. I hope you’ll think it’s strange that Congress can force such a sale …. of a single company. But setting that aside for the moment, it’s hard to imagine that doing so will solve the problem everyone seems to agree exists — that the Chinese government can access TikTok’s intimate data about its users. Would a sale really stop that? And what of China’s ability to buy such data from any one of hundreds of data brokers in the U.S. who are willing to sell such information to the highest foreign bidder? (Duke University has tested this theory.)

And as for Chinese ownership, it must be asked, why stop with TikTok? Anyone who’s been online in the past six months has seen the near-ubiquitous ads for a shopping service named Temu and its “shop like a billionaire” tagline. That’s because this Chinese-owned firm has spent billions of dollars advertising with companies like Meta/Facebook.  But Temu has been sued for, essentially, loading its software up with spyware.  You should read this analysis for yourself, but here’s a highlight: “The app has hidden functions that allow for extensive data exfiltration unbeknown to users, potentially giving bad actors full access to almost all data on customers’ mobile devices.”

A well-written law would stop a company from doing what Temu is (accused of) doing before it starts, or make it very easy to shut it down. Instead, Congress is doing something so arbitrary that it has made TikTok into a sympathetic character, which would have seemed impossible a few months ago.

Hopefully, cooler heads will prevail in the Senate, and a better law can emerge from this rare moment of focus on data privacy. If not, I fear this legislation could delay passage of a real, comprehensive federal privacy law.   Congress could mistakenly believe it has solved a problem and turn its meager attention in other directions; and the law could backfire so badly that lobbyists could point to it for years as proof no law should ever be passed that limits big tech’s powers.

I found Alex Stamos’ appearances on NBC networks to be informative; you can watch them here. 

 

Managing Access & Risk in the Operational Technology (OT) Environment

There is growing awareness and concern that the increasing sophistication and severity of the various types of potential cyberattacks against the OT environment are putting the critical infrastructure that everyone depends upon at serious risk. In some cases, these incidents could lead to the loss of human life. In 2021, a ransomware attack against the Colonial Pipeline directly impacted Americans on the east coast who were confronted with the disruption of fuel supplies.

(To access a webinar Larry participated in about this research, click this link).

In response to such threats, the Cybersecurity & Infrastructure Security Agency (CISA) has recently announced a pilot program “designed to deliver cutting-edge cybersecurity services on a voluntary basis to critical infrastructure entities most in need of support”.

As defined in the research, OT is the hardware and software that monitors and controls devices, processes and infrastructure and is used in industrial settings. OT devices control the physical world while IT systems manage data and applications. Sponsored by Cyolo, the purpose of this research is to learn important information about organizations’ security and control procedures designed to mitigate serious risks in the OT environment. All respondents are knowledgeable about their organizations’ approach to managing OT system access and risk. The average annual IT security budget is $55 million and an average of $11.5 million of the budget is allocated to OT security activities.

Uncertainty about the number and types of assets in the OT environment puts organizations at significant risk. A key takeaway from the research is that organizations lack visibility into the industrial assets in their OT environment, making it difficult to ensure they are secure from potential cyberattacks. Only 27 percent of respondents say their organizations maintain an inventory of the industrial assets in their OT environment. Worse, 38 percent of respondents say their organizations have an inventory, but it may not be accurate or current.

Following are findings that illustrate the importance of aligning IT and OT priorities and improving communication between the two functions.

 The lack of alignment between IT and OT can result in conflicting priorities about the importance of securing the OT environment despite the risk. As shown in the findings, secure access is a very high or high priority for only slightly more than half of respondents (51 percent), and only slightly more than half (55 percent of respondents) say their organizations are very effective (33 percent) or highly effective (22 percent) in reducing risks and security threats.

Without regular communication between IT and OT teams, the goal of collaboration and alignment is difficult to achieve.  Collaboration between the two teams is critical to ensuring consistent policies and processes are in place to secure access between IT and OT systems. However, only 39 percent of respondents say collaboration between the two teams is significant. Thirty-eight percent of respondents say the only time the teams communicate is on an ad-hoc basis (19 percent) or when a security incident occurs (19 percent). Only 16 percent of respondents say the teams communicate daily (6 percent) or weekly (10 percent).

OT and IT teams share responsibilities without regular communication. As discussed above, more regular communication is needed. OT cybersecurity responsibilities are mostly shared by the IT and OT teams (39 percent of respondents). Thirty-two percent say IT is solely responsible for managing the OT environment and 30 percent say OT is solely responsible for managing the OT environment.

Communication with senior leadership and boards of directors is also rare and may contribute to respondents’ concerns about the allocation of needed resources. Thirty percent of respondents say senior leadership and/or board members are updated on the OT security posture, policies and practices in place on an ad-hoc basis. Only 23 percent say they communicate frequently (10 percent say monthly and 13 percent say quarterly). Without being briefed on a regular basis, it may be difficult to convince senior leaders of the importance of increasing budget and in-house expertise.

Organizations are making progress in achieving secure connectivity between IT and OT systems. Most organizations (81 percent) say they have a goal of achieving convergence to be able to transmit data in one or both directions. Thirty-three percent of respondents say their organizations have established policies, tools, governance and reporting in place to control and monitor connectivity between IT and OT systems. Another 24 percent of respondents say that they have some policies in place to govern access between IT and OT systems.

Securing the OT infrastructure is the responsibility of senior executives in OT and IT. The two roles most involved in securing the OT infrastructure are the OT vice president/director (32 percent of respondents) and the CIO or CTO (29 percent of respondents). To strengthen the security posture in the OT environment, close collaboration between these two roles is needed, including the deployment and integration of traditional IT security solutions as well as industrial control system (ICS) protocols and assessments.

The following findings illustrate the progress being made in IT/OT convergence.

To advance IT/OT convergence, more organizations should adopt a blend of IT and OT security solutions. When asked how their organizations plan to introduce new tools to better secure the OT infrastructure, only 32 percent of respondents say their organizations are using a blend of IT and OT security solutions and 19 percent of respondents say they plan to expand existing IT security solutions to secure the OT infrastructure.

Convergence is considered important, but organizations are concerned about its impact on the OT environment. While IT/OT convergence can improve connectivity and the OT environment, more than half of organizations represented are very or highly concerned about the impact of convergence on the availability of IT systems/services (52 percent of respondents) and the safety and uptime of the OT environment (56 percent of respondents).

Convergence is considered to reduce security risks and improve the ability of IT and OT teams to collaborate. The benefits when IT/OT connectivity is increased include a reduction in security risks (59 percent of respondents), improvement in the ability of IT and OT teams to collaborate (57 percent of respondents) and the ability to respond to unplanned asset downtime quickly (38 percent of respondents).

To achieve convergence, organizations need to have the budget, the ability to ensure security and collaboration between the IT and OT teams. The top three challenges to connecting IT/OT environments are the lack of budget (42 percent of respondents), security risks (35 percent of respondents) and siloed teams (32 percent of respondents). Those organizations that have no plans for connectivity blame the lack of budget, siloed teams and pushback from the OT team.

The following findings reveal the steps needed to improve secure access to the OT environment by internal teams and third parties.

The OT environment is heavily regulated and should drive investments in security solutions and in-house expertise to reduce risks and security threats. Eighty-one percent of respondents say their organizations must comply with regulations today (59 percent) or in the future (25 percent). Noncompliance can potentially result in costly fines.

The ease of accessing the OT environment by both internal teams and third parties using current tools does not receive high marks. Access is important to be able to extend IT and security tools into OT environments, to observe processes and/or check sensors and to increase productivity. However, only half of respondents (49 percent) say the access experience is positive. Similarly, only 43 percent of respondents say the vendor/third-party experience of accessing OT systems with current tools is very good or excellent.

The importance of third parties in maintaining and supporting OT/ICS environments should make securing their access a priority. Third parties include all types of external suppliers, partners, service providers and contractors who perform important work for the organization but are not direct employees. Because of the complexity and specialized systems in the OT/ICS environments it is important to have third parties who can provide product/system support and maintenance.

According to the findings, an average of 77 third parties/vendors are authorized to connect to the OT systems represented in this research. Of the 73 percent of respondents who say their organizations permit access to the OT environment, 30 percent say they limit vendor/third-party access to on-site only and 43 percent of respondents say third parties can access OT systems both on-site and remotely. Only 27 percent of respondents say their organizations do not allow third-party access.

Organizations need to address the risk of third parties’ unauthorized access. Forty-four percent of respondents say the top challenge is preventing unauthorized access and 40 percent of respondents say it is to keep third party access secure. Another top challenge is the lack of alignment between IT and OT security priorities regarding third party risks.

Allowing third party access is needed to maintain operations and prevent downtime, but there should be greater awareness and attention to potential risks.  Only 44 percent of respondents say their organizations are very or highly concerned about vendors/third parties accessing its OT environment.

To read the rest of this report, visit Cyolo.com at this link. You can also access a webinar that Larry participated in discussing the report. 

The State of Cybersecurity Insurance Adoption

The cost of a single data breach, ransomware attack or other security incident can adversely impact the most solid financial balance sheet. The growing threat from sophisticated cybercriminals targeting organizations of all sizes elevates cybersecurity insurance from an IT security concern to a critical business priority, demanding the attention of senior leadership and boards of directors. But what are the limitations of these cybersecurity policies and what are the benefits and hurdles to purchasing a policy that protects organizations? In the event of a cyberattack, how satisfied are organizations with their insurers’ response? Sponsored by Recast Software, the purpose of this research is to address these questions and help organizations prepare for the purchase of insurance.

It’s about the money. Respondents do not expect any decrease in cyber risks targeting their organizations. Instead, according to 75 percent of respondents, their organizations’ exposure will increase (47 percent) or at best stay the same (28 percent). As cyberattacks increase in severity and sophistication, the potential for a significant financial consequence is becoming more likely. According to 61 percent of respondents, the total financial impact of all security exploits and data breaches experienced by their organizations since purchasing insurance averaged $21 million.

The top two reasons for purchasing insurance are the increasing number of cybersecurity incidents (41 percent of respondents) and concerns about the financial impact (40 percent of respondents). According to the research, 65 percent of respondents say their organizations are purchasing limits ranging from $6 million to more than $100 million. However, 50 percent of respondents say it is difficult to comply with insurers’ requirements. More than half (51 percent) of respondents say insurers require regular scanning for vulnerabilities that need to be patched.

Ponemon Institute surveyed 631 IT and IT security practitioners in the United States who are familiar with cyber risks facing their companies and have knowledge about their organizations’ use of cybersecurity insurance. Seventy-six percent of respondents say their organizations have completed the purchase and 24 percent of respondents say their organizations are in the process.

 

In this section, we provide an analysis of the research. The complete findings are presented in the Appendix of this report. The report is organized according to the following topics.

 

  • What keeps organizations’ IT security posture from being strong?
  • How helpful is cybersecurity insurance in protecting organizations from adverse financial consequences?
  • Dealing with the hurdles organizations face when purchasing cybersecurity insurance

 What keeps organizations’ IT security posture from being strong?

 Technology and governance challenges are affecting the ability to improve organizations’ security posture. Less than half (49 percent) of respondents rate their IT security posture as very effective at mitigating risks, vulnerabilities and attacks across the enterprise. The primary reasons are the ineffectiveness of security technologies and the complexity of the IT security environment.

Other challenges that need to be addressed are having a complete inventory of third parties with access to their sensitive and confidential data, keeping senior management up to date about threats facing their organizations and convincing management that cyberattacks are a significant risk.

Understanding the level of cyber risk is important because organizations realize cyber threats are not decreasing. Sixty-three percent of respondents say they assess the level of cyber risk to their organizations. According to 75 percent of respondents, cyber risks will increase (47 percent) or stay the same (28 percent).

The internal assessments are informal (23 percent) or formal (21 percent). However, 37 percent of respondents say their organizations do not do any type of assessment (21 percent) or rely on intuition or gut feel (16 percent). Only 19 percent hire an independent third party to conduct the assessment.

How helpful is cybersecurity insurance in protecting organizations from adverse financial consequences?

 Cybersecurity insurance can improve organizations’ security posture. As reported, 76 percent of respondents have completed the purchase of cyber insurance. On average, these organizations have held their policies for two years, which gives them an understanding of the benefits and effectiveness of cyber insurance.

Almost half (49 percent) of respondents say following the purchase of cybersecurity insurance their cybersecurity posture improved greatly or significantly. However, 48 percent of these respondents changed insurance companies. The primary reasons for the change were the cancellation of the policy or the high expense.

Since purchasing cybersecurity insurance, the threats to organizations have not decreased. While only 27 percent of respondents say cyberattacks have increased and only 17 percent of respondents say their IT security costs have increased, 45 percent and 44 percent of respondents, respectively, say cyberattacks and IT security costs have stayed the same.

Forty-three percent of respondents say cyber insurance coverage is sufficient with respect to coverage terms and conditions, exclusions, retentions, limits and insurance carrier financial security. Sixty-seven percent of respondents are extremely satisfied (23 percent), very satisfied (21 percent) or satisfied (23 percent) with coverage.

The financial consequences of all security exploits and data breaches experienced since the purchase of insurance average $21 million, which includes out-of-pocket expenditures such as ransomware payments, consultant and legal fees, and indirect business costs such as productivity losses, diminished revenues, legal actions, customer turnover and reputation damage. Sixty-one percent of respondents experienced a significantly disruptive security exploit or data breach since the purchase of cybersecurity insurance.

Fifty-three percent of respondents say their organizations filed a claim following the incident and an average of 46 percent of the losses were covered or approximately $9.7 million. When asked how satisfied their organizations were with the insurance company’s response to the claim, less than half (46 percent of respondents) were very or highly satisfied with the response.

And 65 percent of respondents say their organizations have experienced cyberattacks such as ransomware or denial of service and 61 percent of respondents say cyberattacks have resulted in the misuse or theft of business confidential information, such as intellectual properties.

Dealing with the hurdles organizations face when purchasing cybersecurity insurance

 Insurance companies’ assessment of organizations’ security posture is mainly focused on the existence of an adequate budget. Only half (50 percent) of respondents say the insurance company assesses their security posture. If they do, it is to determine if there is adequate budget (65 percent of respondents). Other factors included are evidence of security and training programs conducted (52 percent of respondents), effectiveness of incident response team (45 percent of respondents) and ability to detect and prevent cyberattacks (45 percent of respondents).

To read the rest of this report, visit the ReCastSoftware.com website.

Taylor Swift, the FCC deepfake ban, and why you are the last (only?) line of defense

Twitter (X) briefly blocked Taylor Swift searches in reaction to deepfake posts. A good, if brutal, response.

Bob Sullivan

Hold on tight, fellow humans, there’s artificial turbulence ahead.  Like it or not, the time has come to stop believing what you see, what you hear, and perhaps even what you think you know. Reality is indeed under attack and it’s up to us to preserve it.  The only way to beat back this futuristic nightmare is with old-fashioned skepticism.

Lately, it feels like all anyone wants to talk about is AI and how it’s going to make life much easier for criminals, and much harder for you.  I’ve annoyed several interviewers recently by saying I don’t believe the hype. There is not an avalanche of voice-cloning criminals out there manipulating victims by creating fake wailing kids claiming to need bail money.  The so-called grandparent scam has operated successfully for many years without AI.  But I think that misses the point. First of all, as many journalists have demonstrated (even me!) it’s trivial to create deepfakes now. An expert cloned my voice for $1. But more important, a recent offensive, vile Taylor Swift deepfake was viewed 47 million times before it was removed from most of social media.  This kind of violation is here, today, and it’s going to be very hard to stop.

There are celebrated efforts, of course. The FCC just made voice cloning in scams explicitly illegal, which is certainly welcome, but if FCC efforts to stop robocalling are a guide, AI scams won’t be stopped by this. There are also some high-tech efforts to separate what’s real from what’s fake, and that’s also welcome. Watermarking — even in audio files — can be used by software to declare items as AI-generated, so our gadgets can tell us when a Joe Biden video has been manipulated. Naturally, I wish tech companies had built such safety tools into their AI-generating software in the first place, but this kind of retrofitting is what we’ve come to expect from Big Tech.

I don’t have high hopes that an “AI-generated” label on a negative presidential candidate video is going to do much to stop the coming attack on reality, however. I’m afraid to say this, but it’s true: the problem, dear Brutus, lies not in our stars but in ourselves.

I am the last person to lump responsibility for the failures of billion-dollar tech companies onto busy human beings.  And that’s not what I’m doing here. I still want tech workers to speak up when managers ask them to make tools that can be used to hurt people. I still want regulators to staff up and lock down companies that behave recklessly.  But when it comes to defending reality, the truth is, we are on our own right now.  Human beings are going to have to develop radical inquisitiveness when it comes to things we see, hear, and feel while interacting with technology.

This is going to be hard. Many of us want to see a video of our least-favorite politician looking stupid.  A large number want to see “exclusive” video of famous people in….candid…moments.  We would love for them to contact us directly and offer to be our friend, or even our lover.

We have to help each other learn to resist these base urges, to choose reality over this dark fantasy world that’s being foisted on us.

As is often the case with tech crises, this problem isn’t really new. Marketers have always manipulated consumers. Propagandists have always lied to populations. Many dark periods of history can be blamed on large groups failing to exercise proper skepticism, their prejudices and predispositions used against them. What’s different about our time is the scale. As we learned back in 2016, a room full of typists half-way around the world can persuade thousands of Americans to attend real-world rallies. The tools for liars and criminals are very powerful; we have to respond with equal force.

I recently interviewed Professor Jonathan Anderson, an expert in artificial intelligence and computer security at Memorial University in Canada, about this problem, and he’s persuaded me that humans must react by adjusting to this new “reality” of un-reality. We must stop believing what we see and hear. And there is precedent for this. At the dawn of photography, many people believed that photos couldn’t lie. Most folks now know that it’s trivial to manipulate images, perhaps even on a subconscious level. If you see something that doesn’t look right (a man’s head on an animal’s body, say), your first instinct is to react as if Photoshop is the culprit. Hopefully, we’ll all engage in a learning curve now where this is how we react to any media that’s unexpected, be it a fake desperate child, a celebrity asking to meet with us, or a politician doing something foolish.

My fear is that people will still believe what they want to believe, however.  A “red” person will believe only “blue” fakes, and vice versa.  And that, in my view, is the greatest threat to reality right now.