The 2024 Study on the State of Identity and Access Management (IAM) Security

Keeping enterprise and customer data secure, private, and uncorrupted has never been more important to running a business. Data is the greatest asset in our information-driven world, and keeping it secure allows your organization to maintain healthy operations and reduce operational, financial, legal, and reputational risk.

The purpose of this report is to understand how organizations are approaching Identity and Access Management (IAM), to what extent they are adopting leading security practices, and how well they are mitigating identity security threats. Sponsored by Converge Technology Solutions, Ponemon Institute surveyed 571 IT and IT security practitioners in the US about their current IAM practices.

Keeping information safe has gotten more complex as technology has advanced, the number of users has grown, and the devices and access points they use have proliferated beyond the walls of the enterprise. Attackers see their opportunities everywhere.

Threat actors have also changed. The threat is no longer the “lone wolf” hacker; organized crime groups and bad-actor nation states now pose a constant threat to our data security. They have more sophisticated tools, expanding compute power, and AI. They’ve also had decades to hone their methods and are innovating daily.

Not a week goes by without a new data breach hitting the news cycle. A single successful attack can be painfully expensive. In the United States, the average cost per data breach was $9.48 million in 2023. And this is just the financial impact, which may not include reputational harm, loss of customers, and other hidden costs.

Surprisingly, stolen or compromised credentials are still the most common cause of a data breach. While there is an entire industry devoted to identifying and remediating breaches as or after they happen, the best defense is to prevent credential theft in the first place.

At the heart of prevention are the practices of Identity and Access Management (IAM). IAM ensures that only trusted users are accessing sensitive data, that usernames and passwords aren’t leaked or breached, and that the enterprise knows precisely who is accessing their systems, and where and when. Keeping the bad guys from stealing credentials severely limits their ability to cause harm. Good IAM and awareness training do exactly that.

The State of the Art of IAM

Like all technology practices, IAM has evolved over the years to become more sophisticated and robust as new techniques have been developed in keeping data and systems secure. Organizational adoption and enforcement vary greatly.

While some advanced businesses are already using endpoint privilege management and biometrics, there are still organizations with policies loose enough that using a pet’s name with a rotating digit as a password is still possible, or credentials are on sticky notes stuck to employee monitors.

For most companies, it all begins with the basics of authentication. A username and password alone are no longer sufficient authentication for the “primary” login to mission-critical systems. In legacy systems, where sophistication beyond usernames and passwords is not available, best practices must be taught and enforced rigorously. Practices such as very long passwords or passphrases and checking passwords against a blacklist must be put in place. These password basics are a starting point that many users still don’t universally adhere to.
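
Below is a minimal sketch of such baseline checks, enforcing a long passphrase and rejecting anything on a breached-password blocklist. The file name, length threshold, and helper names are illustrative assumptions, not a standard:

    MIN_LENGTH = 16

    def load_blocklist(path: str) -> set[str]:
        # One banned or known-breached password per line.
        with open(path, encoding="utf-8") as f:
            return {line.strip().lower() for line in f}

    def password_acceptable(password: str, blocklist: set[str]) -> bool:
        if len(password) < MIN_LENGTH:
            return False      # too short for a passphrase policy
        if password.lower() in blocklist:
            return False      # known-breached or banned password
        return True

    blocklist = load_blocklist("banned_passwords.txt")  # hypothetical file
    print(password_acceptable("correct horse battery staple", blocklist))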

The next critical step is adding Multi-Factor Authentication (MFA). Many cyberattacks are initiated by phishing, where credentials and personal information are obtained from susceptible users. Others are brute-force attacks, where the password is eventually guessed. MFA introduces a second level of authentication that isn’t password-based to thwart attackers who may have discovered the right password. If your organization hasn’t yet implemented MFA, it is past time to act. This additional layer of security can dramatically reduce the risk of credential compromise.
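
To make the mechanics concrete, here is a sketch of a time-based one-time password (TOTP, RFC 6238) second factor using only Python’s standard library; a production deployment would use a vetted authentication service rather than hand-rolled code:

    import base64, hmac, struct, time

    def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
        # Derive the current code from the shared secret and the time window.
        key = base64.b32decode(secret_b32)
        counter = int(time.time()) // interval
        digest = hmac.new(key, struct.pack(">Q", counter), "sha1").digest()
        offset = digest[-1] & 0x0F
        value = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(value % 10 ** digits).zfill(digits)

    def verify(secret_b32: str, submitted: str) -> bool:
        # Constant-time comparison of the submitted code with the expected one.
        return hmac.compare_digest(totp(secret_b32), submitted)

    secret = base64.b32encode(b"supersecretkey12").decode()
    print(verify(secret, totp(secret)))  # True within the same 30-second window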

If you’ve already deployed basic MFA, the next logical step is Adaptive Authentication, also called Risk-Based Authentication. This technique adds intelligence to the authentication flow: it provides strong security while reducing friction by tailoring authentication requirements to the risk and sensitivity of each specific request rather than using the same MFA prompt every time. This reduces MFA response fatigue for end users.
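
A toy sketch of the idea follows; the signals, weights, and thresholds are entirely hypothetical, and a real product would tune them from telemetry:

    from dataclasses import dataclass

    @dataclass
    class LoginContext:
        known_device: bool
        usual_country: bool
        sensitive_resource: bool
        failed_attempts: int

    def risk_score(ctx: LoginContext) -> int:
        score = 0
        if not ctx.known_device:
            score += 40                      # unrecognized device
        if not ctx.usual_country:
            score += 30                      # unusual location
        if ctx.sensitive_resource:
            score += 20                      # high-value target
        return score + min(ctx.failed_attempts, 5) * 5

    def required_step(ctx: LoginContext) -> str:
        s = risk_score(ctx)
        if s >= 60:
            return "deny"                    # block outright
        if s >= 30:
            return "mfa"                     # step-up challenge
        return "allow"                       # low risk: no extra prompt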

On the leading edge, organizations may choose to forgo passwords altogether and go passwordless to nearly eliminate the risk of phishing attacks. This method uses passkeys that may leverage biometrics (e.g., fingerprint, retina scan), hardware devices, or PINs, with cryptographic key pairs assigned to and integrated into the access devices themselves.
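
The core of the approach is an asymmetric key pair: the device keeps the private key and the server stores only the public key, so there is no shared secret to phish. A conceptual sketch using the Python ‘cryptography’ package (real passkeys wrap this idea in the WebAuthn/FIDO2 protocol):

    import os
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Enrollment: the key pair is generated on the user's device.
    device_key = ec.generate_private_key(ec.SECP256R1())
    server_public_key = device_key.public_key()   # only this leaves the device

    # Login: the server issues a one-time challenge...
    challenge = os.urandom(32)
    # ...the device signs it after a local biometric/PIN check...
    signature = device_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
    # ...and the server verifies the signature with the stored public key.
    try:
        server_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
        print("authenticated")
    except InvalidSignature:
        print("rejected")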

A layer on top of these methods is Identity Threat Detection and Response (ITDR). This technology gathers signals across the ecosystem to automatically deal with a credential breach (or the risk of one) as it happens and limit lateral movement. ITDR uses analytics and AI to monitor access points and authentication, identify anomalies that may represent attacks, and force re-authentication or terminate sessions before further damage can be done. These systems have sophisticated reporting and analytics to identify areas of risk across the environment.
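
As a toy example of one such signal, an “impossible travel” rule flags consecutive logins too far apart geographically for the elapsed time; the threshold and the response hook below are hypothetical stand-ins for an ITDR product’s session controls:

    from dataclasses import dataclass

    @dataclass
    class LoginEvent:
        user: str
        timestamp: float             # seconds since epoch
        km_from_last_login: float    # distance from the previous login

    MAX_TRAVEL_KMH = 900             # roughly airliner speed

    def impossible_travel(prev: LoginEvent, curr: LoginEvent) -> bool:
        hours = max((curr.timestamp - prev.timestamp) / 3600, 1e-6)
        return curr.km_from_last_login / hours > MAX_TRAVEL_KMH

    def respond(prev: LoginEvent, curr: LoginEvent) -> str:
        # A real ITDR tool would terminate the session or trigger
        # step-up authentication through the identity provider's API.
        return "force_reauth" if impossible_travel(prev, curr) else "ok"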

Regulatory Compliance: Identity Governance and Administration (IGA)

Regulatory non-compliance is another risk of failed IAM. Since regulations such as GDPR (General Data Protection Regulation), SOX (Sarbanes-Oxley), and HIPAA (Health Insurance Portability and Accountability Act) all set standards for data privacy, it is imperative that organizations identify, approve, and monitor access to critical data and systems.

The authoritative source of identity information for most organizations should be their HR system(s). A properly configured IGA solution utilizes this authoritative source as the starting point for determining access to an organization’s critical systems based upon the person’s role.

Beyond providing access, a viable IGA solution should also allow you to catalog and attest to user entitlements associated with mission-critical systems and systems with regulated data, creating an audit trail. Periodic reviews of access (e.g., quarterly, annually), Separation of Duty (SoD) policies, and event-driven micro-reviews should all be part of an IGA solution to ensure that compliance requirements are continually met.

Another avenue that is often exploited is over-privileged user accounts, where a user has access to data or systems that they don’t need, creating unneeded risks. User accounts can gain too much privilege in many ways, such as the retention of past privileges as individuals’ roles within the organization change. By managing lifecycle events with an IGA solution, organizations can minimize the risks of overprivileged accounts being compromised.

IGA solutions can enforce a policy of “least privilege,” where users are assigned only the privileges necessary to perform their duties. This approach, combined with SoD policy enforcement, can greatly reduce your data security risk profile.
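
For illustration, here is a minimal SoD check of the kind an IGA engine might run before granting a new entitlement; the conflicting pairs are invented examples:

    # Pairs of entitlements one user must never hold together.
    SOD_CONFLICTS = {
        frozenset({"create_vendor", "approve_payment"}),
        frozenset({"submit_expense", "approve_expense"}),
    }

    def violates_sod(current: set[str], requested: str) -> bool:
        proposed = current | {requested}
        return any(pair <= proposed for pair in SOD_CONFLICTS)

    user_entitlements = {"create_vendor"}
    print(violates_sod(user_entitlements, "approve_payment"))  # True: blocked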

Similarly, Role-Based Access Control (RBAC) can be a valuable methodology for managing the evolving access requirements of an organization. RBAC assigns access based on the role an employee plays within the organization instead of using mirrored account privileges, thereby limiting the scope of what they can access to what is necessary. RBAC can greatly reduce the timeline needed to roll out large changes to systems and data, allowing your organization to adapt quickly to the market and new requirements.
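
In its simplest form, RBAC hangs permissions off roles rather than individual users, so an access change is a role change. A minimal sketch with invented role and permission names:

    ROLE_PERMISSIONS = {
        "nurse":     {"read_chart", "update_vitals"},
        "physician": {"read_chart", "update_vitals", "prescribe"},
        "billing":   {"read_invoice", "issue_invoice"},
    }

    USER_ROLES = {"alice": {"physician"}, "bob": {"billing"}}

    def can(user: str, permission: str) -> bool:
        # Permission flows only through the user's assigned roles.
        return any(permission in ROLE_PERMISSIONS[role]
                   for role in USER_ROLES.get(user, set()))

    print(can("alice", "prescribe"))  # True
    print(can("bob", "read_chart"))   # False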

In addition to improving security, an IGA solution should also make life easier for users and administrators. An integrated IGA solution can take time- and labor-intensive manual provisioning operations and move them to automated request and fulfillment processes. The IGA solution not only performs the actions faster than manual provisioning activities, but it also ensures that the right resource is granted to the right person with the right approvals at the right time.

Privileged Access Management (PAM): The Rise of Enterprise Password Vaults

PAM systems control access and passwords for highly sensitive data and systems, such as those used by IT for root access, administrator access, command-line access on vital servers, machine user IDs, or other accounts where a breach could put the entire IT footprint in jeopardy. The key component of a PAM system is an enterprise password vault that monitors access activity on highly sensitive accounts.

The password vault does more than just safely store passwords. It updates them, rotates them, disposes of them, tracks their usage, and more. Users “borrow” privileged accounts temporarily for time-bound sessions, creating an abstraction between the person’s typical user account and the privileged account and minimizing the potential for privileged credential compromise. Once a vault is established, the next level is to automatically rotate passwords after they are borrowed. This ensures that only the current user knows the password, and only for a limited time.
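
A stripped-down sketch of that check-out/rotate cycle; the in-memory storage and rotation below stand in for a real vault’s hardened secrets store, lease enforcement, and audit logging:

    import secrets, time

    class Vault:
        def __init__(self):
            self._passwords: dict[str, str] = {}
            self._leases: dict[str, float] = {}   # account -> lease expiry

        def enroll(self, account: str):
            self._passwords[account] = secrets.token_urlsafe(24)

        def checkout(self, account: str, ttl_seconds: int = 900) -> str:
            # An audit record of who borrowed what, and when, goes here.
            self._leases[account] = time.time() + ttl_seconds
            return self._passwords[account]

        def checkin(self, account: str):
            self._leases.pop(account, None)
            self._passwords[account] = secrets.token_urlsafe(24)  # rotate

    vault = Vault()
    vault.enroll("root@db01")
    pw = vault.checkout("root@db01")  # time-bound borrow
    vault.checkin("root@db01")        # rotated: the borrowed password is now useless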

For highly regulated systems with extremely sensitive data, such as those found in healthcare and finance, security can go one step further and automatically proxy the privileged session so that the admin never even knows the username and password being used. These sessions can also be recorded as forensic evidence of the work performed under privilege, providing auditability.

Privileged Identity Management (PIM) is another approach, based on the concept of zero standing privileges, that can work in conjunction with traditional PAM. It provides “just-in-time” temporary enrollment into privileged access, with subsequent removal after use. In PIM, each session is provisioned, subject to approval, based on the requester’s justification for needing access. Sessions are time-bound and an audit history is recorded. This makes the most sensitive systems extremely difficult to compromise.
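
A conceptual sketch of that just-in-time flow, with invented names and a simplistic in-memory audit log:

    import time

    AUDIT_LOG: list[dict] = []

    def request_privilege(user: str, role: str, justification: str,
                          approved_by: str, ttl_seconds: int = 3600) -> dict:
        grant = {
            "user": user, "role": role, "justification": justification,
            "approved_by": approved_by, "expires": time.time() + ttl_seconds,
        }
        AUDIT_LOG.append(grant)   # every grant leaves a permanent trail
        return grant

    def is_active(grant: dict) -> bool:
        # Privileges expire on their own; no standing access remains.
        return time.time() < grant["expires"]

    g = request_privilege("alice", "db-admin", "patch CVE backlog",
                          approved_by="bob")
    print(is_active(g))  # True until the hour is up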

Adoption and Use are Key to IAM

IAM best practices and new technologies don’t work if they are not fully implemented. To understand the current prevalence, adoption, and impact of IAM practices, Converge Technology Solutions sponsored the Ponemon Institute to study organizations’ approach to IAM and how they are working to mitigate security threats targeting their user credentials, sensitive information, and confidential data.

Ponemon Institute surveyed 571 IT and IT security practitioners in the US who are involved in their organizations’ IAM program. The top three areas of respondents’ involvement are evaluating IAM effectiveness (51 percent of respondents), mitigating IAM security risk (46 percent of respondents) and selecting IAM vendors and contractors (46 percent of respondents).

The key takeaway from this research is how vulnerable organizations’ identities are to attacks. While organizations seem to know they need to improve the security posture of their IAM practices, they are not moving at the necessary speed to thwart the attackers. According to the research, organizations are slow to adopt processes and technologies that could strengthen the security posture of IAM programs.

Only 20 percent of respondents say their organizations have fully adopted zero trust. Only 24 percent of respondents say their organizations have fully implemented passwordless authentication, which uses more secure alternatives such as possession factors, one-time passwords, registered smartphones, or biometrics.

Following are research findings that reveal the state of IAM insecurity.

Less than half of organizations represented in this research are prepared to protect identities and prevent unauthorized access. Only 45 percent of respondents say their organizations are prepared to protect identities when attackers have AI capabilities. Less than half (49 percent) use risk-based authentication to prevent unauthorized access and only 37 percent of respondents say their organizations use AI security technology to continuously monitor authenticated user sessions to prevent unauthorized access.

Organizations lack the ability to respond quickly to next-generation attacks. Forty-six percent of respondents say if a threat actor used a stolen credential to log in to their organization, it could take 1 day to 1 week (18 percent) or more than 1 week (28 percent) to detect the incident. Eight percent of respondents say they would not be able to detect the incident.

IAM security is not a priority. As evidence, only 45 percent of respondents say their organizations have an established or formal IAM program, steering committee and/or internally defined strategy, and only 46 percent of respondents say IAM programs are a high or very high priority compared to other security initiatives.

IAM platforms are not viewed by many organizations as effective. Only 46 percent of respondents say their IAM platform(s) are very or highly effective for user access provisioning, lifecycle and termination. Only 44 percent of respondents rate their IAM platform(s) for authentication and authorization as very or highly effective. Similarly, only 45 percent of organizations that have a dedicated PAM platform say it is very or highly effective.

More organizations need to implement MFA as part of their IAM strategy. Thirty percent of respondents say their organizations have not implemented MFA. Only 25 percent of respondents say their organizations have applied MFA to both customer and workforce accounts.

Few organizations have fully integrated IAM with other technologies such as SIEM. Only 30 percent of respondents say IAM is fully integrated with other technologies and another 30 percent of respondents say IAM is not integrated with other technologies. Only 20 percent of respondents say practices to prevent unauthorized usage are integrated with the IAM identity governance platform.

As evidence that IAM security is not a priority for many organizations, many practices to prevent unauthorized usage are ad hoc and not integrated with the IAM platform. To perform periodic access review/attestation/certification of user accounts and entitlements, 31 percent of respondents say they use custom in-house-built workflows, 23 percent say the process is manual using spreadsheets, and 20 percent of respondents say it is executed through the IAM identity governance platform. Twenty-six percent of respondents say no access review/attestation/certification is performed.

Organizations favor investing in improving end-user experience. Improved user experience (48 percent of respondents) is the number one driver for IAM investment. Forty percent of respondents say constant changes to the organization due to corporate reorganizations, downsizing and financial distress are a reason to invest.

To read the rest of the findings in this report, visit the Converge Technology Solutions website. 

Suicide after a scam; one family’s story

Bob Sullivan

I’ve been saying for a while that the two halves of my journalism career — consumer protection and cybersecurity — are merging together. I will tell anyone who listens that poor customer service is our greatest cybersecurity vulnerability. Consumers often trust criminals more than the institutions designed to protect them, and when you listen to some customer service interactions, that’s not as surprising as it sounds.

So this month, I’m sharing a story we covered on The Perfect Scam podcast, which I host for AARP.  It makes clear that the consequences of unpatched vulnerabilities, including inadequate customer service, can be deadly. On the other hand, I want those of you who work to protect people to hear this story as a reminder that what you do is incredibly important and valuable and….sometimes a matter of life or death.  Keep that in mind on the hard days.

This month, we interviewed an adult daughter and son whose father took his own life after becoming embroiled in a crypto/romance scam.

“When he had to accept that this is a world where this happened, he was no longer able to be in this world,” his daughter told me.

As I interviewed Dennis’ children, I really connected with him. He was a single dad; he encouraged his son to join multiple rock bands (even when they were terrible, I was told). Dennis even spent years photographing his son making music.  And today, he’s a successful musician. Dennis spent summers at the lake in Minnesota with his daughter and her kids.

He was a great guy who wanted one more bit of love, affection, excitement, and purpose in his life. He thought he’d found that with Jessica, and with crypto. He wasn’t looking to get rich. He was looking to leave something for his family.

Instead, every dollar he had saved to that point in his life was stolen. And when the very last dollar was gone, the criminals talked him through opening up an LLC so he could borrow more money, which they stole.  Even after the kids lovingly stepped in, and dad was persuaded he’d been defrauded, he still believed in Jessica. He figured she was a victim, too.  And whoever Jessica was, Dennis was probably right. As we’ve chronicled before, many scam callers are victims of human trafficking, forced to steal money online against their will.

And when Dennis just couldn’t wrap his head around everything that had happened, he ended his life.

“I heard a story of someone in a book, and the way it was talked about in that story was knowing that he took his own life, but also feeling like he was killed by a crime,” his daughter told me.

(This story and accompanying podcast include extensive discussion of suicide. If you or someone you love is in crisis, call 9-8-8, a free hotline staffed by professionals who can provide immediate help.)

Readers of my newsletter know this is not the first time I’ve talked about the scam/suicide connection. Last year we told the story of Kathy Book, who survived a suicide attempt and bravely talked with me about her experience. The stakes for scams have risen so much in the past couple of years, even since I started working on The Perfect Scam. I’m hardly the only one who thinks so. 

Also, please don’t be fooled into thinking this malady impacts only the elderly. Everyone can be a victim under the right circumstances. The pain, fear and shame of being a victim have driven many to contemplate self-harm, often with tragic results. Teenagers.  Women.  Anyone. 

Look, nobody wants to have this conversation. I will be eternally grateful to Laura and Matt for speaking to me about their father — all because they want to help others. I can’t imagine how difficult that was for them, and what a gift it is to the rest of us. I can assure you I don’t want to talk with any more family members about their loved ones’ pain, suffering, and suicide. And I know I sound like a broken record when I talk about scams being more sophisticated, more prevalent, and more dangerous. But please, talk with one person you love about the dangers posed by crypto, and online dating, and online job hunting, and even online games. Tell them the Internet is full of liars who know how to say something to stir our emotions and make us click on something we’d “never” click on, or do something we’d “never” do. It’s ok to repeat yourself.

But most of all, be a person who can be talked to under any circumstances. Cultivate a non-judgmental, open spirit so they know you can be trusted. Tell them that no matter how bad things might suddenly seem — an IRS audit, an arrest warrant, accusations of child pornography — they can always talk with you; there’s always another way.

If you’d like,  listen to this week’s episode, Suicide After a Scam: One Family’s Story.  Especially if you still have that nagging feeling like, “This could never happen to me or anyone I know.”


The dark deepfake version I’m most worried about

Bob Sullivan

Everyone should be concerned about deepfakes and voice cloning; what’s difficult is deciding how concerned to be.  When it comes to the use of AI in crime, I think we have a 5-alarm fire on our hands.

I just came back from a week of talks at the University of Georgia journalism school, and I can tell you students there are very worried about the use of artificial intelligence in the generation of fake news.  I’m less worried about that, perhaps betraying naivete on my part. This is the second presidential election cycle where much has been made about the potential to create videos of candidates saying things they’d never say; so far, there are no high-profile examples of this swaying an electorate. There was a high-profile cloning of then-candidate Joe Biden’s voice during the New Hampshire primary, when an operative paid for robocalls designed to suppress the vote (he said, later, just to make a point).  That fake call was exposed pretty quickly.

We can all imagine a fake video that a candidate’s opponents might want to spread, but my sense is that such a deepfake wouldn’t persuade anyone to change their vote — it would merely reinforce an existing opinion.  I could be wrong; in an election this close, a few votes could make all the difference.  But there are far easier ways to suppress votes — such as long voting lines — and those should get at least as much attention as deepfakes.

I am far more concerned about more mundane-sounding AI attacks, however. Research I’ve done lately confirms what I have long feared — AI will be a boon for scam criminals. Imagine a crime call center staffed by robots armed with generative conversational skills.  AI bot armies really can replace front-line scam call center operators, and can be more effective at finding targets.  They don’t get tired, they have no morals, and perhaps most significantly — they hallucinate.  That means they will change their story randomly (say from, ‘your child’s been kidnapped’ to ‘your child is on the operating table’), and when they hit on a story that works, they’ll stick with it. This might allow a kind of dastardly evolution at a pace we’ve not seen before.  And while voters might see through attempts to put words in the mouths of presidential candidates, a hysterical parent probably won’t detect a realistic-sounding imitation of their kid after a car accident.

As with all new tech, we risk blaming too much fraud and scandal on the gadgets, without acknowledging these very same schemes have always been around.  Tech is a tool, and tools can always be used for both good and bad.  The idea of scaling up crime should concern everyone, however.  Think about spam. It’s always been a numbers game. It’s always been an economic battle.  There’s no way to end spam.  But if you make spam so costly for criminals that the return on investment drops dramatically – if spammers make $1 from every 100,000 emails, rather than $1 from every 1,000 emails — criminals move on.

That’s why any tech which lets criminals scale up quickly is so concerning. Criminals spend their time looking for their version of a hot lead — a victim who has been sent to a heightened emotional state so they can be manipulated. Front-line crime call center employees act as filtering mechanisms. Once they get a victim on the line and show that person can be manipulated into performing a small task, like buying a $50 gift card or sharing personal information, these “leads” are passed on to high-level closers who escalate the crime. This process can take months, or longer. Romance scam criminals occasionally groom victims for years. Now, imagine AI bots performing these front-line tasks. They wouldn’t have to be perfect. They’d just have to succeed at a higher rate than today’s callers, who are often victims of human trafficking working against their will.

This is the dark deepfake future that I’m most worried about.  Tech companies must lead on this issue. Those who make AI tools must game out their dark uses before they are released to the world.  There’s just too much at stake.

The 2024 Study on Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care

An effective cybersecurity approach centered around stopping human-targeted attacks is crucial for healthcare institutions, not just to protect confidential patient data but also to ensure the highest quality of medical care.

This third annual report was conducted to determine if the healthcare industry is making progress in reducing human-centric cybersecurity risks and disruptions to patient care. With sponsorship from Proofpoint, Ponemon Institute surveyed 648 IT and IT security practitioners in healthcare organizations who are responsible for participating in such cybersecurity strategies as setting IT cybersecurity priorities, managing budgets and selecting vendors and contractors.

According to the research, 92 percent of organizations surveyed experienced at least one cyberattack in the past 12 months, an increase from 88 percent in 2023. For organizations in that group, the average number of cyberattacks was 40. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months, from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost of the most expensive cyberattack was $4,740,000, a 5 percent decrease from last year. This included all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

At an average cost of $1.47 million, disruption to normal healthcare operations because of system availability problems continues to be the most expensive consequence of a cyberattack, a 13 percent increase from an average of $1.3 million in 2023. Users’ idle time and lost productivity because of downtime or system performance delays decreased from $1.1 million in 2023 to $995,484 in 2024. The cost of the time required to ensure the impact on patient care is corrected also decreased from an average of $1 million in 2023 to $853,272 in 2024.

Cloud/account compromise. The most frequent attacks in healthcare are against the cloud, making it the top cybersecurity threat for the third consecutive year. Sixty-three percent of respondents say their organizations are vulnerable or highly vulnerable to a cloud/account compromise. Sixty-nine percent say their organizations have experienced a cloud/account compromise. In the past two years, organizations in this group experienced an average of 20 cloud compromises.

Supply chain attacks. Organizations are very or highly vulnerable to a supply chain attack, according to 60 percent of respondents. Sixty-eight percent say their organizations experienced an average of four attacks against their supply chains in the past two years.

Ransomware. Ransomware remains an ever-present threat to healthcare organizations, even though concerns about it have declined. Fifty-four percent of respondents believe their organizations are vulnerable or highly vulnerable to a ransomware attack, a decline from 64 percent. In the past two years, organizations that had ransomware attacks (59 percent of respondents) experienced an average of four such attacks. While fewer organizations paid the ransom (36 percent in 2024 vs. 40 percent in 2023), the ransom paid spiked 10 percent to an average of $1,099,200 compared to $995,450 in the previous year.

Business email compromise (BEC)/spoofing/impersonation. Concerns about BEC/spoofing/impersonation attacks have decreased. Fifty-two percent of respondents say their organizations are vulnerable or highly vulnerable to a BEC/spoofing/impersonation incident, a decrease from 61 percent in 2023. Fifty-seven percent of respondents say their organizations experienced an average of four attacks in the past two years.

Cyberattacks can cause poor patient outcomes due to delays in procedures and tests.
As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Among the organizations that experienced the four types of cyberattacks in the study, an average of 69 percent report disruption to patient care.

Specifically, an average of 56 percent report poor patient outcomes due to delays in procedures and tests, an average of 53 percent saw an increase in medical procedure complications and an average of 28 percent say patient mortality rates increased, a 21 percent spike over last year.

The following are additional trends in how cyberattacks have affected patient safety and patient care delivery. 

  • Supply chain attacks are most likely to affect patient care. Sixty-eight percent of respondents say their organizations had an attack against their supply chains. Of this group, 82 percent say it disrupted patient care, an increase from 77 percent in 2023. Patients were primarily impacted by an increase in complications from medical procedures (51 percent) and delays in procedures and tests that resulted in poor outcomes (48 percent).
  • A BEC/spoofing/impersonation attack causes delays in procedures and tests. Fifty-seven percent of respondents say their organizations experienced a BEC/spoofing/impersonation incident. Of these respondents, 65 percent say a BEC/spoofing/impersonation attack disrupted patient care. Sixty-nine percent say the consequences caused delays in procedures and tests that have resulted in poor outcomes and 57 percent say it increased complications from medical procedures.
  • Ransomware attacks cause delays in patient care. Fifty-nine percent of respondents say their organizations experienced a ransomware attack. Of this group, 70 percent say ransomware attacks had a negative impact on patient care. Sixty-one percent say patient care was affected by delays in procedures and tests that resulted in poor outcomes and 58 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients.
  • Cloud/account compromises are least likely to disrupt patient care. Sixty-nine percent of respondents say their organizations experienced a cloud/account compromise. In this year’s study, 57 percent say the cloud/account compromises resulted in disruption to patient care operations, an increase from 49 percent in 2023. Fifty-six percent of respondents say cloud/account compromises increased complications from medical procedures and 52 percent say it resulted in a longer length of stay.
  • Data loss or exfiltration has had an impact on patient mortality. Ninety-two percent of organizations had at least two data loss incidents involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 20 such incidents in the past two years. Fifty-one percent say the data loss or exfiltration resulted in a disruption in patient care. Of these respondents, 50 percent say it increased the mortality rate and 37 percent say it caused delays in procedures and tests that resulted in poor outcomes. 

Other key trends 

Employee negligence because of not following policies caused data loss or exfiltration. The top root cause of data loss or exfiltration, according to 31 percent of respondents, was employee negligence. Such policies include employees’ responsibility to safeguard sensitive and confidential information and the practices they need to follow. As shown in the research, more than half of respondents (52 percent) say their organizations are very concerned about employee negligence or error.

Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Sixty-nine percent of respondents say their organizations experienced a successful cloud/account compromise and experienced an average of 20 cloud/account compromises over the past two years. The tools most often attacked are text messaging (61 percent of respondents), email (59 percent of respondents) and Zoom/Skype/Videoconferencing (56 percent of respondents).

The lack of clear leadership is a growing problem and a threat to healthcare’s cybersecurity posture. While 55 percent of respondents say their organizations’ lack of in-house expertise is a primary deterrent to achieving a strong cybersecurity posture, the lack of clear leadership as a challenge increased significantly since 2023, from 14 percent to 49 percent of respondents. Not having enough budget decreased from 47 percent to 40 percent of respondents in 2024. Survey respondents note that their annual budgets for IT increased 12 percent from last year ($66 million in 2024 vs. $58.26 million in 2023) with 19 percent of that budget dedicated to information security. Based on the findings, the healthcare industry seems to recognize that cyber safety is patient safety.

Organizations continue to rely on security training awareness programs to reduce risks caused by employees but are they effective?  Negligent employees pose a significant risk to healthcare organizations. While more organizations (71 percent in 2024 vs. 65 percent of respondents in 2023) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats, are they effective in reducing the risks? Fifty-nine percent say they conduct regular training and awareness programs. Fifty-three percent say their organizations monitor the actions of employees.

To reduce phishing and other email-based attacks, most organizations are using anti-virus/anti-malware (53 percent of respondents). This is followed by patch and vulnerability management (52 percent of respondents) and multi-factor authentication (49 percent of respondents).

Concerns about insecure mobile apps (eHealth) increased. Organizations are less worried about employee-owned mobile devices or BYOD (a decrease from 61 percent in 2023 to 53 percent of respondents in 2024), BEC/spoofing/impersonation (a decrease from 62 percent in 2023 to 46 percent in 2024) and cloud/account compromise (a decrease from 63 percent in 2023 to 55 percent in 2024). However, concerns about insecure mobile apps (eHealth) increased from 51 percent to 59 percent of respondents in 2024.

AI and machine learning in healthcare

For the first time, we include in the research the impact AI is having on security and patient care. Fifty-four percent of respondents say their organizations have embedded AI in cybersecurity (28 percent) or in both cybersecurity and patient care (26 percent). Fifty-seven percent of these respondents say AI is very effective in improving organizations’ cybersecurity posture.

AI can increase the productivity of IT security personnel and reduce the time and cost of patient care and administrators’ work. Fifty-five percent of respondents agree or strongly agree that AI-based security technologies will increase the productivity of their organization’s IT security personnel. Forty-eight percent of respondents agree or strongly agree that AI simplifies patient care and administrators’ work by performing tasks that are typically done by humans, but in less time and at lower cost.

Thirty-six percent of respondents use AI and machine learning to understand human behavior. Of these respondents, 56 percent say understanding human behavior to protect emails is very important, recognizing the prevalence of socially engineered attacks.

While AI offers benefits, there are issues that may deter widespread acceptance. Sixty-three percent of respondents say it is difficult or very difficult to safeguard confidential and sensitive data used in their organizations’ AI.

Other challenges to adopting AI are the lack of mature and/or stable AI technologies (34 percent of respondents), interoperability issues among AI technologies (32 percent of respondents) and errors and inaccuracies in data inputs ingested by AI technology (32 percent of respondents).

Click here to read the entire report at Proofpoint.com

The State of Cybersecurity Risk Management Strategies

What are the cyber-risks — and opportunities — in the age of AI? The purpose of this research is to determine how organizations are preparing for an uncertain future because of the ever-changing cybersecurity risks threatening their organizations. Ponemon Institute surveyed 632 IT and IT security professionals in the United States who are involved in their organizations’ cybersecurity risk management strategies and programs. The research was sponsored by Balbix.

Frequently reviewed and updated cybersecurity risk strategies and programs are the foundation of a strong cybersecurity posture. However, many organizations’ cybersecurity risk strategies and programs are outdated and jeopardize their ability to prevent and respond to security incidents and data breaches.

When asked how far in the future their organizations plan their cybersecurity risk strategies and programs, 65 percent of respondents say it is for two years (31 percent) or for more than two years (34 percent). Only 23 percent of respondents say the strategy is for only one year because of changes in technologies and the threat landscape, and 12 percent of respondents say it is for less than one year.

The following research findings reveal the steps that should be included in cybersecurity risk strategies and programs.

Identify unpatched vulnerabilities and patch them in a timely manner. According to a previous study sponsored by Balbix, only 10 percent of respondents were very confident that their organizations have a vulnerability and risk management program that helps them avoid a data breach. Only 15 percent of respondents rated their organizations’ ability to identify vulnerabilities and patch them in a timely manner as highly effective.

In this year’s study, 54 percent of respondents say unpatched vulnerabilities are of the greatest concern to their organizations. This is followed by outdated software (51 percent of respondents) and user error (51 percent of respondents).

Frequent scanning to identify vulnerabilities should be conducted. In the previous Balbix study, only 31 percent of respondents said their organizations scan daily (12 percent) or weekly (19 percent). In this year’s research, scanning has not increased in frequency. Only 38 percent of respondents say their organizations scan for vulnerabilities more than once per day (25 percent) or daily (13 percent).

The prioritization of vulnerabilities should not be limited to a vendor’s vulnerability scoring. Fifty-one percent of respondents say their organizations’ vendor vulnerability scoring is used to prioritize vulnerabilities. Only 33 percent of respondents say their organizations use a risk-based vulnerability management solution and only 25 percent of respondents say it is based upon a risk scoring system within their vulnerability management tools.

Take steps to reduce risks in attack vectors. An attack vector is a path or method that a hacker uses to gain unauthorized access to a network or computer to exploit system flaws. The risks of greatest concern are software vulnerabilities (45 percent of respondents), ransomware (37 percent of respondents), poor or missing encryption (36 percent of respondents) and phishing (36 percent of respondents).

Inform the C-suite and board of directors of the threats against the organization to obtain the necessary funding for cybersecurity programs and strategies. In the previous Balbix study, the research revealed that the C-suite and IT security functions operate in a communications silo. Only 29 percent of respondents said their organizations’ executives and senior management clearly communicate their business risk management priorities to the IT security leadership, and only 21 percent of respondents said their communications with the C-suite are highly effective. Respondents who said they were very effective attributed it to presenting information in an understandable way and keeping their leaders up to date on cyber risks rather than waiting until the organization had a data breach or security incident.

In this year’s study, 50 percent of respondents rate their organizations’ effectiveness in communicating the state of their cybersecurity as very low or low. The primary reasons are that negative facts are filtered before being disclosed to senior executives and the CEO (56 percent of respondents), communications are limited to only one department or line of business (silos) (44 percent of respondents) and the information can be ambiguous, which may lead to poor decisions (41 percent of respondents).

The IT and IT security functions should provide regular briefings on the state of their organizations’ cybersecurity risks. In addition to making their presentations understandable and unambiguous, briefings should not be limited to only when a serious security risk is revealed or if senior management initiates the request.

To address the challenge of meeting SLAs, organizations need to eliminate the silos that inhibit communication among project teams. Forty-nine percent of respondents say their organizations track SLAs to evaluate their cybersecurity posture. Of these respondents, only 44 percent say their organization is meeting most or all SLAs to support its cybersecurity posture.

If AI is adopted as part of a cybersecurity strategy, risks created by AI need to be managed. Fifty-four percent of respondents say their organizations have fully adopted AI (26 percent) or partially adopted it (28 percent). Risks include poor or misconfigured systems due to over-reliance on AI for cyber risk management, software vulnerabilities due to AI-generated code, data security risks caused by weak or no encryption, incorrect predictions due to data poisoning and inadvertent infringement of privacy rights due to the leakage of sensitive information.

Steps to reduce cybersecurity risks include regular user training and awareness about the security implications of AI; developing data security programs and practices for AI; identifying and mitigating bias in AI models for safe and responsible use; implementing a tool for software vulnerability management; conducting regular audits and tests to identify vulnerabilities in AI models and infrastructure; deploying risk quantification of AI models and their infrastructure; and considering tools to validate AI prompts and their responses.

To read more key findings from this research, please visit the Balbix website.

Appeals court rules TikTok could be responsible for algorithm that recommended fatal strangulation game to child

Bob Sullivan

TikTok cannot use federal law “to permit casual indifference to the death of a ten-year-old girl,” a federal judge wrote this week.  And with that, an appeals court has opened a Pandora’s box that might clear the way for Big Tech accountability.

Silicon Valley companies have become rich and powerful in part because federal law has shielded them from liability for many of the terrible things their tools enable and encourage — and, it follows, from the expense of stopping such things. Smart phones have poisoned our children’s brains and turned them into The Anxious Generation; social media and cryptocurrency have enabled a generation of scam criminals to rob billions from our most vulnerable people; advertising algorithms tap into our subconscious in an attempt to destroy our very agency as human beings. To date, tech firms have made only passing attempts to stop such terrible things, emboldened by federal law which has so far shielded them from liability …  even when they “recommend” that kids do things which lead to death.

That’s what happened to 10-year-old Nylah Anderson, who was served a curated “blackout challenge” video by TikTok on a personalized “For You” page back in 2021. She was among a series of children who took up that challenge and experimented with self-asphyxiation — and died. When Anderson’s parents tried to sue TikTok, a lower court threw out the case two years ago, saying tech companies enjoy broad immunity because of the 1996 Communications Decency Act and its Section 230.

You’ve probably heard of that. Section 230 has been used as a get-out-of-jail-free card by Big Tech for decades; it’s also been used as an endless source of bar fights among legal scholars.

But now, with very colorful language, a federal appeals court has revived the Anderson family lawsuit and thrown Section 230 protection into doubt.  Third Circuit Judge Paul Matey’s concurring opinion seethes at the idea that tech companies aren’t required to stop awful things from happening on their platforms, even when it’s obvious that they could.  He also takes a shot at those who seem to care more about the scholarly debate than about the clear and present danger facilitated by tech tools. It’s worth reading this part of the ruling in full.

TikTok reads Section 230 of the Communications Decency Act… to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy. And a view that has found support in a surprising number of judicial opinions dating from the early days of dialup to the modern era of algorithms, advertising, and apps. But it is not found in the words Congress wrote in Section 230, in the context Congress acted, in the history of common carriage regulations, or in the centuries of tradition informing the limited immunity from liability enjoyed by publishers and distributors of “content.” As best understood, the ordinary meaning of Section 230 provides TikTok immunity from suit for hosting videos created and uploaded by third parties. But it does not shield more, and Anderson’s estate may seek relief for TikTok’s knowing distribution and targeted recommendation of videos it knew could be harmful.

Later on, the opinion says, “The company may decide to curate the content it serves up to children to emphasize the lowest virtues, the basest tastes. But it cannot claim immunity that Congress did not provide.”

The ruling doesn’t tear down all Big Tech immunity. It makes a distinction between TikTok’s algorithm specifically recommending a blackout video to a child after the firm knew, or should have known, that it was dangerous … as opposed to a child seeking out such a video “manually” through a self-directed search. That kind of distinction has been lost through years of reading Section 230 at its most generous from Big Tech’s point of view. I think we all know where that has gotten us.

In the simplest of terms, tech companies shouldn’t be held liable for everything their users do, any more than the phone company can be liable for everything callers say on telephone lines — or, as the popular legal analogy goes, a newsstand can’t be liable for the content of magazines it sells.

After all, that newsstand has no editorial control over those magazines. Back in the 1990s, Section 230 added just a touch of nuance to this concept, which was required because tech companies occasionally dip into their users’ content and restrict it. Tech firms remove illegal drug sales, or child porn, for example. While that might seem akin to the exercise of editorial control, we want tech companies to do this, so Congress declared such occasional meddling does not turn a tech firm from a newsstand into a publisher, and it does not assume additional liability because of such moderation — it enjoys immunity.

This immunity has been used as permission for all kinds of undesirable activity. Using another mildly strained metaphor, a shopping mall would never be allowed to operate if it ignored massive amounts of crime going on in its hallways…let alone supplied a series of tools that enable elder fraud, or impersonation, or money laundering. But tech companies do that all the time. In fact, we know from whistleblowers like Frances Haugen that tech firms are fully aware their tools help connect anxious kids with videos that glorify anorexia.  And they lead lonely and grief-stricken people right to criminals who are expert at stealing hundreds of thousands of dollars from them. And they allow ongoing crimes like identity theft to occur without so much as answering the phone from desperate victims like American service members who must watch as their official uniform portraits are used for romance scams.

Will tech companies have to change their ways now? Will they have to invest real money into customer service to stop such crimes, and to stop their algorithms from recommending terrible things?  You’ll hear that such an investment is an overwhelming demand. Can you imagine if a large social media firm was forced to hire enough customer service agents to deal with fraud in a timely manner? It might put the company out of business.  In my opinion, that means it never had a legitimate business model in the first place.

This week’s ruling draws an appropriate distinction between tech firms that passively host content which is undesirable and firms which actively promote such content via algorithm. In other words, algorithm recommendations are akin to editorial control, and Big Tech must answer for what their algorithms do.  You have to ask: Why wouldn’t these companies welcome that kind of responsibility?

The Section 230 debate will rage on. Since both political parties have railed against Big Tech, and there is an appetite for change, it does seem like Congress will get involved. Good. Section 230 is desperate for an update. Just watch carefully to make sure Big Tech doesn’t write its own rules for regulating the next era of the digital age. Because it didn’t do so well with the current era.

If you want to read more, I’d recommend Matt Stoller’s Substack post on the ruling. 


2024 Global PKI, IoT and Post Quantum Cryptography Study

Public Key Infrastructure (PKI) is considered essential to keep people, systems and things securely connected. According to this research, in their efforts to achieve PKI maturity, organizations need to address the challenges of establishing clear ownership of the PKI strategy and acquiring sufficient skills.

The 2024 Global PKI, IoT and Post Quantum Cryptography research is part of a larger study — sponsored by Entrust — published in May involving 4,052 respondents in 9 countries. In this report, Ponemon Institute presents the findings based on a survey of 2,176 IT and IT security practitioners who are involved in their organizations’ enterprise PKI in the following 9 countries: United States (409 respondents), United Kingdom (289 respondents), Canada (245 respondents), Germany (309 respondents), Saudi Arabia (162 respondents), United Arab Emirates (UAE) (203 respondents), Australia/NZ (156 respondents), Japan (168 respondents) and Singapore (235 respondents).

“With the rise of costly breaches and AI-generated deepfakes, synthetic identity fraud, ransomware gangs, and cyber warfare, the threat landscape is intensifying at an alarming rate,” said Samantha Mabey, Director Solutions Marketing at Entrust. “This means that implementing a Zero Trust security practice is an urgent business imperative – and the security of organizations’ and their customers’ data, networks, and identities depends on it.”

The following is a summary of the most important takeaways from the research.

The orchestration of PKI software increased from 42 percent of respondents to 50 percent of respondents. However, 59 percent of respondents say orchestration is very or extremely complex, an increase from 43 percent of respondents.

Responsibility for the PKI strategy is being assigned to IT security and IT leaders. As PKI becomes increasingly critical to an organization’s security posture, the CISO and CIO are most responsible for their organization’s PKI strategy. The IT manager being most responsible for the PKI strategy has declined from 26 percent to 14 percent of respondents.

Fifty-two percent of respondents say they have PKI specialists on staff who are involved in their organizations’ enterprise PKI. Of the 48 percent of respondents whose organizations do not have PKI specialists, 45 percent rely on consultants and 55 percent rely on service providers.

A certificate authority (CA) provides assurance about the parties identified in a PKI certificate. Each CA maintains its own root CA for use only by the CA. The most popular method for deploying enterprise PKI continues to be through an internal corporate certificate authority (CA) or an externally hosted private CA—managed service, according to 60 percent and 47 percent of respondents, respectively.

No clear ownership, insufficient skills and requirements too fragmented or inconsistent are the top three challenges to enabling applications to use PKI. The challenge of no clear ownership continues to be the top challenge to deploying and managing PKI according to 51 percent of respondents. Other challenges are insufficient skills (43 percent of respondents) and requirements are too fragmented or inconsistent (43 percent of respondents).

Challenges that are declining significantly include the lack of resources (from 64 percent of respondents to 41 percent of respondents) and lack of visibility of the applications that will depend on PKI (from 48 percent to 33 percent of respondents).

As organizations strive to achieve greater PKI maturity, they anticipate the most change and uncertainty in PKI technologies and with vendors. Forty-three percent of respondents say the most change and uncertainty will be with PKI technologies and 41 percent of respondents say it will be with vendors’ products and services.

Cloud-based services continue to be the most important trend driving the deployment of applications using PKI. Cloud-based services continue to be the number one trend driving deployment of applications using PKI (46 percent of respondents). However, respondents who say IoT is the most important trend driving the deployment of applications using PKI has declined from 47 percent of respondents to 39 percent of respondents. BYOD and internal mobile device management has increased significantly from 24 percent of respondents to 34 percent of respondents.

More organizations are deploying certificate revocation techniques. In addition to verifying the CA’s signature on a certificate, the application software must also be sure that the certificate is still trustworthy at the time of use. Certificates that are no longer trustworthy must be revoked by the CA. The percentage of organizations that do not deploy a certificate revocation technique has declined significantly, from 32 percent to 13 percent.

The certificate revocation technique most often deployed continues to be Online Certificate Status Protocol (OCSP), according to 45 percent of respondents. For the first time, the manual certificate revocation list is the second technique most often deployed.
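
For illustration, here is a minimal CRL lookup in Python with the ‘cryptography’ package; the file paths are placeholders, and production code would also verify the CRL’s signature and freshness (an OCSP check instead queries the CA’s responder in real time):

    from cryptography import x509

    def is_revoked(cert_path: str, crl_path: str) -> bool:
        # Load the certificate being checked and the CA's revocation list.
        with open(cert_path, "rb") as f:
            cert = x509.load_pem_x509_certificate(f.read())
        with open(crl_path, "rb") as f:
            crl = x509.load_pem_x509_crl(f.read())
        # A match on the serial number means the CA has revoked the certificate.
        return crl.get_revoked_certificate_by_serial_number(cert.serial_number) is not None

    print(is_revoked("server.pem", "ca.crl.pem"))  # hypothetical files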

Smart cards (for CA/root key protection) are used by 41 percent of respondents to manage the private keys for their root/policy/issuing CAs. Thirty-one percent of respondents say removable media is used for CA/root keys.

Organizations’ primary root CA strategies are shifting significantly since 2021. A root certificate is a public key certificate that identifies a root certificate authority (CA). Both offline, self-managed and offline, externally hosted increased to 29 percent of respondents. Online, self-managed decreased from 31 percent of respondents to 25 percent of respondents and online, externally hosted decreased from 21 percent to 17 percent of respondents.

Organizations with internal CAs use an average of 6.5 separate CAs, managing an average of 31,299 internal or externally acquired certificates. An average of 9.5 distinct applications, such as email and network authentication, are managed by an organization’s PKI. Both the number and the nature of the applications that depend upon the PKI indicate that it is a strategic part of the core enterprise IT backbone.

Conflict with other apps using the same PKI is becoming a bigger challenge to enabling applications to use the same PKI. While not having sufficient skills remains the number one challenge, it has decreased from 43 percent to 37 percent of respondents.

Common Criteria Evaluation Assurance Level 4+ and Federal Information Processing Standards (FIPS) 140-2 Level 3 continue to be the most important security certifications when deploying PKI infrastructure and PKI-based applications. Fifty-seven percent of respondents say Common Criteria EAL 4+ is the most important security certification when deploying PKI. The evaluation at this level includes a comprehensive security assessment encompassing design testing and code review.

Fifty-five percent say FIPS 140-2 Level 3 is an important certification when deploying PKI. In the US, FIPS 140 is the standard called out by NIST in its definition of a “cryptographic module”, which is mandatory for most US federal government applications and a best practice in all PKI implementations.

SSL certificates for public-facing websites and services are still the application most often using PKI credentials, but the response has declined since 2022. Sixty-four percent of respondents say the application most often using PKI credentials is SSL certificates for public-facing websites and services. However, mobile device authentication and private cloud-based applications have increased as applications using PKI credentials (60 percent and 56 percent of respondents, respectively).

Scalability to millions of managed certificates continues to be the most important PKI capability for IoT deployments. While scalability is the most important, support for Elliptic Curve Cryptography (ECC) is the second most important PKI capability. ECC is an alternative technique to RSA and is considered a powerful cryptographic approach. It derives the security of its public/private key pairs from the mathematics of elliptic curves, achieving strength comparable to RSA with much smaller keys.
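
To illustrate the difference in key sizes, the short sketch below generates an RSA and an ECC key pair of roughly comparable (about 128-bit) strength using the open-source Python cryptography library; the specific key size and curve are illustrative choices, not recommendations from the study.

```python
# Compare RSA and ECC key generation; a minimal sketch using the
# open-source Python "cryptography" library.
from cryptography.hazmat.primitives.asymmetric import ec, rsa

# A 3072-bit RSA key and a 256-bit ECC key (curve P-256) are commonly
# cited as offering roughly comparable (~128-bit) security.
rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=3072)
ecc_key = ec.generate_private_key(ec.SECP256R1())

print("RSA key size:", rsa_key.key_size)  # 3072 bits
print("ECC key size:", ecc_key.key_size)  # 256 bits
```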

Today and in the next 12 months, the most important IoT security capabilities are delivering patches and updates to devices and monitoring device behavior. Device authentication will become more important in the next 12 months.

Post Quantum Cryptography

For the first time, this 2024 global study features organizations’ approach to achieving migration to Post Quantum Cryptography (PQC). As defined in the research, quantum computing is a rapidly emerging technology that harnesses the laws of quantum mechanics to solve problems too complex for classical computers.

Sixty-one percent of respondents plan to migrate to PQC within the next five years. The most popular path to PQC is implementing pure PQC (36 percent of respondents), followed by a hybrid approach combining traditional crypto with PQC (31 percent of respondents) and testing PQC with their organizations’ systems and applications (26 percent of respondents).
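
As a sketch of what the hybrid approach can look like in practice, the example below combines a classical ECDH exchange with a post-quantum key encapsulation and derives one session key from both shared secrets, so the session remains protected as long as either algorithm holds. It assumes the open-source liboqs-python bindings (oqs) and the Python cryptography library; the algorithm names are illustrative and depend on the installed library versions.

```python
# Hybrid key establishment sketch: classical ECDH + post-quantum KEM.
# Assumes the open-source liboqs-python bindings ("oqs") and the Python
# "cryptography" library; algorithm choices are illustrative.
import oqs
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Classical part: an ECDH exchange on curve P-256.
alice_ec = ec.generate_private_key(ec.SECP256R1())
bob_ec = ec.generate_private_key(ec.SECP256R1())
ecdh_secret = alice_ec.exchange(ec.ECDH(), bob_ec.public_key())

# Post-quantum part: an ML-KEM (Kyber) encapsulation. The algorithm name
# may be "Kyber768" on older liboqs versions.
with oqs.KeyEncapsulation("ML-KEM-768") as bob_kem:
    kem_public = bob_kem.generate_keypair()
    with oqs.KeyEncapsulation("ML-KEM-768") as alice_kem:
        ciphertext, alice_shared = alice_kem.encap_secret(kem_public)
    bob_shared = bob_kem.decap_secret(ciphertext)

# Derive one session key from both secrets; breaking the session now
# requires breaking BOTH the classical and the post-quantum algorithm.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None, info=b"hybrid-pqc-demo"
).derive(ecdh_secret + bob_shared)
```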

Many organizations are not prepared to achieve migration because they lack visibility and the right technologies. Only 45 percent of respondents say their organizations have full visibility into their cryptographic estate, and only 50 percent say they have the right technology to support the larger key lengths and computing power required by PQC.

To prepare for migration, organizations need to know what cryptographic assets and algorithms they have and where they reside. It is important to know data flows and where organizations’ long-life data resides that is sensitive and must remain confidential. To achieve full visibility, organizations need to ensure they have a full and clear inventory of all the cryptographic assets (keys, certificates, secrets and algorithms across the environment) and what is being secured.
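
As a starting point for such an inventory, the hedged sketch below walks a directory tree, parses any PEM-encoded certificates it finds and records the algorithm, key size and expiry of each; the scan root is a placeholder, and a real inventory would also have to cover keys, secrets and certificates discovered across the network.

```python
# Minimal cryptographic-asset inventory sketch using the open-source
# Python "cryptography" library; the scan root is a placeholder.
import pathlib
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import ec, rsa

SCAN_ROOT = pathlib.Path("/etc/pki")  # hypothetical starting point

for pem in SCAN_ROOT.rglob("*.pem"):
    try:
        cert = x509.load_pem_x509_certificate(pem.read_bytes())
    except ValueError:
        continue  # not a certificate; a fuller tool would inspect keys too
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey):
        algo = f"RSA-{key.key_size}"   # quantum-vulnerable
    elif isinstance(key, ec.EllipticCurvePublicKey):
        algo = f"EC-{key.curve.name}"  # quantum-vulnerable
    else:
        algo = type(key).__name__
    print(pem, algo, cert.not_valid_after, cert.subject.rfc4514_string())
```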

Organizations are slow to prepare for the post-quantum threat. The quantum threat, sometimes referred to as “post quantum”, is the inevitability that within the decade a quantum computer will be capable of breaking traditional public key cryptography. Experts surveyed by the Global Risk Institute predict quantum computing will compromise cybersecurity as early as 2027.

Most respondents are not preparing for the post-quantum threat. Twenty-seven percent of respondents say their organizations have not yet considered the impact of the threat, 23 percent are aware of the potential impact but haven’t started to create a strategy and 9 percent are unsure if their organizations are preparing for the post-quantum threat.

To prepare for the post-quantum threat, 44 percent of respondents say their organizations are building a post-quantum cryptography strategy. Although it is recommended as a best practice, only 38 percent of respondents say their organization is taking an inventory of its cryptographic assets and/or ensuring it is crypto agile. Crypto agility is the capacity for an information security system to adopt an alternative to the original encryption method or cryptographic primitive without significant change to system infrastructure.

To protect against the post-quantum threat, organizations need an inventory of their cryptographic assets and a fully crypto agile approach so they can easily transition from one algorithm to another. Improving the ability to have a complete inventory of cryptographic assets (43 percent of respondents) and to achieve crypto agility (40 percent of respondents) are the top two concerns.

Crypto agility is critical to the migration to PQC, yet only 28 percent of respondents say their organizations have a fully implemented crypto agile approach.
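
One common way to build in that agility is to hide the algorithm choice behind a small, configuration-driven registry so that swapping primitives is a configuration change rather than a code change. The sketch below is illustrative only; the registry and cipher choices are assumptions, not anything prescribed in the report.

```python
# Crypto-agility sketch: algorithm selection via a config-driven registry,
# so swapping primitives does not require changes across the codebase.
# Cipher choices and names here are illustrative assumptions.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

# Registry of interchangeable AEAD primitives behind one interface.
AEAD_REGISTRY = {
    "aes-256-gcm": AESGCM,
    "chacha20-poly1305": ChaCha20Poly1305,
}

def encrypt(plaintext: bytes, key: bytes, algorithm: str = "aes-256-gcm") -> bytes:
    """Encrypt under whichever registered AEAD the configuration names."""
    aead = AEAD_REGISTRY[algorithm](key)
    nonce = os.urandom(12)  # 96-bit nonce, valid for both ciphers here
    return nonce + aead.encrypt(nonce, plaintext, None)

# Swapping algorithms is now a one-line configuration change:
key = os.urandom(32)
blob = encrypt(b"long-life sensitive data", key, algorithm="chacha20-poly1305")
```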

To read more key findings and the full report, please visit Entrust.com.

Social media hack attack relies on the kindness of friends; I was (almost) a victim

Bob Sullivan

You might think your humble social media account would be of no use to multinational crime gangs, but you’d be wrong. Computer criminals have dozens of ways to turn hijacked Facebook, Instagram, TikTok or Twitter accounts into cash…or worse. You’d be stunned how quickly friend-of-friend attacks escalate into massive crime scenes, so it’s essential you protect your account. Be suspicious of ANY odd or surprising incoming messages. Your best bet in most cases is to do nothing.

I offer this reminder because I’ve just learned about a new(ish) way criminals steal social accounts. It relies only on the kindness of friends. It’s so simple, it almost got me, and it did get a friend of mine. And because there’s a bit of “truth” to the ask, you can see why victims might comply with the single, brief request the criminals make —  and inadvertently enable the hacker to use the change password / password recovery feature to hijack their account.

I’ll describe it.  It’s a bit confusing, but a picture is worth 1,000 words. I recently got this instant message on Instagram from a friend.

And, indeed, I had recently received an email from Facebook that looked like this:

The kicker is this message came from a long-time friend of mine — or at least from his account. So I was inclined to help him. He’d lost access to his account, which I know is essential to his small business. Also, the message came late at night, when I didn’t really have on my cybersecurity journalist hat. So, I opened the message and thought about responding by sending him the code.

I also recalled that Facebook uses friends to assist with account recovery when a criminal hijacks an account. At least, that was true until about a year ago.  An innovative feature called “trusted contacts” used to be available when victims were working to recover access to their accounts. In essence, Facebook/Meta would write to people in this trusted contact list and ask them to vouch for someone who was locked out of their account. Hackers learned how to exploit the feature, however, so Facebook discontinued it sometime in 2023. 

Still, since I had some vague recollection about it, I entertained my friend’s request.   Fortunately, instead of sending him the code I’d received in email from Facebook, I chose to send him a message using another piece of software owned by another company — not Facebook or Instagram or WhatsApp — to ask him what was going on.

And there, a few hours later, he told me he’d been hacked…just because he was trying to help out a friend regain access to his account. And now, like so many account hijacking victims I’ve written about, he’s lost in the hellscape that is trying to restore account access using Meta’s backlogged process.

It’s no secret I think companies like Facebook could do a lot more to protect users, beginning with better customer service to deal with problems when they arise. Recall, it took me half a year to regain access to my dog’s Instagram account after my cell phone was stolen.  In this case, I have an additional beef with Facebook. Look again at the email I received. The subject line really works in the criminal’s favor. It just says “XXXX is your account recovery code.” That’s all you see in an email preview, and it would be easy to just read that off to someone who asked for it.  The *body* of the email indicates that the code was sent in response to “a request to reset your Facebook password.”  But if a recipient were to quickly try to help out a friend in distress, they might not read that far.

By now, you’ve figured out the “game” the hackers are playing. They were trying to get a code that would have allowed them to reset my Facebook account and hijack it.  I was lucky; my friend was not.

What could a criminal do with access to his account, or mine? They could soon start offering fraudulent cryptocurrency “opportunities.”  Or run a convincing “I need bail money” scam.  Or bank the account with thousands of other hijacked accounts for use in some future scam or disinformation campaign.  An account could be used to spread a fake AI video of a presidential campaign, for example. Pretty awful stuff you’d never want to be a part of.

This attack is not new; I see mentions of it on Reddit that date back at least two years.  So I hope this story feels like old news to you and you are confident you’d see through this scam. But it feels very persuasive to me, so I wanted to get a warning to you as soon as possible.

Let me know if you’ve seen this attack, or anything similar, operating out there in the wild.  Meanwhile, please take this as a reminder that criminals want to steal your digital identity, even if you believe your corner of the Internet universe is so small that no one would ever want to steal it.

2024 Cybersecurity Threat and Risk Management Report

The threat landscape keeps breaking records as it becomes more volatile and complex. Most organizations are experiencing data breaches and security incidents; what’s more, they are also reporting an increase in frequency. Sixty-one percent of organizations represented in this research had a data breach or cybersecurity incident in the past two years, and 55 percent of respondents say they have experienced four to five or more of these incidents.

The purpose of this research, sponsored by Optiv,  is to learn the extent of the cybersecurity threats facing organizations and the steps being taken to manage the risks of potential data breaches and cyberattacks. Ponemon Institute surveyed 650 IT and cybersecurity practitioners in the US who are knowledgeable about their organizations’ approach to threat and risk management practices.

In the past 12 months 61 percent of respondents say cybersecurity incidents have increased significantly (29 percent) or increased (32 percent). Only 21 percent of respondents say incidents have decreased (13 percent) or significantly decreased (8 percent).

The following is a summary of the most salient research findings.

An enterprise-wide Cybersecurity Incident Response Plan (CSIRP) is an essential blueprint for navigating a security crisis. A CSIRP is a written and systematic approach that establishes procedures and documentation and helps organizations before, during and after a security incident. Despite the importance of such a plan, less than half of respondents (46 percent) say their organizations have a CSIRP that is applied consistently across the entire enterprise. Twenty-six percent of respondents say their CSIRP is not applied consistently across the enterprise and 17 percent of respondents say it is ad hoc. Of those organizations with a CSIRP, only 50 percent say it is effective or highly effective. To improve effectiveness, a CSIRP needs to be applied consistently throughout the organization. This would ensure that, should a data breach occur, response activities would be uniform and not siloed based on different functions having different CSIRPs.

To determine if the plan can deal with incidents that are increasing in frequency and severity, the CSIRP should be regularly reviewed and tested. However, only 23 percent of respondents say the CSIRP is reviewed and tested each quarter and 44 percent of respondents say it is reviewed twice per year (29 percent) or once per year (15 percent). Only 48 percent of respondents say it is tested by a third party.

Proof that investments in technologies and resources are effective in reducing security incidents determines how much to allocate to the cybersecurity budget. An average of $26 million was allocated to cybersecurity investments in 2024. To determine the 2024 cybersecurity budget, organizations focus on evaluating the proven effectiveness of investments in reducing security incidents (61 percent of respondents), assessing the threats and risks facing the organization (53 percent of respondents) and analyzing the total cost of ownership (48 percent of respondents). Only 36 percent of respondents say there is no formal approach for determining the cybersecurity budget.

More resources are allocated to assessing the effectiveness of organizations’ cybersecurity processes and governance practices. The 2024 cybersecurity budget is being used to conduct an internal assessment of the effectiveness of their organizations’ security processes and governance practices (60 percent of respondents), to increase resources allocated to Identity and Access Management (58 percent of respondents), to purchase more cybersecurity tools (51 percent of respondents) and to hire more skilled security staff (49 percent of respondents).

Compliance practices and cybersecurity insurance are considered the most important governance activities. Fifty-two percent of respondents say the most important cybersecurity governance activity is to conduct internal or external audits of security and IT compliance practices. The second and third most important governance practices are the purchase of cybersecurity insurance (46 percent of respondents) and establishment of a business continuity management function (42 percent of respondents).

Cybersecurity insurance is difficult to purchase because of insurers’ requirements. Only 29 percent of respondents say their organizations have cybersecurity insurance. Forty-eight percent of respondents say they plan to purchase cybersecurity insurance in the next six months (23 percent) or in the next year (25 percent). Fifty-two percent of respondents say it is highly difficult to purchase cybersecurity insurance because of the insurer’s requirements. Insurers often require certain policies and technologies to be in place, such as regular scanning for vulnerabilities that need to be patched, adequate staff to support cybersecurity programs and policies, and multi-factor authentication for remote access.

The ability to reduce the time to detect, contain and recover from a data breach measures the effectiveness of cybersecurity threat and risk management programs. The metrics most often used to report on the state of the cybersecurity risk management program are the time to detect a data breach or other security incident (47 percent of respondents), time to contain a data breach or other security incident (43 percent of respondents) and time to recover from a data breach or other security incident (41 percent of respondents). An enterprise-wide CSIRP is valuable in enhancing the ability to respond quickly to a data breach.
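
As a simple illustration of how these metrics can be computed, the sketch below derives mean time to detect, contain and recover from a list of incident timestamps; the field names and sample data are hypothetical.

```python
# Mean time to detect / contain / recover, computed from incident
# timestamps; field names and sample data are hypothetical.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Incident:
    occurred: datetime
    detected: datetime
    contained: datetime
    recovered: datetime

def mean_delta(deltas: list) -> timedelta:
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    Incident(datetime(2024, 1, 3), datetime(2024, 1, 5),
             datetime(2024, 1, 6), datetime(2024, 1, 9)),
    Incident(datetime(2024, 2, 10), datetime(2024, 2, 11),
             datetime(2024, 2, 13), datetime(2024, 2, 14)),
]

mttd = mean_delta([i.detected - i.occurred for i in incidents])   # time to detect
mttc = mean_delta([i.contained - i.detected for i in incidents])  # time to contain
mttr = mean_delta([i.recovered - i.contained for i in incidents]) # time to recover
print(f"MTTD={mttd}, MTTC={mttc}, MTTR={mttr}")
```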

Too many cybersecurity tools are hindering a strong cybersecurity posture. Organizations in this research have an average of 54 separate cybersecurity technologies. Forty percent of respondents say their organizations have too many cybersecurity tools to be able to achieve a strong cybersecurity posture. Only 29 percent of respondents say their organizations have the right number of cybersecurity tools. Not only are there too many tools; only 51 percent of respondents rate these technologies as highly effective in mitigating cyber risks.

Technology efficiency and integration are key to achieving the right number of technologies. To have the right number of separate security technologies, 53 percent of respondents say the key is to make sure technologies are used efficiently and 51 percent of respondents say it is to make sure data is integrated across the technologies deployed.

The primary technologies deployed are network firewalls (NGFW) and intrusion detection and prevention (IDS/IPS), according to 58 percent of respondents. Other technologies most often deployed are endpoint antivirus (AV) and anti-malware (AM) (51 percent of respondents), cloud/container security (50 percent of respondents) and endpoint detection and response (EDR) (48 percent of respondents).

Organizations are investing more in cloud services that go beyond traditional on-premises security methods. A SASE (secure access service edge) or Security Service Edge (SSE) architecture combines networking and security-as-a-service functions into a single cloud-delivered service at the network edge. Forty-six percent of respondents say their organizations have implemented SASE, and of these respondents, 42 percent say their organizations engaged a third party or system integrator to support the SASE or SSE implementation.

According to the findings, there is significant interest in Security Orchestration Automation and Response (SOAR) adoption. SOAR seeks to alleviate the strain on IT teams by incorporating automated response to a variety of events. Seventy-three percent of respondents say their organizations use SOAR significantly (38 percent) or moderately (35 percent).
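
A minimal illustration of the idea behind SOAR is a playbook that maps event types to automated response actions and escalates anything unrecognized to an analyst; the alert fields and actions below are hypothetical, not those of any particular SOAR product.

```python
# Minimal SOAR-style playbook sketch: map alert types to automated
# response actions; alert fields and actions are hypothetical.
def disable_account(alert):  # hypothetical response action
    print(f"Disabling account {alert['user']}")

def isolate_host(alert):     # hypothetical response action
    print(f"Isolating host {alert['host']}")

# Playbook: which automated response runs for which event type.
PLAYBOOK = {
    "credential_stuffing": disable_account,
    "malware_detected": isolate_host,
}

def handle(alert: dict) -> None:
    action = PLAYBOOK.get(alert["type"])
    if action:
        action(alert)  # automated response, no analyst in the loop
    else:
        print("Escalating to a human analyst:", alert)

handle({"type": "malware_detected", "host": "ws-042", "user": "jdoe"})
```
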
Cybersecurity use cases for artificial intelligence (AI) and machine learning (ML) models are on the rise. An ML model in cybersecurity is a computational algorithm that uses statistical techniques to analyze and interpret data to make predictions or decisions related to security. Respondents say their organizations use AI/ML to maintain competitive advantage (49 percent), to prevent cyberattacks (44 percent) and to support their IT security team (40 percent). To ensure that AI/ML reduces cybersecurity risks and threats, respondents say they use AI vulnerability scanning (59 percent), an AI firewall (52 percent) and adversary TTP training for security staff (47 percent).
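
To give a flavor of what an ML model in this setting can look like, the sketch below applies unsupervised anomaly detection to a handful of login feature vectors using the open-source scikit-learn library; the features and data are invented for illustration.

```python
# Illustrative ML-for-security sketch: unsupervised anomaly detection
# over login features with scikit-learn; the features and data are
# hypothetical, not the models described by the survey respondents.
import numpy as np
from sklearn.ensemble import IsolationForest

# Feature vectors per login: [hour_of_day, failed_attempts, new_device]
logins = np.array([
    [9, 0, 0], [10, 1, 0], [11, 0, 0], [14, 0, 0],
    [3, 7, 1],  # unusual: 3 a.m., many failures, new device
])

model = IsolationForest(contamination=0.2, random_state=0).fit(logins)
print(model.predict(logins))  # -1 flags anomalies, 1 flags normal logins
```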

To read best practices of high performing organizations, and the rest of this report, download it from Optiv’s website.

Cybercrime adds a new, very dangerous twist — face-to-face meetings

Bob Sullivan

We often think of cybercrime as a long-distance nightmare.  A victim is manipulated by someone pretending to be a lover, or a boss, or a seller, and then sends that criminal money using some electronic, virtual method.  A really disturbing trend I’ve noticed recently is the increased frequency of in-person meetings as part of a cybercrime.  A criminal visits the victim to pick up cash, or even gold, at their home (like this story we did in March). A criminal sends an Uber delivery person to pick up a  “package” that contains fraudulent payments. A victim is lured into a meeting over a Facebook Marketplace purchase, then robbed. Or, in the case of a recent Perfect Scam podcast I worked on, a con artist lurks at a “zone of trust” place like a golf course or a church looking for generous people to target with a charity scam.

This in-person meeting trend is alarming because a lot more things can go wrong when criminals are in the same physical space as their victims.  Earlier, I told you about the tragic story of an Ohio man who had been communicating with criminals attempting a “grandparent scam” and who shot an Uber driver he said he believed was part of the scam; he has been indicted for murder and has pleaded not guilty. The driver, who died, was not part of the scam.

Steve Baker, a longtime consumer advocate and former Federal Trade Commission lawyer, first pointed out this trend to me, and now I’m seeing it in many places. The Social Security Administration issued a dire-sounding warning a few weeks ago titled “Don’t Hand Off Cash to ‘Agents.’ ”   It reads:

“The Social Security Administration (SSA) Office of the Inspector General (OIG) is receiving alarming reports that criminals are impersonating SSA OIG agents and are requesting that their targets meet them in person to hand off cash. SSA OIG agents will never pick up money at your door or in any type of exchange. This is a SCAM!

NEVER exchange money or funds of any kind with any individual stating they are an SSA OIG agent. This new scam trend introduces an element of physical danger to scams that never existed before.”

Meanwhile, police in New York are warning about a rise in crimes that begin as fake Facebook Marketplace ads — and end with victims staring down the barrel of a gun.

Why are cybercriminals getting this bold and meeting victims in person, or sending someone else to do that?  It’s too early to tell, but part of the reason *could* be increased transaction scrutiny at places like Zelle or cryptocurrency exchanges, along with increased fraud awareness around gift cards.  Time will tell.

In the meantime, I’m very concerned we will see more situations like that story from Ohio. Please be extra vigilant when speaking with loved ones about cybercrime.  Look and listen for signs of surprising new friends or unexpected meetings. Keep those lines of communication open.