
Social media hack attack relies on the kindness of friends; I was (almost) a victim

Bob Sullivan

You might think your humble social media account would be of no use to multinational crime gangs, but you’d be wrong. Computer criminals have dozens of ways to turn hijacked Facebook, Instagram, TikTok or Twitter accounts into cash…or worse. You’d be stunned how quickly friend-of-friend attacks escalate into massive crime scenes, so it’s essential you protect your account. Be suspicious of ANY odd or surprising incoming messages. Your best bet in most cases is to do nothing.

I offer this reminder because I’ve just learned about a new(ish) way criminals steal social accounts. It relies only on the kindness of friends. It’s so simple, it almost got me, and it did get a friend of mine. And because there’s a bit of “truth” to the ask, you can see why victims might comply with the single, brief request the criminals make —  and inadvertently enable the hacker to use the change password / password recovery feature to hijack their account.

I’ll describe it.  It’s a bit confusing, but a picture is worth 1,000 words. I recently got this instant message on Instagram from a friend.

And, indeed, I had recently received an email from Facebook that looked like this:

The kicker is this message came from a long-time friend of mine — or at least from his account. So I was inclined to help him. He’d lost access to his account, which I know is essential to his small business. Also, the message came late at night, when I didn’t really have on my cybersecurity journalist hat. So, I opened the message and thought about responding by sending him the code.

I also recalled that Facebook uses friends to assist with account recovery when a criminal hijacks an account. At least, that was true until about a year ago.  An innovative feature called “trusted contacts” used to be available when victims were working to recover access to their accounts. In essence, Facebook/Meta would write to people in this trusted contact list and ask them to vouch for someone who was locked out of their account. Hackers learned how to exploit the feature, however, so Facebook discontinued it sometime in 2023. 

Still, since I had some vague recollection about it, I entertained my friend’s request.   Fortunately, instead of sending him the code I’d received in email from Facebook, I chose to send him a message using another piece of software owned by another company — not Facebook or Instagram or WhatsApp — to ask him what was going on.

And there, a few hours later, he told me he’d been hacked…just because he was trying to help out a friend regain access to his account. And now, like so many account hijacking victims I’ve written about, he’s lost in the hellscape that is trying to restore account access using Meta’s backlogged process.

It’s no secret I think companies like Facebook could do a lot more to protect users, beginning with better customer service to deal with problems when they arise. Recall, it took me half a year to regain access to my dog’s Instagram account after my cell phone was stolen.  In this case, I have an additional beef with Facebook. Look again at the email I received. The subject line really works in the criminal’s favor. It just says “XXXX is your account recovery code.” That’s all you see in an email preview, and it would be easy to just read that off to someone who asked for it.  The *body* of the email indicates that the code was sent in response to “a request to reset your Facebook password.”  But if a recipient were to quickly try to help out a friend in distress, they might not read that far.

By now, you’ve figured out the “game” the hackers are playing. They were trying to get a code that would have allowed them to reset my Facebook account and hijack it.  I was lucky; my friend was not.

What could a criminal do with access to his account, or mine? They could soon start offering fraudulent cryptocurrency “opportunities.”  Or run a convincing “I need bail money” scam.  Or bank the account with thousands of other hijacked accounts for use in some future scam or disinformation campaign.  An account could be used to spread a fake AI video during a presidential campaign, for example. Pretty awful stuff you’d never want to be a part of.

This attack is not new; I see mentions of it on Reddit that date back at least two years.  So I hope this story feels like old news to you and you are confident you’d see through this scam. But it feels very persuasive to me, so I wanted to get a warning to you as soon as possible.

Let me know if you’ve seen this attack, or anything similar, operating out there in the wild.  Meanwhile, please take this as a reminder that criminals want to steal your digital identity, even if you believe your corner of the Internet universe is so small that no one would ever want to steal it.

2024 Cybersecurity Threat and Risk Management Report

The threat landscape keeps breaking records as it becomes more volatile and complex. Most organizations are experiencing data breaches and security incidents, and they report that these are increasing in frequency. Sixty-one percent of organizations represented in this research had a data breach or cybersecurity incident in the past two years, and 55 percent of respondents say they have experienced four to five or more of these incidents.

The purpose of this research, sponsored by Optiv,  is to learn the extent of the cybersecurity threats facing organizations and the steps being taken to manage the risks of potential data breaches and cyberattacks. Ponemon Institute surveyed 650 IT and cybersecurity practitioners in the US who are knowledgeable about their organizations’ approach to threat and risk management practices.

In the past 12 months 61 percent of respondents say cybersecurity incidents have increased significantly (29 percent) or increased (32 percent). Only 21 percent of respondents say incidents have decreased (13 percent) or significantly decreased (8 percent).

The following is a summary of the most salient research findings:

An enterprise-wide Cybersecurity Incident Response Plan (CSIRP) is an essential blueprint for navigating a security crisis. A CSIRP is a written, systematic approach that establishes procedures and documentation to help organizations before, during and after a security incident. Despite the importance of such a plan, less than half of respondents (46 percent) say their organizations have a CSIRP that is applied consistently across the entire enterprise. Twenty-six percent of respondents say their CSIRP is not applied consistently across the enterprise and 17 percent of respondents say it is ad hoc. Of those organizations with a CSIRP, only 50 percent say it is effective or highly effective. To improve effectiveness, the CSIRP needs to be applied consistently throughout the organization; this would ensure that, should a data breach occur, response activities are uniform rather than siloed across functions with different plans.

To determine if the plan can deal with incidents that are increasing in frequency and severity, the CSIRP should be regularly reviewed and tested. However, only 23 percent of respondents say the CSIRP is reviewed and tested each quarter and 44 percent of respondents say it is reviewed twice per year (29 percent) or once per year (15 percent). Only 48 percent of respondents say it is tested by a third party.

Proof that investments in technologies and resources are effective in reducing security incidents determines how much to allocate to the cybersecurity budget. An average of $26 million was allocated to cybersecurity investments in 2024. To determine the 2024 cybersecurity budget, organizations focus on evaluating the proven effectiveness of investments in reducing security incidents (61 percent of respondents), assessing the threats and risks facing the organization (53 percent of respondents) and analyzing the total cost of ownership (48 percent of respondents). Only 36 percent of respondents say there is no formal approach for determining the cybersecurity budget.

More resources are allocated to assessing the effectiveness of organizations’ cybersecurity processes and governance practices. The 2024 cybersecurity budget is being used to conduct an internal assessment of the effectiveness of organizations’ security processes and governance practices (60 percent of respondents), to increase resources allocated to Identity and Access Management (58 percent of respondents), to purchase more cybersecurity tools (51 percent of respondents) and to hire more skilled security staff (49 percent of respondents).

Compliance practices and cybersecurity insurance are considered the most important governance activities. Fifty-two percent of respondents say the most important cybersecurity governance activity is to conduct internal or external audits of security and IT compliance practices. The second and third most important governance practices are the purchase of cybersecurity insurance (46 percent of respondents) and establishment of a business continuity management function (42 percent of respondents).

Cybersecurity insurance is difficult to purchase because of insurers’ requirements. Only 29 percent of respondents say their organizations have cybersecurity insurance. Forty-eight percent of respondents say they plan to purchase cybersecurity insurance in the next six months (23 percent) or in the next year (25 percent). Fifty-two percent of respondents say it is highly difficult to purchase cybersecurity insurance because of insurers’ requirements.

Insurers often require having certain policies and technologies in place, such as regular scanning for vulnerabilities that need to be patched, adequate staff to support cybersecurity programs and policies, and multi-factor authentication for remote access.

The ability to reduce the time to detect, contain and recover from a data breach measures the effectiveness of cybersecurity threat and risk management programs. The metrics most often used to report on the state of the cybersecurity risk management program are the time to detect a data breach or other security incident (47 percent of respondents), time to contain a data breach or other security incident (43 percent of respondents) and time to recover from a data breach or other security incident (41 percent of respondents). An enterprise-wide CSIRP is valuable in enhancing the ability to respond quickly to a data breach.
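The time-to-detect, time-to-contain and time-to-recover metrics described above can be computed directly from incident timestamps. Below is a minimal sketch; the incident records and the field names (`occurred`, `detected`, `contained`, `recovered`) are illustrative assumptions, not the survey's methodology:

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records; real data would come from a SIEM or ticketing system.
incidents = [
    {"occurred": datetime(2024, 3, 1, 9, 0),   "detected": datetime(2024, 3, 2, 9, 0),
     "contained": datetime(2024, 3, 3, 9, 0),  "recovered": datetime(2024, 3, 5, 9, 0)},
    {"occurred": datetime(2024, 4, 10, 12, 0), "detected": datetime(2024, 4, 10, 18, 0),
     "contained": datetime(2024, 4, 11, 6, 0), "recovered": datetime(2024, 4, 12, 12, 0)},
]

def mean_hours(start_key: str, end_key: str) -> float:
    """Average elapsed hours between two incident timestamps."""
    return mean((i[end_key] - i[start_key]).total_seconds() / 3600 for i in incidents)

mttd = mean_hours("occurred", "detected")    # mean time to detect
mttc = mean_hours("detected", "contained")   # mean time to contain
mttr = mean_hours("contained", "recovered")  # mean time to recover
```

Tracking these three averages over time is one simple way to see whether a CSIRP is actually shortening the response cycle.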

Too many cybersecurity tools are hindering a strong cybersecurity posture. Organizations in this research have an average of 54 separate cybersecurity technologies. Forty percent of respondents say their organizations have too many cybersecurity tools to be able to achieve a strong cybersecurity posture. Only 29 percent of respondents say their organizations have the right number of cybersecurity tools. Not only are there too many tools; only 51 percent of respondents rate these technologies as highly effective in mitigating cyber risks.

Technology efficiency and integration are key to achieving the right number of technologies. To achieve the right number of separate security technologies, 53 percent of respondents say it is important to make sure technologies are used efficiently and 51 percent of respondents say it is important to make sure data is integrated across the technologies deployed.

The primary technologies deployed are next-generation firewalls (NGFW) and intrusion detection/prevention systems (IDS/IPS), according to 58 percent of respondents. Other technologies most often deployed are endpoint antivirus (AV) and anti-malware (AM) (51 percent of respondents), cloud/container security (50 percent of respondents) and endpoint detection and response (EDR) (48 percent of respondents).

Organizations are investing more in cloud services that go beyond traditional on-premises security methods. A secure access service edge (SASE) or security service edge (SSE) architecture combines networking and security-as-a-service functions into a single cloud-delivered service at the network edge. Forty-six percent of respondents say their organizations have implemented SASE, and of these respondents, 42 percent say their organizations engaged a third party or systems integrator to support the SASE or SSE implementation.

According to the findings, there is significant interest in Security Orchestration, Automation and Response (SOAR) adoption. SOAR seeks to alleviate the strain on IT teams by incorporating automated responses to a variety of events. Seventy-three percent of respondents say their organizations use SOAR significantly (38 percent) or moderately (35 percent).

Cybersecurity use cases for artificial intelligence (AI) and machine learning (ML) models are on the rise. An ML model in cybersecurity is a computational algorithm that uses statistical techniques to analyze and interpret data to make predictions or decisions related to security. Forty-four percent of respondents say their organizations use AI/ML to prevent cyberattacks; other uses are to maintain competitive advantage (49 percent of respondents) and to support the IT security team (40 percent of respondents). To ensure that AI/ML reduces cybersecurity risks and threats, respondents say they use AI vulnerability scanning (59 percent), an AI firewall (52 percent) and adversary TTP training for security staff (47 percent).

To read best practices of high performing organizations, and the rest of this report, download it from Optiv’s website.

Cybercrime adds a new, very dangerous twist — face-to-face meetings

Bob Sullivan

We often think of cybercrime as a long-distance nightmare.  A victim is manipulated by someone pretending to be a lover, or a boss, or a seller, and then sends that criminal money using some electronic, virtual method.  A really disturbing trend I’ve noticed recently is the increased frequency of in-person meetings as part of a cybercrime.  A criminal visits the victim to pick up cash, or even gold, at their home (like this story we did in March). A criminal sends an Uber delivery person to pick up a  “package” that contains fraudulent payments. A victim is lured into a meeting over a Facebook Marketplace purchase, then robbed. Or, in the case of a recent Perfect Scam podcast I worked on, a con artist lurks at a “zone of trust” place like a golf course or a church looking for generous people to target with a charity scam.

This in-person meeting trend is alarming because a lot more things can go wrong when criminals are in the same physical space as their victims. Earlier, I told you about the tragic story of an Ohio man who had been communicating with criminals attempting to commit a “grandparent scam” and shot an Uber driver who he said he believed was part of the scam; he has been indicted for murder and pleaded not guilty. The driver, who died, was not part of the scam.

Steve Baker, a longtime consumer advocate and former Federal Trade Commission lawyer, first pointed out this trend to me, and now I’m seeing it in many places. The Social Security Administration issued a dire-sounding warning a few weeks ago titled “Don’t Hand Off Cash to ‘Agents.’ ”   It reads:

“The Social Security Administration (SSA) Office of the Inspector General (OIG) is receiving alarming reports that criminals are impersonating SSA OIG agents and are requesting that their targets meet them in person to hand off cash. SSA OIG agents will never pick up money at your door or in any type of exchange. This is a SCAM!

“NEVER exchange money or funds of any kind with any individual stating they are an SSA OIG agent. This new scam trend introduces an element of physical danger to scams that never existed before.”

Meanwhile, police in New York are warning about a rise in crimes that begin as fake Facebook Marketplace ads — and end with victims staring down the barrel of a gun.

Why are cybercriminals getting this bold and meeting victims in person, or sending someone else to do that?  It’s too early to tell, but part of the reason *could* be increased transaction scrutiny at places like Zelle or cryptocurrency exchanges, along with increased fraud awareness around gift cards.  Time will tell.

In the meantime, I’m very concerned we will see more situations like that story from Ohio. Please be extra vigilant when speaking with loved ones about cybercrime.  Look and listen for signs of surprising new friends or unexpected meetings. Keep those lines of communication open.

 

2024 Global Study on Securing the Organization with Zero Trust, Encryption, Credential Management & HSMs

To stave off never-ending security exploits, organizations are investing in advanced technologies and processes. The purpose of this report, sponsored by Entrust, is to provide important information about the use of zero trust, encryption trends, credential management and HSMs to prepare for and prevent cyberattacks. The research also reveals what organizations believe to be the most significant threats. The top three are hackers, system or process malfunction and unmanaged certificates.

A second report will present the research findings on PKI and IoT, as well as how organizations are preparing to transition to post-quantum cryptography in order to mitigate the quantum threat. For both reports, Ponemon Institute surveyed 4,052 IT and IT security practitioners who are familiar with the use of these technologies in their organizations.

“With the rise of costly breaches and AI-generated deepfakes, synthetic identity fraud, ransomware gangs, and cyber warfare, the threat landscape is intensifying at an alarming rate,” said Samantha Mabey, Director, Solutions Marketing at Entrust. “This means that implementing a Zero Trust security practice is an urgent business imperative – and the security of organizations’ and their customers’ data, networks, and identities depends on it.”

The countries in this research are the United States (908 respondents), United Kingdom (458 respondents), Canada (473 respondents), Germany (582 respondents), UAE (355 respondents), Australia/New Zealand (274 respondents), Japan (334 respondents), Singapore (367 respondents) and Middle East (301 respondents).

Organizations are adopting zero trust because of cyber risk concerns. Zero trust is defined in this research as an evolving set of cybersecurity paradigms that move defenses from static, network-based perimeters to focus on users, assets and resources. It assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location or based on asset ownership. Sixty-two percent of respondents say their organizations have adopted zero trust at some level. However, only 18 percent of respondents have implemented all zero-trust principles.
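The shift this definition describes, from network location to users, devices and resources, can be made concrete with a toy policy check. This is only an illustrative sketch of the zero-trust idea; the `AccessRequest` fields and the rules below are assumptions for illustration, not any vendor's policy engine:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool    # strong (e.g., MFA) authentication succeeded
    device_compliant: bool      # device posture check passed
    on_corporate_network: bool  # deliberately ignored below -- that is the point
    resource_sensitivity: str   # "low" or "high"

def allow(req: AccessRequest) -> bool:
    """Zero-trust style decision: no implicit trust from network location.

    A perimeter model would return True whenever req.on_corporate_network
    is True; here, every request must prove user and device trust instead.
    """
    if not (req.user_authenticated and req.device_compliant):
        return False
    # High-sensitivity resources could require further checks; kept simple here.
    return True

# Being on the corporate network is not sufficient...
print(allow(AccessRequest(False, False, True, "high")))
# ...and being off it is not disqualifying.
print(allow(AccessRequest(True, True, False, "high")))
```

The design choice the sketch highlights is that `on_corporate_network` never appears in the decision, which is exactly what "no implicit trust granted based solely on physical or network location" means in practice.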

In the survey, 67 percent of respondents say the most important drivers to implementing a zero-trust strategy are the risk of a data breach and/or other security incidents (37 percent) and the expanding attack surface (30 percent).

Following are the most salient findings from this year’s research:

The slow but growing adoption of zero trust

  • As evidence of the importance of zero trust to secure the organization, 57 percent of respondents that have or will implement zero trust say their organizations will include zero trust in their encryption plans or strategies. Sixty-two percent of respondents say their organizations have implemented all zero-trust principles (18 percent), implemented some zero-trust principles (12 percent), laid the foundation for a zero-trust strategy (14 percent) or started exploring various solutions to help implement a zero-trust strategy (18 percent). According to the research, a lack of in-house expertise is slowing adoption.
  • Senior leaders are supporting an enterprise-wide zero-trust strategy. Fifty-nine percent of respondents say their leadership has significant or very significant support for zero trust. As evidence of senior leadership’s support, only 37 percent of respondents say lack of leadership buy-in is a challenge. The biggest challenges when implementing zero trust are lack of in-house expertise (47 percent of respondents) or lack of budget (40 percent of respondents). 
  • Securing identities is the highest priority for a zero-trust strategy. Respondents were asked to select the one area that has the highest priority for their zero-trust strategy. The risk areas are identities, devices, networks, applications and data. Forty percent of respondents say identities and 24 percent of respondents say devices are the priorities. 
  • Best-of-breed solutions are most important for a successful zero-trust strategy (44 percent of respondents). This is followed by an integrated solution ecosystem from one to three vendors (22 percent of respondents). 

Trends in encryption and encryption in the public cloud: 2019 to 2024 

  • Hackers are becoming more of a threat to sensitive and confidential data. Organizations need to make the hacker threat an important part of their security strategies. Since the last report, the share of respondents citing hackers as the biggest concern in protecting sensitive and confidential information has increased significantly, from 29 percent to 46 percent.
  • Management of keys and enforcement of policy continue to be the most important features in encryption solutions. Respondents were asked to rate the importance of certain features in encryption solutions. The most important features are management of keys, enforcement of policy and system performance and latency. 
  • Since 2019, organizations have been steadily transferring sensitive and confidential data to public clouds, whether or not it is encrypted or made unreadable via some other mechanism. In this year’s study, 80 percent of respondents say their organizations currently transfer such data (52 percent) or are likely to do so in the next 12 to 24 months (28 percent).
  • Encryption performed on-premises prior to sending data to the cloud, using organizations’ own keys, has declined significantly since 2019. The main methods for protecting data at rest in the cloud are using keys generated and managed by the cloud provider (39 percent of respondents) or performing encryption in the cloud using keys the organization generates and manages on-premises. Only 23 percent of respondents say encryption is performed on-premises before the data is sent to the cloud.
  • There has been a significant decrease in organizations only using keys controlled by their organization (from 42 percent to 22 percent of respondents). Instead, the primary strategy for encrypting data at rest in the cloud is the use of a combination of keys controlled by their organization and by the cloud provider, with a preference for keys controlled by their organization, a significant increase from 19 percent of respondents to 32 percent of respondents in 2024. This is followed by only using keys controlled by the cloud provider (24 percent of respondents). 
  • The importance of privileged user access controls has increased significantly. Respondents were asked to rate the importance of cloud encryption features on a scale of 1 = not important to 5 = most important. Privileged user access controls increased from 3.23 in 2022 to 4.38 in 2024 on the 5-point scale. The importance of granular access controls and the ability to encrypt and rekey data while in use without downtime also increased significantly. 

Trends in credential management and HSMs: 2019 to 2024 

  • Lack of skilled personnel and no clear ownership makes the management of credentials painful. Fifty-nine percent of respondents say managing keys has a severe impact on their organizations. There are interesting trends in what causes the pain since 2019. The lack of skilled personnel (50 percent of respondents) and no clear ownership (47 percent of respondents) continue to make credential management difficult. Insufficient personnel increased from 34 percent to 46 percent of respondents. Not causing as much pain are the inadequacy of key management tools (from 52 percent to 32 percent) and systems are isolated and fragmented (from 46 percent to 29 percent). 
  • Many types of keys are getting less painful to manage. Between 2019 and 2024, the keys that have become less painful to manage include those for external cloud or hosted services, including Bring Your Own Key (from 54 percent to 22 percent of respondents), SSH keys (from 57 percent to 27 percent of respondents) and signing keys (e.g., code signing, digital signatures) (from 52 percent to 25 percent of respondents).
  • Management of credentials is challenging because it is harder to consistently apply security policies over credentials used across multi-cloud and cross-cloud environments. Fifty-five percent of respondents say the management of credentials is becoming more challenging in a multi-cloud and cross-cloud environment. Thirty-six percent of respondents say it is due to the difficulty of consistently applying security policies over credentials used across cloud services, followed by it being harder to have visibility over credentials that protect and enable access to critical data and applications (33 percent of respondents). The applications that require the use of credential management across cloud-based deployments are mainly KMIP-compliant applications (44 percent of respondents) and databases, back-up and storage (43 percent of respondents).
  • More organizations are using Hardware Security Modules (HSMs). An HSM is a dedicated crypto processor specifically designed to protect the crypto key lifecycle. Since 2019, the use of HSMs has increased from 47 percent to 55 percent of respondents.
  • Organizations value the use of HSMs. Since 2019, organizations have been increasing the use of HSMs as part of their encryption and credential management strategies. The use of application-level encryption, database encryption and TLS/SSL has increased significantly. For the first time, respondents were asked where HSMs are deployed. Most are deployed as online root, offline root and issuing CAs.

You can download a full copy of the report at Entrust’s website.

They’re finding dead bodies outside scam call centers; it’s time to sound the alarm on fraud

Bob Sullivan

“The cartel just very quickly, easily, and efficiently made an example of them by leaving their body parts in 48 bags outside of the city….They’re good at making high profile, gruesome examples of those who would defy them.”

I’ve spent many years writing about Internet crime, so I don’t spook easily.  After working on this week’s podcast, I’m spooked.

For the last year or two, I’ve had a gathering sense of doom about the computer crime landscape. I hear about scams constantly, but something has seemed different lately. The dollar figures seem higher, the criminals more relentless, the cover stories far more sophisticated. Thanks to fresh reporting and statistics, I am now fairly certain I’m not being paranoid. Increasingly, Internet scams are being run by organized crime groups that combine the dark side of street gangs with Fortune 500 sales tactics. I will share numbers in a moment, but stories are always needed to make a point this important, and that’s why we bring you “James’ ” harrowing tale this week. He wanted to sell an old, useless timeshare, but instead had $900,000 stolen from him — by the Jalisco New Generation Cartel in Mexico. That same group was blamed for murdering call center workers and spreading their body parts around Jalisco.

This episode offers a rare chance to hear a scam in action. James recorded some of his calls with criminals and shared them with us. ‘Show me, don’t tell me’ is the oldest advice in storytelling, and that’s why I really hope you’ll listen. As you hear criminals who go by the names “Michael” and “Jesus” badger and manipulate James, your skin will crawl, as mine did. But I hope it will also place a memory deep in your limbic brain, so when you inevitably find yourself on the phone with such a criminal one day, an autonomic defensive reaction will kick in.

For this episode, I also spoke with a remarkable journalist named Steve Fisher. An American from rural Pennsylvania, he worked on farms as a youth, learned a lot about the plight of migrant workers, and that led him to take a post as an investigative journalist in Mexico City covering crime gangs. Fisher recently wrote about a victim like James who thought he was unloading an unwanted timeshare, but instead had $1.8 million stolen during a decade of interactions with the cartel. In that victim’s case, about 150 different cartel “workers” interacted with him. I can’t begin to stress how vast this conspiracy is — how detailed the cartel’s record-keeping must be — in order to carry on this kind of ongoing crime. As we point out in the story, timeshare scams have become so profitable that this dangerous Mexican cartel is trading in drug-running operations for call centers. The gruesome methods of control remain, however.

This model is replicated around the world.  From India, there are tech support scams. From Jamaica, we get sweepstakes fraud. From Southeast Asia, cryptocurrency scams.  From Africa, romance scams.

(Of course, there are scams operated in the U.S., too, but those criminals don’t enjoy the natural protection that international boundaries and jurisdictional challenges provide).

I know most of us imagine scam criminals sitting in dark, smoke-filled boiler rooms, placing hundreds of calls every day and desperately hunting for single victims. That’s not how it works anymore. Scam call center “employees” work in cubicles (though in some cases, they are victims of human trafficking). They have fine-tuned software; they work from lead lists; they have well-researched sales scripts; they have formalized training. And they succeed, very often.

The numbers bear this out. Theft through fraud has surged over the last five years, with losses jumping from $2.4 billion in 2019 to more than $10 billion in 2023. Of course, many scam losses are never reported, so the real number could easily be four or five times that.

But I trust my own ears, and you should too.  Recently, I have heard a pile of stories from victims in my larger social circle.  Plenty of near misses — friends who tell me they got a call from a “sheriff” about an arrest summons that was so believable they were driving to a bitcoin ATM before something triggered skepticism.  Unfortunately, I hear plenty of heartbreaking stories too, of people who bought gift cards or sent crypto before that skepticism kicked in.

I want you to listen for these stories in your life. Look for them in your social feeds, ask for them at family parties. I bet you leave this exercise just as concerned as I am. Scams have become big businesses, operated by large, sophisticated crime gangs all over the world. It’s time to talk with your friends and family about this.

We can’t educate our way out of this problem. There’s a lot more that U.S. financial institutions, regulators, and law enforcement can do to slow the massive growth of fraud. But at the moment, you are the best defense for yourself and the people you love.

To that end, I would like to suggest you listen to this week’s episode and share it with people you care about. I can tell you that scams are up, and that criminals are so persuasive anyone can be vulnerable. But there is nothing like hearing it for yourself.

Be careful out there.

You can listen to part 1 of our series by clicking here. And part two is at this link.

The ‘protected health information’ crisis in healthcare

The PHI crisis in healthcare is putting patient safety and privacy at risk. Healthcare organizations represented in this research experienced an average of 74 cyberattacks in the past two years, and almost half of respondents (47 percent) say these cyberattacks resulted in the loss, theft or breach of PHI. Over the past two years, the cost to detect, respond to and remediate PHI cyberattacks was $2.6 million, and another $1.6 million was spent on staff, paralegals and technologies to determine the cost to patients.

Protected health information (PHI) is any information in the medical record or designated record set that can be used to identify an individual and was created, used, or disclosed when providing a health care service such as diagnosis or treatment.

The purpose of this research, sponsored by Tausight and independently conducted by the Ponemon Institute, is to understand the challenges healthcare organizations face in securing PHI data. Ponemon Institute surveyed 551 US IT and IT security practitioners who are in the following healthcare organizations: hospitals (37 percent of respondents), healthcare service providers (23 percent of respondents), clinics (21 percent of respondents) and healthcare systems (19 percent of respondents). The primary responsibilities of respondents are managing IT and IT security budgets, assessing cyber risks to PHI, setting IT or IT security priorities and selecting vendors and contractors.

Healthcare organizations’ ability to protect patient PHI is in critical condition. Organizations are losing control of the risk because they lack visibility into the enormous amount of PHI outside EHR. There are two serious root causes of the PHI crisis. According to 58 percent of respondents, their organizations are unable to determine how much PHI exists outside of EHR, where it is and how it is being accessed. And 55 percent of respondents say their organizations are at risk because of the excessive presence of PHI across their data centers, endpoints and email accounts. On average, organizations have 30,030 network-connected devices.

Findings that illustrate the PHI crisis in healthcare 

  • Organizations lack the budget to invest in PHI protection technologies (52 percent of respondents) and the necessary expertise to manage those technologies (48 percent of respondents). 
  • Current legacy technologies have difficulty protecting the enormous amounts of PHI across organizations’ systems (66 percent of respondents) and identifying PHI on servers and endpoints to understand what to put in secure storage (69 percent of respondents).
  • Migration to the cloud and collaboration tools have increased risks to PHI (52 percent of respondents).
  • The level of security risk to PHI created by remote care and accessing or transmission of PHI outside the firewall is very high, according to 57 percent of respondents.
  • Current technologies are not improving visibility into PHI outside EHR. As a result, only 39 percent of respondents say their organizations have a high ability to detect and classify unstructured data and only 47 percent of respondents say their organizations have a high ability to detect and classify structured data wherever they exist throughout the expanding digital environment.
  • Only 30 percent of respondents say their organizations have significant visibility into PHI located in the data center and endpoints where it is exchanged between doctors’ and patients’ systems or applications.
  • Most organizations say DLP and DSP software are not effective in improving visibility into PHI on endpoints, networks and in the cloud and providing visibility into data movement of PHI.
  • Once organizations have a PHI data breach, 71 percent of respondents say it is very difficult to assess how many patients were affected by the breach, and almost half of respondents (47 percent) say their organizations are likely to overreport the number of patients affected because of the difficulty in determining the device or server that was compromised.
  • The negative consequences of a PHI data breach are exacerbated because it can take an average of more than two months to recover, remediate and assess the impact to PHI and to be able to disclose the breach and notify affected patients.
  • Insiders put PHI data at risk. The most frequent types of insider negligence are accessing PHI on uncontrolled devices and accessing hyper-connected endpoints on networks with varying IT security standards. Other frequent incidents are sending emails with unencrypted PHI and moving PHI to an unknown USB drive, where the data is lost.

Click here to watch a webinar about these findings with Larry Ponemon and David Ting, CTO and co-founder of Tausight, which helps healthcare organizations protect data.

When fraud turns fatal — Uber driver shot after ‘grandparent scam’ call

Bob Sullivan

When consumers and criminals interact, you just never know how combustible a situation can become. A recent story out of Ohio is a reminder that any scam can get very serious and lead to devastating consequences.

An Ohio man who had been communicating with criminals attempting a “grandparent scam” shot and killed an Uber driver he believed was part of the scheme; he has been indicted for murder and pleaded not guilty.

Police say 81-year-old Michael Brock told them he had spent hours talking on the phone with someone who claimed that his nephew was in jail and needed bail money. Brock allegedly believed that Lo-Letha Hall, 61, had come to his house to pick up the money. He accused her of being part of the scam, and when she tried to leave, he fatally shot her.

Local news reports indicate Hall was an Uber driver simply picking up a package for what she thought was a normal delivery.

“Upon being contacted by Ms. Hall, Mr. Brock produced a gun and held her at gunpoint, making demands for identities of the subjects he had spoken with on the phone,” the sheriff’s office said, according to the Associated Press. Hall was unarmed and unthreatening, the sheriff’s office alleges in that story. A video posted on a local news site shows her walking away from Brock as he threatens her with a gun.

“I’m sure glad to see you guys out here because I’ve been on this phone for a couple hours with this guy trying to say to me I had a nephew in jail and had a wreck in Charleston and just kept hanging on and needing bond money,” Brock said to police, according to the Associated Press. “And this woman was supposed to get it.”

According to a memorial page set up for Hall, she was retired.

Whenever I speak in front of cybersecurity and fraud groups, I try to remind them how important their work is. There are plenty of reasons to take cybersecurity and financial fraud seriously — even crimes that might seem like common thefts can turn very serious, or be part of wider conspiracies. Even though it can feel exhausting and at times fruitless, all of us must continue the fight against scams and cybercrime.

Surprise! Here’s another attempt at a federal privacy law. Does it have a chance?

Bob Sullivan

Just when you think Congress isn’t going to be able to do anything constructive for the rest of this presidential election year, out comes a bipartisan proposal called the American Privacy Rights Act of 2024. Its name is delightfully simple; it arrives after decades of failed attempts to create federal protections for American privacy rights.  Given that its authors, Rep. Cathy McMorris Rodgers, R-Wash., and Sen. Maria Cantwell, D-Wash., chair committees important to its passage, the proposed legislation seems to have a chance. It’ll get a hearing almost immediately; Rodgers has scheduled discussion for April 17 in the House Committee on Energy and Commerce.

The proposal attempts a tightrope act on controversial elements that have stalled previous federal law efforts, namely state law pre-emption and consumers’ ability to sue bad actors, known as a private right of action.  One at a time:

Corporations have long demanded pre-emption, arguing that they shouldn’t have to deal with 50 different state standards when crafting policies. That argument has grown increasingly tenuous as time has passed and statehouse after statehouse has done the work to pass its own privacy law. (Check out this IAPP tracker.) The count sits at 15 now, with more coming soon. Those state lawmakers never look kindly on their hard work being discarded by Congress.  As a compromise, APRA would pre-empt state laws except when it wouldn’t.  There are exceptions for consumer protection rules, for example.

Corporations and some conservatives also dislike the private right of action previous laws have included.  Here’s why: The Telephone Consumer Protection Act allows consumers who receive unwanted calls to sue for $500 a pop under certain circumstances.  That’s a powerful incentive to prevent spam phone calls. Sometimes, it can be abused by class action lawyers. The alternative is to leave enforcement to state attorneys general or other government agencies, which aren’t always responsive to consumer complaints.  APRA would create a TCPA-like right, and even set aside mandatory arbitration clauses for cases that involve minors or a “significant privacy harm.” You’d have to expect this provision will get major pushback. Without citing it specifically, Sen. Ted Cruz (R-Texas) issued a statement saying, in part, “I cannot support any data privacy bill that empowers trial lawyers.”

So, roadblocks remain. Still, the arrival of the American Privacy Rights Act is a welcome surprise and…who knows? Perhaps Congress will take a step into the 21st century.

For more reading, I found “10 Things to Know about APRA” at Wiley.com very helpful, as was this analysis by IAPP.

CLICK HERE to read the American Privacy Rights Act discussion draft.

CLICK HERE to read the section-by-section of the discussion draft.

The 2024 Study on the State of AI in Cybersecurity

Sponsored by MixMode, the purpose of this research is to understand the value of artificial intelligence (AI) in strengthening an organization’s security posture and how organizations are integrating these technologies into their security infrastructure. A challenge organizations face is overcoming AI’s tendency to increase the complexity of their security architecture.

Ponemon Institute surveyed 641 IT and security practitioners in organizations that are at some stage of AI adoption. All respondents are involved in detecting and responding to potentially malicious content or threats targeting their organization’s information systems or IT security infrastructure. They also have some level of responsibility for evaluating and/or selecting AI-based cybersecurity tools and vendors.

AI improves IT security staff’s productivity and the ability to detect previously undetectable threats — 66 percent of respondents believe the deployment of AI-based security technologies will increase the productivity of IT security personnel. Given the oft-cited problem of a shortage of IT security expertise, this can be an important benefit. According to the research, an average of 34 security personnel are dedicated to the investigation and containment of cyber threats and exploits. About half of respondents (49 percent) say dedicated security personnel have specialized skills relating to the supervision of AI tools and technologies.

Sixty-six percent of respondents say their organizations are using AI to detect attacks across cloud, on-premises and hybrid environments. On average, organizations receive 22,111 security alerts per week; an average of 51 percent of these alerts can be handled by AI without human supervision, and an average of 35 percent are investigated. An average of 9,854 false positives are generated by security tools in a typical week, and an average of 12,009 unknown threats go undetected. Seventy percent of respondents say AI is highly effective in detecting previously undetectable threats.

“AI is a game-changer for cybersecurity, as it can automate and augment the detection and response capabilities of security teams, as well as reduce the noise and complexity of security operations,” said John Keister, CEO of MixMode. “However, AI also poses new challenges and risks, such as the threat of AI being used for adversarial attacks and the need for specialized operator skills. MixMode understands the complexity of AI and delivers automated capabilities to revolutionize the cybersecurity landscape through our patented self-learning algorithm that can detect threats and anomalies in real-time at high speed and scale. This helps enterprises rapidly recognize new malware and insider risks to help strained security teams automate mundane tasks and focus on higher-level defenses.”

Following are the findings that describe the value of AI and the challenges when leveraging AI to detect and respond to cyberattacks.

 The value of AI

AI is valuable when used in threat intelligence and for threat detection. In threat intelligence, AI is mainly used to track indicators of suspicious hostnames, IP addresses and file hashes (65 percent of respondents). In threat detection, it creates rules based on known patterns and indicators of cyber threats (67 percent of respondents). Other primary uses in threat intelligence are the results of cybercrime investigations and prosecutions (60 percent of respondents) and Tactics, Techniques & Procedures (TTP) reports. TTPs are used to analyze an APT’s operation or as a means of profiling a certain threat actor.

The benefits of AI include an improved ability to prioritize threats and vulnerabilities and the identification of application security vulnerabilities. Fifty percent of respondents say their security posture improves because they are better able to prioritize threats and vulnerabilities, and 46 percent of respondents say AI identifies application security vulnerabilities.

Most AI adoption is at an early stage of maturity, but organizations are already reaping benefits. Fifty-three percent of respondents say their organizations’ use of AI is at the early stage of adoption. That means the AI strategy is defined and investments are planned and partially deployed. Only 18 percent of respondents say AI in cybersecurity activities is fully deployed and security risks are assessed. In those organizations, effectiveness is measured with KPIs and C-level executives are regularly updated about how AI is preventing and reducing cyberattacks.

AI effectiveness is measured by the financial benefits these technologies deliver. Sixty-three percent of respondents say their organizations measure the decrease in the cost of cybersecurity operations, 55 percent of respondents say they measure increases in revenue and 52 percent of respondents say they measure productivity increases in the SOC team’s ability to detect and respond to threats.

Defensive AI is critical to protecting organizations from cyber criminals using AI. Fifty-eight percent of respondents say their organizations are investing in AI to stop AI-driven cybercrimes. Sixty-nine percent of respondents say defensive AI is important to stopping cybercriminals’ ability to direct targeted attacks at unprecedented speed and scale while going undetected by traditional, rule-based detection. The basic rules model refers to a traditional approach where predefined rules and signatures are used to analyze and detect threats.

 The challenges of AI deployment

 Difficulty in applying AI-based controls that span across the entire enterprise and interoperability issues are the two main barriers to effectiveness. Sixty-one percent of respondents say a deterrent to AI effectiveness is the inability to apply AI-based controls that span across the entire enterprise and 60 percent of respondents say there are interoperability issues among AI technologies.

AI adoption also complicates the integration of AI-based security technologies with legacy systems (65 percent of respondents) and requires simplification of the security architecture to obtain maximum value from AI-based security technologies (64 percent of respondents). Seventy-one percent of respondents say digitization is a prerequisite and critical enabler for deriving value from AI.

Organizations are struggling to identify areas where AI would create the most value. Only 44 percent of respondents say they can accurately pinpoint where to deploy AI. However, 62 percent of respondents say their organizations have a high ability to identify where machine learning would add the most value. A machine learning (ML) model in cybersecurity is a computational algorithm that uses statistical techniques to analyze and interpret data to make predictions or decisions related to security.

Eighty-one percent of respondents say generative AI is highly important. Generative AI refers to a category of AI algorithms that generates new outputs based on the large language models they have been trained on. Sixty-nine percent of respondents say it is highly important to integrate advanced analysis methods, including machine learning and/or behavioral analytics.

Outside expertise is needed to obtain value from AI. Fifty-four percent of respondents say their organizations need outside expertise to maximize the value of AI-based security technologies. Fifty percent of respondents say a reason to invest in AI is to make up for the shortage in cybersecurity expertise.

The lack of budget and internal expertise are barriers to getting value from AI. The top two reasons organizations are not leveraging AI are insufficient budget for AI-based technologies (56 percent of respondents) and the lack of internal expertise to validate vendors’ claims (53 percent of respondents). Currently, 60 percent of respondents say 25 percent or less of the average annual total IT security budget of $28 million is allocated to AI and ML investments. Forty-two percent of respondents say there is not enough time to integrate AI-based technologies into security workflows.

Many employees outside of IT and IT security distrust decisions made by AI. Fifty-six percent of respondents say it is very difficult to get rank-and-file employees to trust decisions made by AI. Slightly more than half (52 percent of respondents) say it is very difficult to safeguard confidential and personal data used in AI. Forty-six percent of respondents say it is very difficult to comply with privacy and security regulations and mandates. As the deployment of AI matures, organizations may become more confident in their ability to safeguard personal data and comply with privacy and security regulations and mandates.

Despite AI using personal data, few organizations have privacy policies applied to AI. Sixty-five percent of respondents say confidential consumer data is used by their organizations’ AI. However, only 38 percent of respondents say their organizations have privacy policies specifically for the use of AI. If they do have policies, 44 percent of respondents say they conduct regular privacy impact assessments and 41 percent of respondents say their organizations appoint a privacy officer to oversee the governance of AI privacy policies. Only 25 percent of respondents say they work with vendors to ensure “privacy by design” in the use of AI technologies.

Organizations are at risk because consumer data is often used by organizations’ AI. According to the research, organizations are having difficulties in safeguarding confidential data. However, 65 percent of respondents say consumer data is being used by their organizations’ AI. Without having the needed safeguards in place, consumer data is at risk of being breached. Seventy percent of respondents say analytics are used by AI.

Organizations should take a unified approach with an organizational task force to manage risk. According to the findings, few organizations have an enterprise-wide strategy for understanding where AI adds value. Less than half of respondents (49 percent) have an organizational task force to manage AI risk and only 37 percent of respondents have one unified approach to managing both AI and privacy security risks.

To read the rest of this study, visit MixMode.com

 

Ill-conceived TikTok ban is a missed opportunity; a real privacy law would be far superior

Bob Sullivan

In an age when the U.S. House of Representatives can’t really do anything, it has voted to ban TikTok under its current ownership structure. Color me unimpressed, and more than a bit concerned. I’d say the fact that Congress’ lower house was able to pass this legislation should give everyone pause.

Of course TikTok poses a national risk. So do other platforms, which is why it’s time for Congress to pass comprehensive privacy and national security legislation that forces all platforms to handle personal data with care. But that’s…hard.  Grandstanding that you’ve been tough on TikTok is easy, so that’s what we’re getting.

TikTok’s owner, ByteDance, has been given the choice to divest itself of the popular social media service. I hope you’ll think it’s strange that Congress can force such a sale… of a single company. But setting that aside for the moment, it’s hard to imagine that doing so will solve the problem everyone seems to agree exists: that the Chinese government can access TikTok’s intimate data about its users.  Would a sale really stop that? And what of China’s ability to buy such data from any one of hundreds of data brokers in the U.S. who are willing to sell such information to the highest foreign bidder? (Duke University has tested this theory.)

And as for Chinese ownership, it must be asked, why stop with TikTok? Anyone who’s been online in the past six months has seen the near-ubiquitous ads for a shopping service named Temu and its “shop like a billionaire” tagline. That’s because this Chinese-owned firm has spent billions of dollars advertising with companies like Meta/Facebook.  But Temu has been sued for, essentially, loading its software up with spyware.  You should read this analysis for yourself, but here’s a highlight: “The app has hidden functions that allow for extensive data exfiltration unbeknown to users, potentially giving bad actors full access to almost all data on customers’ mobile devices.”

A well-written law would stop a company from doing what Temu is (accused of) doing before it starts, or make it very easy to shut it down. Instead, Congress is doing something so arbitrary that it has made TikTok into a sympathetic character, which would have seemed impossible a few months ago.

Hopefully, cooler heads will prevail in the Senate, and a better law can emerge from this rare moment of focus on data privacy. If not, I fear this legislation could delay passage of a real, comprehensive federal privacy law.   Congress could mistakenly believe it has solved a problem and turn its meager attention in other directions; and the law could backfire so badly that lobbyists could point to it for years as proof no law should ever be passed that limits big tech’s powers.

I found Alex Stamos’ appearances on NBC networks to be informative; you can watch them here.