Monthly Archives: October 2024

The dark deepfake version I’m most worried about

Bob Sullivan

Everyone should be concerned about deepfakes and voice cloning; what’s difficult is deciding how concerned to be.  When it comes to the use of AI in crime, I think we have a 5-alarm fire on our hands.

I just came back from a week of talks at the University of Georgia journalism school, and I can tell you students there are very worried about the use of artificial intelligence to generate fake news.  I’m less worried about that, perhaps betraying naivete on my part. This is the second presidential election cycle in which much has been made of the potential to create videos of candidates saying things they’d never say; so far, there are no high-profile examples of this swaying an electorate. There was a high-profile cloning of then-candidate Joe Biden’s voice during the New Hampshire primary, when an operative paid for robocalls designed to suppress the vote (he later said he did it just to make a point).  That fake call was exposed pretty quickly.

We can all imagine a fake video that a candidate’s opponents might want to spread, but my sense is that such a deepfake wouldn’t persuade anyone to change their vote — it would merely reinforce an existing opinion.  I could be wrong; in an election this close, a few votes could make all the difference.  But there are far easier ways to suppress votes — such as long voting lines — and those should get at least as much attention as deepfakes.

I am far more concerned about mundane-sounding AI attacks. Research I’ve done lately confirms what I have long feared — AI will be a boon for scam criminals. Imagine a crime call center staffed by robots armed with generative conversational skills.  AI bot armies really can replace front-line scam call center operators, and can be more effective at finding targets.  They don’t get tired, they have no morals, and perhaps most significantly — they hallucinate.  That means they will change their story randomly (say, from ‘your child’s been kidnapped’ to ‘your child is on the operating table’), and when they hit on a story that works, they’ll stick with it. This might allow a kind of dastardly evolution at a pace we’ve not seen before.  And while voters might see through attempts to put words in the mouths of presidential candidates, a hysterical parent probably won’t detect a realistic-sounding imitation of their kid after a car accident.

As with all new tech, we risk blaming too much fraud and scandal on the gadgets, without acknowledging these very same schemes have always been around.  Tech is a tool, and tools can always be used for both good and bad.  The idea of scaling up crime should concern everyone, however.  Think about spam. It’s always been a numbers game. It’s always been an economic battle.  There’s no way to end spam.  But if you make spam so costly for criminals that the return on investment drops dramatically — if spammers make $1 from every 100,000 emails, rather than $1 from every 1,000 emails — criminals move on.
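The economics above can be sketched in a few lines. This is a minimal illustration using the hypothetical figures from the paragraph; the per-email sending cost is an assumed number, not data from the column:

```python
def roi_per_email(revenue_per_hit, emails_per_hit, cost_per_email):
    """Expected profit per email sent: revenue from one successful hit,
    spread over the volume needed to land that hit, minus sending cost."""
    return revenue_per_hit / emails_per_hit - cost_per_email

# Hypothetical numbers from the text: $1 earned per 1,000 emails today,
# vs. $1 per 100,000 if defenses raise the cost of reaching a victim.
# The $0.0005 per-email cost is an assumption for illustration only.
print(roi_per_email(1.0, 1_000, 0.0005))    # positive: spam stays profitable
print(roi_per_email(1.0, 100_000, 0.0005))  # negative: criminals move on
```

The point is not the specific numbers but the sign flip: once expected profit per message goes negative, the rational move is to abandon the channel.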

That’s why any tech which lets criminals scale up quickly is so concerning.  Criminals spend their time looking for their version of a hot lead — a victim who has been sent to a heightened emotional state so they can be manipulated.  Front-line crime call center employees act as filtering mechanisms. Once they get a victim on the line and show that person can be manipulated into performing a small task, like buying a $50 gift card or sharing personal information, these “leads” are passed on to high-level closers who escalate the crime.  This process can take months, or longer.  Romance scam criminals occasionally groom victims for years. Now, imagine AI bots performing these front-line tasks.  They wouldn’t have to be perfect. They’d just have to succeed at a higher rate than today’s callers, who are often victims of human trafficking working against their will.

This is the dark deepfake future that I’m most worried about.  Tech companies must lead on this issue. Those who make AI tools must game out their dark uses before they are released to the world.  There’s just too much at stake.

The 2024 Study on Cyber Insecurity in Healthcare: The Cost and Impact on Patient Safety and Care

An effective cybersecurity approach centered around stopping human-targeted attacks is crucial for healthcare institutions, not just to protect confidential patient data but also to ensure the highest quality of medical care.

This third annual report was conducted to determine if the healthcare industry is making progress in reducing human-centric cybersecurity risks and disruptions to patient care. With sponsorship from Proofpoint, Ponemon Institute surveyed 648 IT and IT security practitioners in healthcare organizations who are responsible for cybersecurity strategy, including setting IT security priorities, managing budgets and selecting vendors and contractors.

According to the research, 92 percent of organizations surveyed experienced at least one cyberattack in the past 12 months, an increase from 88 percent in 2023. For organizations in that group, the average number of cyberattacks was 40. We asked respondents to estimate the single most expensive cyberattack experienced in the past 12 months, from a range of less than $10,000 to more than $25 million. Based on the responses, the average total cost of the most expensive cyberattack was $4,740,000, a 5 percent decrease from last year. This included all direct cash outlays, direct labor expenditures, indirect labor costs, overhead costs and lost business opportunities.

At an average cost of $1.47 million, disruption to normal healthcare operations because of system availability problems continues to be the most expensive consequence of a cyberattack, a 13 percent increase from an average of $1.3 million in 2023. Users’ idle time and lost productivity because of downtime or system performance delays decreased from $1.1 million in 2023 to $995,484 in 2024. The cost of the time required to correct the impact on patient care also decreased from an average of $1 million in 2023 to $853,272 in 2024.

Cloud/account compromise. The most frequent attacks in healthcare are against the cloud, making it the top cybersecurity threat for the third consecutive year. Sixty-three percent of respondents say their organizations are vulnerable or highly vulnerable to a cloud/account compromise. Sixty-nine percent say their organizations have experienced a cloud/account compromise. In the past two years, organizations in this group experienced an average of 20 cloud compromises.

Supply chain attacks. Organizations are vulnerable or highly vulnerable to a supply chain attack, according to 60 percent of respondents. Sixty-eight percent say their organizations experienced an average of four attacks against their supply chains in the past two years.

Ransomware. Ransomware remains an ever-present threat to healthcare organizations, even though concerns about it have declined. Fifty-four percent of respondents believe their organizations are vulnerable or highly vulnerable to a ransomware attack, a decline from 64 percent. In the past two years, organizations that had ransomware attacks (59 percent of respondents) experienced an average of four such attacks. While fewer organizations paid the ransom (36 percent in 2024 vs. 40 percent in 2023), the ransom paid spiked 10 percent to an average of $1,099,200 compared to $995,450 in the previous year.
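As a sanity check on the report’s year-over-year figures, the “spiked 10 percent” claim follows directly from the two averages quoted above. This is a minimal sketch; `pct_change` is an illustrative helper, not something from the report:

```python
def pct_change(new, old):
    """Year-over-year change, expressed as a percentage of the old value."""
    return (new - old) / old * 100

# Average ransom paid, from the report: $1,099,200 in 2024 vs. $995,450 in 2023.
print(f"{pct_change(1_099_200, 995_450):.1f}%")  # rounds to the reported ~10 percent
```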

Business email compromise (BEC)/spoofing/impersonation. Concerns about BEC/spoofing/impersonation attacks have decreased. Fifty-two percent of respondents say their organizations are vulnerable or highly vulnerable to a BEC/spoofing/impersonation incident, a decrease from 61 percent in 2023. Fifty-seven percent of respondents say their organizations experienced an average of four attacks in the past two years.

Cyberattacks can cause poor patient outcomes due to delays in procedures and tests.
As in the previous report, an important part of the research is the connection between cyberattacks and patient safety. Among the organizations that experienced the four types of cyberattacks in the study, an average of 69 percent report disruption to patient care.

Specifically, an average of 56 percent report poor patient outcomes due to delays in procedures and tests, an average of 53 percent saw an increase in medical procedure complications and an average of 28 percent say patient mortality rates increased, a 21 percent spike over last year.

The following are additional trends in how cyberattacks have affected patient safety and patient care delivery. 

  • Supply chain attacks are most likely to affect patient care. Sixty-eight percent of respondents say their organizations had an attack against their supply chains. Of this group, 82 percent say it disrupted patient care, an increase from 77 percent in 2023. Patients were primarily impacted by an increase in complications from medical procedures (51 percent) and delays in procedures and tests that resulted in poor outcomes (48 percent).
  • A BEC/spoofing/impersonation attack causes delays in procedures and tests. Fifty-seven percent of respondents say their organizations experienced a BEC/spoofing/impersonation incident. Of these respondents, 65 percent say a BEC/spoofing/impersonation attack disrupted patient care. Sixty-nine percent say the consequences caused delays in procedures and tests that have resulted in poor outcomes and 57 percent say it increased complications from medical procedures.
  • Ransomware attacks cause delays in patient care. Fifty-nine percent of respondents say their organizations experienced a ransomware attack. Of this group, 70 percent say ransomware attacks had a negative impact on patient care. Sixty-one percent say patient care was affected by delays in procedures and tests that resulted in poor outcomes and 58 percent say it resulted in longer lengths of stay, which affects organizations’ ability to care for patients.
  • Cloud/account compromises are least likely to disrupt patient care. Sixty-nine percent of respondents say their organizations experienced a cloud/account compromise. In this year’s study, 57 percent say the cloud/account compromises resulted in a disruption in patient care operations, an increase from 49 percent in 2023. Fifty-six percent of respondents say cloud/account compromises increased complications from medical procedures and 52 percent say it resulted in a longer length of stay. 
  • Data loss or exfiltration has had an impact on patient mortality. Ninety-two percent of organizations had at least two data loss incidents involving sensitive and confidential healthcare data in the past two years. On average, organizations experienced 20 such incidents in the past two years. Fifty-one percent say the data loss or exfiltration resulted in a disruption in patient care. Of these respondents, 50 percent say it increased the mortality rate and 37 percent say it caused delays in procedures and tests that resulted in poor outcomes. 

Other key trends 

Employee negligence because of not following policies caused data loss or exfiltration. The top root cause of data loss or exfiltration, according to 31 percent of respondents, was employee negligence. Such policies include employees’ responsibility to safeguard sensitive and confidential information and the practices they need to follow. As shown in the research, more than half of respondents (52 percent) say their organizations are very concerned about employee negligence or error.

Cloud-based user accounts/collaboration tools that enable productivity are most often attacked. Sixty-nine percent of respondents say their organizations experienced a successful cloud/account compromise and experienced an average of 20 cloud/account compromises over the past two years. The tools most often attacked are text messaging (61 percent of respondents), email (59 percent of respondents) and Zoom/Skype/Videoconferencing (56 percent of respondents).

The lack of clear leadership is a growing problem and a threat to healthcare’s cybersecurity posture. While 55 percent of respondents say their organizations’ lack of in-house expertise is a primary deterrent to achieving a strong cybersecurity posture, the lack of clear leadership as a challenge increased significantly since 2023, from 14 percent to 49 percent of respondents. Not having enough budget decreased from 47 percent to 40 percent of respondents in 2024. Survey respondents note that their annual budgets for IT increased 12 percent from last year ($66 million in 2024 vs. $58.26 million in 2023), with 19 percent of that budget dedicated to information security. Based on the findings, the healthcare industry seems to recognize cyber safety is patient safety.

Organizations continue to rely on security awareness training programs to reduce risks caused by employees, but are they effective?  Negligent employees pose a significant risk to healthcare organizations. While more organizations (71 percent in 2024 vs. 65 percent of respondents in 2023) are taking steps to address the risk of employees’ lack of awareness about cybersecurity threats, are they effective in reducing the risks? Fifty-nine percent say they conduct regular training and awareness programs. Fifty-three percent say their organizations monitor the actions of employees.

To reduce phishing and other email-based attacks, most organizations are using anti-virus/anti-malware (53 percent of respondents). This is followed by patch and vulnerability management (52 percent of respondents) and multi-factor authentication (49 percent of respondents).

Concerns about insecure mobile apps (eHealth) increased. Organizations are less worried about employee-owned mobile devices or BYOD (a decrease from 61 percent in 2023 to 53 percent in 2024), BEC/spoofing/impersonation (a decrease from 62 percent in 2023 to 46 percent in 2024) and cloud/account compromise (a decrease from 63 percent in 2023 to 55 percent in 2024).  However, concerns about insecure mobile apps (eHealth) increased from 51 percent to 59 percent of respondents in 2024.

AI and machine learning in healthcare. For the first time, we include in the research the impact AI is having on security and patient care. Fifty-four percent of respondents say their organizations have embedded AI in cybersecurity (28 percent) or in both cybersecurity and patient care (26 percent). Fifty-seven percent of these respondents say AI is very effective in improving organizations’ cybersecurity posture.

AI can increase the productivity of IT security personnel and reduce the time and cost of patient care and administrators’ work. Fifty-five percent of respondents agree or strongly agree that AI-based security technologies will increase the productivity of their organization’s IT security personnel. Forty-eight percent of respondents agree or strongly agree that AI simplifies patient care and administrators’ work by performing tasks typically done by humans, but in less time and at lower cost.

Thirty-six percent of respondents use AI and machine learning to understand human behavior. Of these respondents, 56 percent say understanding human behavior to protect emails is very important, recognizing the prevalence of socially-engineered attacks. 

While AI offers benefits, there are issues that may deter widespread acceptance. Sixty-three percent of respondents say it is difficult or very difficult to safeguard confidential and sensitive data used in organizations’ AI.

Other challenges to adopting AI are the lack of mature and/or stable AI technologies (34 percent of respondents), interoperability issues among AI technologies (32 percent of respondents) and errors and inaccuracies in data inputs ingested by AI technology (32 percent of respondents).

Click here to read the entire report at Proofpoint.com