The 2020 Study on the State of Industrial Security

Larry Ponemon

Ponemon Institute is pleased to present the findings from The 2020 State of Industrial Security Study, sponsored by TÜV Rheinland. The purpose of the research is to understand cyber risks across a broad spectrum of industries and the steps organizations are taking to reduce cyber risk in the operational technology (OT) environment.

Ponemon Institute surveyed 2,258 cybersecurity practitioners in the following industries: automotive, oil and gas, energy and utilities, health and life science, industrial manufacturing, and logistics and transportation. All respondents are responsible for securing or overseeing cyber risks in the OT environment and are aware of how cybersecurity threats could affect their organization.

In the context of this research, Operational Technology (OT) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices. Simply put, OT is the use of computers to monitor or alter the physical state of a system, such as the control system for a power station. The term has become established to demonstrate the technological and functional differences between traditional IT systems and the industrial control systems environment.

The OT environment is vulnerable to cyberattacks: 57 percent of respondents say their organizations’ security operations and/or business continuity management teams believe there will be one or more serious attacks within the OT environment. Almost half of respondents say it is difficult to mitigate cyber risks across the OT supply chain (49 percent) and that cyber threats present a greater risk in the OT environment than in the IT environment (48 percent).

The following findings reveal the cybersecurity vulnerabilities in the OT environment.  

  • OT and IT security risk management efforts are not aligned. Sixty-three percent of respondents say OT and IT security risk management efforts are not coordinated, making it difficult to achieve a strong security posture in the OT environment. The management of OT security is painful because of the lack of enabling technologies in OT networks, complexity and insufficient resources. 
  • On average, organizations had four security compromises that resulted in the loss of confidential information or disruption to OT operations. Forty-seven percent of respondents say OT technology-related cybersecurity threats have increased in the past year. The top three cybersecurity threats are phishing and social engineering, ransomware and DNS-based denial of service attacks. One-third of respondents say such exploits have resulted in the loss of OT-related intellectual property. 
  • The majority of organizations have not achieved a high degree of cybersecurity effectiveness. Less than half of respondents say they are very effective in responding to and containing a security exploit or breach (48 percent), continually monitoring the infrastructure to prioritize threats and attacks (47 percent) and pinpointing sources of attacks and mobilizing the right set of technologies and resources to remediate the attack (47 percent of respondents). 
  • To minimize OT-related risks, organizations need to replace outdated and aging connected control systems in facilities, according to 61 percent of respondents. More than half (52 percent of respondents) say vulnerable software is creating risks in the OT environment. 
  • Not enough expertise and budget are often cited as reasons for not having a strong security posture in the OT environment. Organizations represented in this research spend an average of $64 million annually on cybersecurity operations and defense (OT and IT combined). An average of 26 percent of this budget, or approximately $17 million, is allocated to the security of OT assets and infrastructure, and an average of 17 percent, or approximately $10 million, is allocated specifically to OT cybersecurity (a quick arithmetic check appears in the sketch after this list). Respondents say their OT budgets are inadequate to properly execute their cybersecurity strategy. 
  • Accountability for executing a successful cybersecurity strategy is fragmented. Respondents were asked who is most accountable for executing a successful cybersecurity strategy. Only 20 percent of respondents say it is the OT security leader, followed by the CIO/CTO (18 percent) and the IT security leader (17 percent). 
  • Organizations are lagging behind in adopting advanced security technologies. Only 38 percent of respondents say their organizations are using automation, machine learning and artificial intelligence to monitor OT assets. The majority of companies are not integrating security and privacy by design in the engineering of OT control systems.
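
For readers who want to check the arithmetic behind the budget figures above, here is a minimal Python sketch. The $64 million total and the 26 and 17 percent shares come from the survey; the variable names and the rounding to "approximately $17 million" and "$10 million" are ours.

```python
# Back-of-the-envelope check of the OT budget allocations cited above.
total_security_budget = 64_000_000  # average annual OT + IT security spend reported in the survey

ot_assets_share = 0.26  # share allocated to securing OT assets and infrastructure
ot_cyber_share = 0.17   # share allocated specifically to OT cybersecurity

ot_assets_budget = total_security_budget * ot_assets_share  # ~$16.6M ("approximately $17 million")
ot_cyber_budget = total_security_budget * ot_cyber_share    # ~$10.9M ("approximately $10 million")

print(f"OT assets and infrastructure: ${ot_assets_budget:,.0f}")
print(f"OT cybersecurity:             ${ot_cyber_budget:,.0f}")
```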

To read the full report, visit TUV Rheinland’s website.

If we’re going to talk about Section 230, let’s get it right

Now we’ve started something

Bob Sullivan

With President Donald Trump threatening retribution against Twitter with an executive order, you’re going to hear a lot about Section 230 this week — and maybe for many weeks. The ensuing discussion could shake the Internet to its very roots.  That’s going to make legal scholars very happy, but it might seem like a dizzying discussion for most.  That’s by design. Interested parties are conflating all kinds of big ideas to muddy the waters here: the First Amendment, innovation, bias, abuse, millions of followers, billions of dollars.  I’m going to try to sort it out for you here.  Who am I to do that? Well, I’m old enough to remember when the Communications Decency Act and its Section 230 was passed into law.

But if you are going on this journey with me, here are the rules: Nothing is as simple or as absolute as it sounds.  Free speech isn’t limitless.  “Speech” isn’t even what you think it is. Immunity isn’t limitless. The First Amendment doesn’t generally apply to private companies…most of the time.  But in a rare confluence of events, there are reasons for both conservatives and liberals to take a good long look at updating and fixing Section 230, which has been the source of much profit for corporations and much pain for Internet users since it became law in 1996.

(And if you really want to understand Section 230, I recommend reading this very readable 25-page academic paper titled The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. Authors Danielle Keats Citron and Mary Ann Franks do a great job explaining the history of the law and the myths that hold America back from reasonable reform. Or, even better, consider The Twenty-Six Words That Created the Internet, a book by Jeff Kosseff, all about Section 230.)

Section 230 was written at the time of Prodigy and CompuServe, when online services were mainly text-based chat tools, and virtually no consumers used websites.  These services had a problem: Were they liable for everything users said? Could they be sued for defamation, or charged criminally, if users misbehaved? To use the kind of shorthand that journalists love but lawyers hate, should they be treated like publishers of the content — akin to a newspaper editor or book publisher — or mere distributors, akin to a newsstand owner?  Courts were split on the matter, and that terrified tech firms. Imagine the liability a company like Google, or Facebook, or America Online, would face if it could be charged with every crime committed on its service.

The defensive shorthand I was taught at my startup, inaccurate as it might be, was this: When a tech company actively moderates user content, it becomes a publisher and increases liability. When a tech company just shoves the stuff automatically out into the world, it’s merely a newsstand, a distributor.  So: Don’t touch!

That free-for-all worked about as well as you might imagine (Porn! Stolen goods! Harassment!) so lawmakers tried to help by passing Section 230.  It sounds straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The idea was to shield online service providers who tried to do the right thing (stop harassment and other crimes) from liability.  The law was actually meant to encourage content moderation. It gave service providers a shield against responsibility for third-party content.  But what a winding road it’s been since then.

First, the good: Plenty of folks see Section 230 as the First Amendment of the Internet. Scholar Eric Goldman actually argues that it’s better than the First Amendment. It’s inarguable that online services have thrived since then, and plenty of them credit Section 230.

However, this simple you’re-not-responsible-for-third-party-content rule has been extended by courts and corporations far beyond its original intention. Recall, it was written right about the time Amazon was invented.  The Internet was nearly 100% text-based speech, digital conversation, at the time.  Today, it’s Zoom and car buying and television and it elects a U.S. president.

So that leads us to the moment at hand. The Internet is awash in disinformation, harassment, crime, racism…the dark side of humanity thrives there.  Plenty of people have been driven from its various platforms through doxxing, gender abuse, or simple exhaustion from nasty arguments. I argue all the time that it has made us dumber as a people, offering the Flat Earth Movement as proof. In short, the Internet sucks (More than a decade later, this is still a great read). As Citron and Franks say:

“Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sexual assault victims are discouraged from holding perpetrators accountable…. An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech. The benefits Section 230’s immunity has enabled surely could have been secured at a lesser price.”

For better and worse, this is a good time to reconsider what Section 230 hath wrought.

For a quick moment, here are some obstacles to the discussion, forged by confusion. You, and I, and President Trump, can’t have our First Amendment right to free speech suppressed by Twitter or Facebook or Instagram. Generally, the First Amendment applies to governments, not private enterprises.  Facebook, as any true conservative or libertarian would tell you, is free to do what it wants with its company, and the president is free to not use it.  In fact, the government compelling a social media company to say certain things or not say other things — to argue it could not add a link for fact-checking — is a rather obscene violation of the company’s First Amendment rights.

Even on this fairly clear point, there is some room for discussion, however.  In Canada, courts have ruled that social media is so ubiquitous that it can be akin to a public square, according to Sinziana Gutui, a Vancouver privacy lawyer.  So might the US some day feel that cutting off someone’s Twitter account is akin to cutting off their telephone line or electricity? Perhaps.  It sure seems less strained to suggest President Trump simply find another platform to use for his 280-character messages.

And even on this issue of speech, there is confusion. U.S. courts have broadly expanded the definition of speech far beyond talking, publishing pamphlets, or writing posts on an electronic bulletin board.  Commercial activity can be considered speech now.  And that expanded definition has helped websites argue for Section 230 immunity when their members are committing illegal acts — such as facilitating the sale of counterfeit goods, or guns to criminals known to be evading background checks.

Immunity often encourages bad behavior, a classic “moral hazard,” as Franks has written. Set aside fake autographs and illegally purchased domestic violence murder weapons for the moment — the Internet is drowning in antagonism, bots, and harassment that has made it inhospitable for women and men of good faith. It rewards extremism.  It is unhealthy for people and society. It’s not going to fix itself. Citron and Franks again:

“Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated when users like, click, and share. Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest. Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.” On Twitter, ads can be directed at users interested in the words “white supremacist” and “anti-gay.” If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content. Research shows that people are more attracted to negative and novel information. Thus, keeping up destructive content may make the most sense for a company’s bottom line.”

Facebook profits massively off all this social destruction. We learned this week that employees inside Facebook have come up with some very clever technological solutions to this problem, only to be kneecapped by Mark Zuckerberg, clearly drunk on conveniently-profitable take-no-responsibility libertarian ideals.

What’s the solution? For sure, that’s much harder.  Citron and Franks suggest adding a simple “reasonable” requirement on companies like Facebook, meaning they have to take reasonable steps to police users in order to maintain Section 230 immunity. Reasonable is a difficult standard, possibly leading to endless ’round-the-rosie’ debate, but it is a common standard in U.S. law. Facebook’s engineers came up with notions worth trying, detailed in this Wall Street Journal story, such as shifting extreme discussions into sub-groups.  The firm could also stop giving extra algorithm juice to obsessives who post 1,000 times a day.

As always, a mix of innovation and smart rules that balance interests is needed.

It won’t be easy, but we have to try. So, it’s good that President Trump has shined a light on Section 230. The discussion is long overdue, as is the will to act. Will the discussion be productive? Probably not if it happens on Twitter. Definitely not if it’s focused on an imaginary social media bias against Trump or Trump’s 80 million followers, who clearly have no trouble finding each other. Instead, let’s focus on making the world safe again for reasonable people.

The state of endpoint security risk: it’s skyrocketing

Larry Ponemon

The Third Annual Study on the State of Endpoint Security Risk, sponsored by Morphisec, reveals that organizations are not making progress in reducing their endpoint security risk, especially against new and unknown threats. In fact, in this year’s research, 68 percent of respondents report that their company experienced one or more endpoint attacks that successfully compromised data assets and/or IT infrastructure over the past 12 months, an increase from 54 percent of respondents in 2017.

A webinar on the report is available for free at Morphisec’s website.

“Corporate endpoint breaches are skyrocketing and the economic impact of each attack is also growing due to sophisticated actors bypassing enterprise antivirus solutions,” said Larry Ponemon, Chairman and Founder of Ponemon Institute. “Over half of cybersecurity professionals say their organizations are ineffective at thwarting major threats today because their endpoint security solutions are not effective at detecting advanced attacks.”

Ponemon Institute surveyed 671 IT security professionals responsible for managing and reducing their organization’s endpoint security risk. Companies represented in this research are very concerned about the significant increase in new and unknown threats against their organization (an increase from 69 percent of respondents in 2017 to 73 percent in 2019). On a positive note, since 2017 more respondents say their organizations have ample resources to minimize IT endpoint risk due to infection or compromise (an increase from 36 percent to 44 percent).

Following are 10 key findings from this research.

  1. The frequency of attacks against endpoints is increasing and detection is difficult. Sixty-eight percent of respondents say the frequency of attacks has increased over the past 12 months. More than half of respondents (51 percent) say their organizations are ineffective at surfacing threats because their endpoint security solutions are not effective at detecting advanced attacks.
  2. The cost of successful attacks has increased from an average of $7.1 million to $8.94 million. Costs due to the loss of IT and end-user productivity and theft of information assets have increased. The cost of system downtime has decreased significantly since 2017. 
  3. New or unknown zero-day attacks are expected to more than double in the coming year. The frequency of existing or known attacks is expected to decrease significantly from 77 percent to an anticipated 58 percent in the coming year. In contrast, the frequency of new or unknown zero-day attacks is expected to increase to 42 percent next year. 
  4. An average of 80 percent of successful breaches are new or unknown “zero-day attacks.” These attacks either involved the exploitation of undisclosed vulnerabilities or the use of new/polymorphic malware variants that signature-based detection solutions do not recognize.
  5. Zero-day attacks continue to increase in frequency. In addition to being more successful, zero-day attacks have also become more prevalent. As a result, organizations are investing more budget to protect against these threats. 
  6. Most organizations either use or plan to use the Microsoft Windows Defender antivirus solution. Eighty percent of respondents say they currently use (34 percent) or plan to adopt in the near future (46 percent) the Microsoft Windows Defender antivirus solution. The top two reasons are to reduce the number of separate endpoint security tools and the belief that the solution is on par with other antivirus tools. 
  7. The challenges in the use of traditional antivirus solutions are a high number of false positives and security alerts, inadequate protection and too much complexity. Fifty-six percent of respondents say their organizations replaced their endpoint security solution in the past two years. Of these respondents, 51 percent say they kept their traditional antivirus solution but added an extra layer of protection. According to these respondents, the challenges with traditional antivirus solutions are a high number of false positives and security alerts, inadequate protection and too much complexity in the deployment and management of these solutions. 
  8. Antivirus products missed an average of 60 percent of attacks. Confidence in traditional antivirus (AV) solutions continues to drop. On average, respondents estimate their current AV is effective at blocking only 40 percent of attacks. In addition to the lack of adequate protection, respondents cite high numbers of false positives and alerts as challenges associated with managing their current AV solutions. 
  9. The average time to apply, test and fully deploy patches is 97 days. The findings reveal the difficulties in keeping endpoints effectively patched. Forty percent of respondents say their organizations are taking longer to test and roll out patches in order to avoid issues and assess the impact on performance.
  10. Ineffectiveness and lack of in-house expertise are reasons not to use an EDR solution. Sixty-four percent of respondents say their organizations do not have an EDR solution; these respondents cite its ineffectiveness against new or unknown threats (65 percent) followed by not having the staff to support it (61 percent).

Go to Morphisec’s website to read the full report.

 

Covid-19: The Golden Age of Scams

Bob Sullivan

Nearly 100,000 scam-ready domains have been registered since the Covid-19 pandemic began. It’s the Super Bowl for digital criminals, the golden age of computer fraud. Why? Because a con artist’s best friend is urgency.

We are living through the golden age of scams right now, so I’m going to do an ongoing series about coronavirus crimes.  First up: My conversation with Grace Brombach, who just wrote a report on scams (PDF) for the U.S. Public Interest Research Group.

“We are dealing with so much fear and confusion right now,” Brombach tells me. “People are being put in a very difficult situation where they don’t really know what to believe.”

Of particular worry: Homebound computer users are being told to download all kinds of new software and fill out forms full of personal information, doing things that ordinarily they would never do. For example: Employees are working from home, Zooming everywhere.  Think about how believable an email might be that appeared to come from an HR department, promising new video conference guidelines or requiring new software installation.

Making matters worse, as cybersecurity expert Harri Hursti has told me, a lot of corporate security software is designed to look for unusual patterns in network traffic — like massive downloads or a surprising number of remote logins. Everything is unusual now.

In addition, there’s a lot of burden on parents (and grandparents) to help their kids do schoolwork from home. That opens up a big attack vector.   Urgent messages claiming to be from schools, including assertions that children have been infected, are particularly insidious.

Brombach says most scams fall into two categories: sales of false cures, and phishing scams designed to commit ID theft. Some of these emails are incredibly believable. There are emails from scammers posing as the CDC or WHO promising Covid alerts. Criminals benefit by trading on the trust that big brand names carry.

“There was a recent map that came out tracking coronavirus cases … posing from Johns Hopkins and when people would click on the map it would actually download malware onto their computers to steal their personal information,” Brombach said. “It’s all across the board…They really are difficult to identify.”

NOTE: Organizations like WHO or the CDC will not send you unsolicited texts or emails unless you’ve already signed up for them.  But given all the talk about contact tracing apps, it’s easy to understand why a consumer might fall for a text message with an alert warning them they’d been near someone who’d tested positive for Covid.

“There’s this misconception that people have of, ‘I would never fall for a scam,’ but some of them are so, so believable, so it’s really important to be on your guard as much as possible,” Brombach warned.

Here are the scams she’s most worried about in the near future:

  • Criminals offering help with economic impact payments. In some cases, only an SSN and a birthdate are needed to access government benefits.  In other cases, criminals are promising frustrated aid recipients they can help get faster payments.
  • Fake Covid testing sites
  • Price gouging
  • Fake cures and treatments. “It’s so hard for the FDA to keep up with all these claims,” she said. Also, remember that it’s generally legal to sell supplements with broad claims like immune system boosting.

You can hear my conversation with PIRG’s Grace Brombach by clicking play below or by clicking on this link

The economic value of prevention in the cybersecurity lifecycle

Larry Ponemon

Ponemon Institute is pleased to present the findings of The Economic Value of Prevention in the Cybersecurity Lifecycle, sponsored by Deep Instinct. The cybersecurity lifecycle is the sequence of activities an organization experiences when responding to an attack. The five high-level phases are prevention, detection, containment, recovery and remediation.

We surveyed 634 IT and IT security practitioners who are knowledgeable about their organizations’ cybersecurity technologies and processes. Within their organizations, most of these respondents are responsible for maintaining and implementing security technologies, conducting assessments, leading security teams and testing controls.

“If we could quantify the cost savings of the prevention of attacks, we would be able to increase our IT security budget and debunk the C-suite’s myth that AI is a gimmick. I believe AI is critical to preventing attacks.” —  CISO, financial services industry.

The key takeaway from this research is that when attacks are prevented from entering and causing any damage, organizations save time, money and resources, and avoid damage to their reputation.

To determine the economic value of prevention, respondents were first asked to estimate the cost of one of the following five types of attacks: phishing, zero-day, spyware, nation-state and ransomware. They were then asked to estimate what percentage of the cost is spent on each phase of the cybersecurity lifecycle, including prevention. Because there are fixed costs associated with the prevention phase of the cybersecurity lifecycle, such as in-house expertise and investments in technologies, there will be a cost even if the attack is stopped before doing damage. For example, the average total cost of a phishing attack is $832,500 and of that 82 percent is spent on detection, containment, recovery and remediation. Respondents estimate 18 percent is spent on prevention. Thus, if the attack is prevented the total cost saved would be $682,650 (82 percent of $832,500).
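
As a rough illustration of that calculation (not a formula from the report), here is a minimal Python sketch using the phishing figures above; the function and variable names are ours.

```python
# Minimal sketch of the cost-saved-by-prevention calculation described above,
# using the phishing example from the survey.

def cost_saved_if_prevented(total_attack_cost: float, prevention_share: float) -> float:
    """Cost avoided when an attack is stopped in the prevention phase.

    Prevention-phase costs (in-house expertise, technology investments) are
    treated as fixed and incurred either way; only the remaining phases
    (detection, containment, recovery and remediation) are avoided.
    """
    return total_attack_cost * (1 - prevention_share)

phishing_cost = 832_500   # average total cost of a phishing attack
prevention_share = 0.18   # share of that cost attributed to the prevention phase

print(f"${cost_saved_if_prevented(phishing_cost, prevention_share):,.0f}")  # $682,650
```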

Seventy percent of respondents (34 percent + 36 percent) believe the ability to prevent cyberattacks would strengthen their organization’s cybersecurity posture. However, 76 percent of respondents (40 percent + 36 percent) say they have given up on improving their ability to prevent an attack because it is too difficult to achieve.

The following are the most noteworthy findings from the research.

  • Organizations are most effective in containing cyberattacks. Fifty-five percent of respondents say their organizations are very or highly effective at containing attacks in the cybersecurity lifecycle. Less than half of respondents (46 percent) say their organizations are very or highly effective in preventing cyberattacks. Organizations are also allocating more of the IT security budget to technologies and processes in the containment phase than in the prevention phase. 
  • Prevention of a cyberattack is the most difficult to achieve in the cybersecurity lifecycle. Eighty percent of respondents say prevention is very difficult to achieve followed by recovery from a cyberattack. The reason for the difficulty is that it takes too long to identify an attack. Other reasons are outdated or insufficient technologies and lack of in-house expertise. The technology features considered most important are the ability to prevent attacks in real-time and based on different types of files. 
  • Automation and advanced technologies increase the ability to prevent cyberattacks. Sixty percent of respondents say their organizations currently deploy AI-based security technologies or plan to deploy AI for cybersecurity within the next 12 months. Sixty-seven percent of respondents believe the use of automation and advanced technologies would increase their organizations’ ability to prevent cyberattacks. Further, 67 percent of respondents expect to increase their investment in these technologies as they mature. 
  • Deep learning is a form of AI inspired by the brain’s ability to learn. In the context of this research, deep learning is defined as follows: just as a human brain, once it learns to identify an object, recognizes it as second nature, deep learning’s artificial neural networks can process large amounts of data to reach a profound and highly accurate understanding of the data analyzed. The top three reasons to incorporate a deep-learning-based solution are to lower false positive rates, increase detection rates and prevent unknown, first-seen cyberattacks. 
  • Perceptions that AI could be a gimmick and lack of in-house expertise are the two challenges to deployment of AI-based technologies. Fifty percent of respondents say when trying to gain support for the adoption of AI there is internal resistance because it is considered a gimmick. This is followed by the inability to recruit personnel with the necessary expertise (49 percent of respondents).
  • Organizations are making investments in technologies that do not strengthen their cybersecurity posture because decisions are based on the wrong metrics. Fifty percent of respondents say their organizations are wasting limited budgets on investments that don’t improve their cybersecurity posture. The primary reasons for the failure are system complexity, personnel and vendor support issues. Another reason is that most organizations use return on investment (ROI) to justify investments rather than the technology’s ability to increase prevention and detection rates. 
  • IT security budgets are considered inadequate. Only 40 percent of respondents say their budgets are sufficient to achieve a strong cybersecurity posture. The average total IT budget is $94.3 million, and of this 14 percent, or approximately $13 million, is allocated to IT security. Nineteen percent of the security budget, or approximately $2.5 million, will be allocated to investments in enabling security technologies such as AI, machine learning, orchestration, automation, blockchain and more.

Sample finding:

With the exception of the exploitation phase, zero-day attacks are very difficult to prevent at every stage of the cyber kill chain. The cyber kill chain is a way to understand the sequence of events involved in an external attack on an organization’s IT environment. Understanding the cyber kill chain model is considered helpful in putting the strategies and technologies in place to “kill” or contain the attack at various stages and better protect the IT ecosystem. Following are the seven steps in the cyber kill chain:

  1. Reconnaissance: the intruder picks a target, researches it and looks for vulnerabilities
  2. Weaponization: the intruder develops malware designed to exploit the vulnerability
  3. Delivery: the intruder transmits the malware via a phishing email or another medium
  4. Exploitation: the malware begins executing on the target system
  5. Installation: the malware installs a backdoor or other ingress accessible to the attacker
  6. Command and Control (C2): the intruder gains persistent access to the organization’s systems/network
  7. Actions on Objective: the intruder initiates end goal actions, such as data theft, data corruption or data destruction

Respondents were asked to rate the difficulty in preventing a zero-day attack in every phase of the cyber kill chain on a scale of 1 = not difficult to 10 = very difficult. Figure 16 presents the very difficult responses (7+ on the 10-point scale). The most difficult phase to prevent the zero-day attack is the command and control phase (80 percent) in which the intruder gains persistent access to the organization’s systems/network followed by the delivery phase of the kill chain (78 percent).
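
To make the scoring rule concrete, here is a minimal Python sketch of how the "very difficult" share for each kill-chain phase could be tallied from raw ratings. The sample responses are invented; only the 1-to-10 scale and the 7-or-higher threshold come from the report.

```python
# Illustrative tally of "very difficult" shares per kill-chain phase.
# Hypothetical responses: each respondent rates how difficult it is to prevent
# a zero-day attack at a given phase (1 = not difficult, 10 = very difficult).
responses = [
    {"Delivery": 8, "Exploitation": 3, "Command and Control": 9},
    {"Delivery": 7, "Exploitation": 5, "Command and Control": 8},
    {"Delivery": 6, "Exploitation": 4, "Command and Control": 7},
]

def very_difficult_share(phase: str, threshold: int = 7) -> float:
    """Share of respondents rating a phase at or above the 'very difficult' cutoff (7+)."""
    ratings = [r[phase] for r in responses if phase in r]
    return sum(rating >= threshold for rating in ratings) / len(ratings)

for phase in ("Delivery", "Exploitation", "Command and Control"):
    print(f"{phase}: {very_difficult_share(phase):.0%}")
```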

 

Read the full report by visiting Deep Instinct’s website


New Podcast: Erin and Noah on the run, why Americans carry tracking devices everywhere now

Alia “followed” me around through cyberspace during one day in Los Angeles.

Bob Sullivan

Erin and her son Noah think they’ve finally found a safe place to live, in a quiet Ohio town, invisible to Erin’s abusive ex-husband.  But that life is shattered by a disturbing voice mail after a single photo of Noah accidentally appears on a school website. The call sends mother and child on the run again, but not before a near-disaster at Noah’s school.

Sometimes, privacy is a matter of life and death. And while Erin and Noah’s story is fiction, the story of how privacy advocate Brian Hofer ended up in a police car, with a gun pointed at his brother’s head, is chillingly real.

This week we begin this second season of No Place to Hide. You’re going to hear something very new, and very different: a combination of fiction and non-fiction storytelling. Each episode begins with a scene from the story of Erin and Noah, a mom and her son on the run from his abusive father.  The story is designed to make listeners feel the way victims feel when they are stalked through cyberspace. Then, Alia and I take on the big privacy topics of our day, concluding with a look at the world in 2030 if nothing is done to manage the coming tsunami of privacy invasive technologies.

No Place to Hide is sponsored by Intel Corp.

I’m really proud of this concept, and this series. Privacy isn’t some esoteric idea, or a first-world problem. Privacy is about freedom, and free will, and personal safety, and creativity.

This week’s episode is about location data, a topic that’s in the news a lot right now. Authoritarian regimes around the world are trying to stem the tide of coronavirus by tracking citizens’ movements via their cell phones. Well, every country is trying to do that. In places like China, there is no pretense of worry about civil liberties. In the U.S., Apple and Google have announced a system that uses Bluetooth to alert people who’ve been near a patient that’s tested positive. Theoretically, that limits the information to a small group who really needs it. Still, plenty of firms and governments are bragging about use of cell phone location data as a public health tool  — the state of New Mexico, for example, is using data to see how well residents are social distancing.   The data is being examined nationally, also.

Only a fool wouldn’t try all available tools to beat back coronavirus. But what are the long-term implications of these more aggressive steps by governments to track citizens via their cell phones? And how did we all end up with tracking devices in our pockets in the first place?

This week’s episode of No Place to Hide delves deep into the history of location data; I hope it will help inform public discussion as we move forward to the next step in this crisis, which is sure to include a lot of arm wrestling between the good technology can do and the potential harms.

Ep 4: On Location (Summary)

Erin and Noah — Dad finds them in Ohio because an errant photo ended up on a school website. They have to drop everything and flee, right as he shows up at school.

Bob and Alia: Cell phones track our every move, in perhaps the biggest attack on privacy of our time. On location in Los Angeles, Bob and Alia discuss the past ten years of location-specific data hoarding by large companies. Then we hear why Oakland Privacy Commission chair Brian Hofer ended up in a police squad car, and his brother had a gun pointed to his head ‘executioner-style,’ all over a database error.

 

Partial transcript

BOB: A single piece of location information doesn’t seem that distressing. But when you can put it all on a map, over time, and build a picture of someone’s life, that’s when you’ve really, really invaded their privacy.

ALIA: You know, it kind of reminds me of this person I knew a long time ago, Bob. And I remember one day we were having coffee, and he was telling me about how, uh, assassinations worked. And I thought that was really creepy, but do you know what the first rule was to figure out how to assassinate someone? The rule was you get to know their habits, and you get to know their days, and you watch them. Where they go, how they get there, when they get there, every single day. Because if you know their habits, then you know where the holes are when you might do the deed.

BOB: …That’s what Liam Youens did to Amy Boyer…

ALIA: Yeah… that’s really scary. So what you’re talking about, in like learning someone’s habits– their daily whereabouts– you can look for opportunities to do something terrible potentially. And he was talking about it in like the old school sense of, you know, like stakeouts. You’re watching this person. And what you’re talking about is, basically, you don’t have to do that anymore, because Google does it for you.

BOB: And not just Google of course, any cellphone does this for you.

ALIA: Right. Ugh. 

BOB: Mobile devices are tracking devices, and so who has access to that information? Maybe through that Terms & Conditions box you checked? Your mobile provider.  Your apps. Hundreds of companies in between that are collecting these incredibly detailed profiles of your movements. You know, I recently wrote about a selfie app that teenagers love — it has 300 million downloads. And sends all their location information to the developer…in China.

ALIA: And there’s that NYTimes exposé on location data, that we’ve both been obsessed with. Someone gave the reporters at the Times a copy of a location database with a year’s worth of data.  Using that, they were able to track specific people, like a secret service agent, someone protecting the president, from their home to the White House to their church. And they had this location data for over 12 million people.

BOB: Just imagine what our fictional angry ex-husband from the opening could do with data like that.

ALIA: That’s so scary. 

BOB: When we talk about issues like privacy and data security, I get emotional and philosophical about civil liberties. And maybe you don’t care if Google knows what websites you visit or Amazon knows what kind of dog food you buy. But location data is next level. And as our little experiment showed, as the NYT story showed, something incredible happened in the past decade. The advent of smartphones means that most Americans, and about half the people on Earth, now carry small, incredibly accurate tracking devices with them at all times. And… I don’t remember anyone having a great, open, honest debate about the wisdom of that.

ALIA: Me neither. But I think we should.

{break}

BRIAN HOFER: Yeah, I, you know, I, I can’t get half of my friends to use like Signal or other encrypted software, or to, you know, have two factor authorization, cause you know, we’ll trade anything for convenience and speed. 

ALIA: That’s Brian Hofer. He’s a community activist in Oakland, California. We’ll hear a lot more about his activism later, but for now, he paints an amazing picture about the importance of location information.

BRIAN HOFER: It only takes four geospatial data points. So that’s time and location. Four different geospatial data points to identify over 95% of humans. Why? Cause I drive to work the same way, I drive to the gym the same way, I go to the same grocery store. We’re creatures of habit. So you know, whether it’s your scooter, uh, whether it’s even public transit that now mostly use like electronic payments, uh, obviously license plate readers, and obviously cell phones, you only need a couple of, you know, four or five data points and you, and your, you can map somebody out, you can figure out who it is.

The question usually is, Well, I have nothing to hide, so I have nothing to fear. And that, and that’s totally wrong. And I like how Edward Snowden, uh, flipped that on its head and said, you have something to protect. What if we just did have an abortion and there’s cameras right outside of that clinic and a license plate scanner, uh, and you’re tracking my phone calls, you know, to the clinic and my location? Or what if I keep driving and parking in front of the same cancer doctor’s office? Maybe I didn’t want to tell you I had cancer. Um, what if I am exploring my sexuality and there is facial recognition on the front of, uh, bars, you know, a same-sex bar that I wanted to walk into, but now I’m scared because there’s facial recognition. So all these little data points by themselves, probably not a civil Liberty threat, probably not, uh, invading my privacy, but together because of the nature of all the commingled data and databases together, what we now call it, and you’ve seen it in, uh, Sotomayor’s, uh, uh, some of her opinions, we call it the Mosaic Theory, that there’s all these little tiles, these little pieces…

BOB: So, Mosaic theory. This is really important.  It’s super creepy that in an instant, you could see everywhere I went all morning. But it should be even creepier to think that with just a few details, I could pretty much size up your whole life. I mean, imagine you are Erin and Noah, trying to get away from an angry ex husband. In just a moment, with data like this, he would know exactly when to show up at school to snatch a child. You see, most people’s lives aren’t really that complex. We only go to a few places 95% of the time. 

We talked to Marc Groman about this — he was the first-ever chief privacy officer at the FTC and senior advisor for privacy in the Obama White House

MARC GROMAN – If you have my precise location over say a couple of weeks, you essentially can draw highly sensitive inferences about my entire life. You will understand my religious beliefs, my political beliefs, my health issues potentially. And by the way, it’s so precise now we know not just that you’re in the hospital, but if you’re in a 12 story building, we know what floor in the hospital

ALIA: Wow. 

ALIA:  Susan Grant of the Consumer Federation of America. We talked to her for a while about location data and I gotta say, when she talked about the creation of ‘megaprofiles’ I got chills.

SUSAN GRANT: Location is just one of the many, um, very revealing things about you that can be compiled into a mega profile about you. So it’s not just where you are at any given moment, but it’s where you go most frequently.  Um, which can tell a lot about you. Um, you know, uh, uh, where you go to church reveals what your religion is, for instance. Um, these are things that people have a right to keep private if they want. Um, and uh, yet this information is being collected when it’s not needed

ALIA: Ok, this all sounds pretty awful. Tracking gadgets in our pockets and purses. Really precise data being sent to companies we work with, all around the world, available to the government…but I have to ask a question I know you love, Bob. So, Bob …who’s making money off all this location information?

BOB:  That’s always the important question to ask. And, we have to credit Buzzfeed for a great story explaining how valuable location information is.

BOB: “Location-sharing agreements between app developers and app brokers – where apps can send your GPS coordinates up to 14,000 times per day – can bring in a lot of revenue. With just 1,000 users, app developers can get $4/month. If they have 1 million active users, they can get $4,000/month. And that’s from just one broker. If they work with two app brokers with similar payouts, and have at least 10 million active monthly users, they could stand to make $80,000/month.”

BOB: Quote: “With more dangerous permissions given by the user, they will get more sensitive data, which means they’ll make more money.” End Quote. 

ALIA: So…that selfie app we talked about. It had 300 million downloads! OMG, how much money they must be making.

BOB: Exactly. But to me, it’s important to remember that big fish eat little fish metaphor from the first half of the series.

ALIA: Bob, I was waiting for a metaphor.

BOB: So a consumer group in Norway recently investigated dating apps like  Grindr, Tinder, OkCupid, and so on, and found they were selling sensitive data like location data into this ecosystem…but one of the buyers was a firm named MoPub. Which is owned by Twitter.

ALIA: Ahh Twitter. Because someone has to be writing those big checks, driving this whole ecosystem. And again, when did we decide as a society that we were ok with this? We didn’t. It just kind of…happened

BRIAN HOFER: And what is so scary, you know, back when we all read, like, 1984, we thought the government was just going to force everything on us–

ALIA: Here’s Brian Hofer again–

BRIAN: and what the American business genius was is nah, just ask people to do it voluntarily, you know, we’ll just offer them convenience and they’ll do all these things on their own and most people don’t look beneath the hood and don’t really look to see what the ramifications are.

 

Is Your Company Ready for a Big Data Breach?

Larry Ponemon

The Seventh Annual Study: Is Your Company Ready for a Big Data Breach? sponsored by Experian Data Breach Resolution and conducted by Ponemon Institute tracks the steps companies are taking, or not taking, to respond to a data breach. According to the findings, since 2017 significantly more organizations are having data breaches, highlighting the importance of being prepared.

This year, we surveyed 650 professionals in the United States and 456 in EMEA[1]. A comparison of the US and EMEA findings is presented in Part 3 of this report. All respondents work in IT and IT security, compliance and privacy and are involved in data breach response plans in their organizations. In the context of this research, we define a data breach as the loss or theft of information assets, including intellectual property such as trade secrets, contact lists, business plans and source code. Data breaches happen for various reasons including human errors and system glitches. They also happen as a result of malicious attacks, hacktivism or criminal attacks that seek to obtain valuable data, disrupt business operations or tarnish reputation.

Organizations are challenged to respond to the loss or theft of confidential business information and intellectual property. Sixty-seven percent of respondents say their organizations are most concerned about the loss or theft of intellectual property. However,  since 2017 the ability to respond to a data breach involving this type of information has not improved significantly. Organizations are better able to respond to breaches that require notification to victims and regulators.

In this year’s research, we introduced the following new topics:

  • The maturity of organizations’ privacy and data protection program
  • The frequency, consequences and preparedness to deal with spear phishing attacks
  • The frequency, consequences and preparedness to deal with ransomware
  • The impact of the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) on data breach preparedness

The following findings describe organizations’ abilities to respond to a big data breach.

Investments in security technologies are increasing to improve the ability to determine and respond quickly to a data breach. More data breaches are occurring. As a result, 68 percent of respondents say their organizations have increased their investments in security technologies in order to be able to detect and respond quickly to a data breach.

C-suite executives are more knowledgeable than the board of directors about data breach preparedness plans. The C-suite’s knowledge about data breach preparedness plans is much higher than that of the board of directors (55 percent of respondents vs. 40 percent of respondents).

Most training and awareness programs are conducted when employees are hired. Seventy-two percent of respondents say their organizations have a privacy training and awareness program for employees and other stakeholders who have access to sensitive or confidential information. Almost half (49 percent of respondents) say training is conducted during the on-boarding of new employees.

Cyber insurance coverage is focused on attacks by cyber criminals and malicious or criminal insiders. About half of respondents (49 percent) say their organizations have a data breach and cyber insurance policy. Of the 51 percent of respondents who currently do not have a cyber insurance policy, 58 percent will purchase one within the next two years. Eighty-three percent of respondents say it covers incidents caused by cyber criminals and 65 percent of respondents say it covers malicious or criminal insiders. Only 38 percent of respondents say it covers human error, one of the major causes of a data breach.

Since 2017, the coverage of identity protection services to victims has increased significantly. The top areas of coverage are legal defense costs and identity protection and notification costs to data breach victims. Seventy-two percent of respondents say identity protection services are covered, an increase from 64 percent in 2017.

The primary benefit of sharing information about data breach experiences and incident response plans is collaborating with peers. Fifty-seven percent of respondents say their organizations currently participate or are planning to participate in a program for sharing information about data breaches and incident response plans. The primary benefit is that it fosters collaboration among peers and industry groups.

Effectiveness of data breach response plans continues to improve. Since 2017, more respondents say their data breach response plans are very or highly effective, an increase from 49 percent to 57 percent of respondents. However, 66 percent of respondents say their organizations have not reviewed or updated the plan since it was put in place or have not set a specific time to review and update the plan. Only 26 percent of respondents say it is reviewed annually.

The majority of organizations practice responding to a data breach. Seventy-five percent of respondents say they practice their ability to respond to a data breach. Of these, 45 percent of respondents say they do this twice per year.

More organizations are regularly reviewing physical security and access to confidential information. The primary steps being taken to prepare for a data breach are regular reviews of physical security and access to confidential information (73 percent of respondents) and conducting background checks on new full-time employees and vendors (69 percent of respondents).

Organizations are not confident in their ability to minimize reputational consequences and prevent the loss of customers. To prevent the loss of customers, 62 percent of respondents believe credit monitoring protection for victims is the best protection for consumers and the most effective in keeping customers. However, only 23 percent of respondents say their organization is confident in its ability to minimize the financial and reputational consequences of a material data breach and only 38 percent of respondents say they are effective at doing what needs to be done following a material data breach to prevent the loss of customers’ and business partners’ trust and confidence.

Spear phishing attacks are pervasive and confidence in dealing with them is declining. Sixty-nine percent of respondents say their organizations had one or more spear phishing attacks and 67 percent of respondents say the negative consequences of these attacks were very significant or significant. Despite the frequency of these attacks, 50 percent of respondents do not train their employees to recognize and minimize spear phishing incidents. Since 2017, the percentage of respondents who say their organizations are very confident or confident in their ability to deal with spear phishing attacks has declined from 31 percent to 23 percent.

Respondents are even less confident in their ability to deal with ransomware. Only 20 percent of respondents are very confident in their ability. Thirty-six percent of respondents say their organizations had a ransomware attack. The average ransom was $6,128 and 68 percent of respondents say it was paid.

More breaches are international or global in scope and only 34 percent of respondents say they are confident in their organizations’ ability to respond to these breaches. As discussed previously, 63 percent of respondents say their organization had a data breach in the past two years. Forty-five percent of respondents say one or more of these breaches were global. Since 2017, respondents reporting that their incident response plan includes processes to manage an international data breach increased significantly from 54 percent to 64 percent. Fifty-seven percent say the plan is specific to each location in which the organization operates.

Now that the General Data Protection Regulation (GDPR) has been in effect for more than a year, organizations have improved their ability to comply with it. Fifty-four percent of respondents say they have a high or very high ability to comply with the regulation (an increase from 36 percent) and 50 percent of respondents have a high or very high effectiveness in complying with the data breach notification rules (an increase from 23 percent). Having the necessary security technologies in place to detect the occurrence of a data breach quickly is the number one reason for being effective.

CCPA results in organizations having to make comprehensive changes in business practices. Fifty-six percent of respondents say they are aware of the CCPA and, of these respondents, 47 percent say they are subject to the Act. The top two challenges to complying with the CCPA are similar to those for the GDPR: the need to change business practices and not enough budget to hire additional staff.

Lessons learned from organizations with a mature privacy and data protection program

The report presents a special analysis on how the maturity of organizations’ privacy and data protection programs can affect data breach preparedness. Nineteen percent of respondents self-reported that their organizations have a mature program, which means that activities are fully defined, maintained across the enterprise and measured with KPIs. In addition, C-level executives are regularly informed about the program’s effectiveness. The following findings are persuasive in showing how making the needed investments to achieve maturity will improve data breach preparedness.

  • Mature privacy and data protection programs have fewer data breaches. Fifty-five percent of respondents in mature programs say their organizations had a data breach in the past two years. In contrast, a minimum of 60 percent of respondents in the other levels of maturity report having a data breach.
  • Mature programs are more adept at preventing negative public opinion and media coverage. Fifty-five percent of respondents say they are effective in managing the risk of negative opinions and media coverage following a material data breach. In contrast, only 37 percent of respondents in programs that are in the middle stage say they are effective.
  • More mature programs represented in this study are increasing investments in security technologies to be able to detect and respond quickly to a data breach.
  • Mature programs are more likely to participate in sharing information about their data breach and incident response experiences with government and industry peers.
  • Mature programs are better prepared to manage an international data breach. Seventy-one percent of respondents in mature programs say their incident response plan includes processes to manage an international data breach.

 For the full results, visit Experian’s website 

Coronavirus could be a tipping point (finally) for telecommuting

Bob Sullivan

Since the 1973 oil embargo, and the nearly concurrent coining of the term “gridlock,” Americans have mused about telecommuting as the solution to many modern ills. When high-speed Internet began making its way into homes in the late 1990s, telecommuting seemed on the verge of a breakout. Why waste time in traffic jams when email can get to your home office just as quickly?  The promise of returning 10 or so hours each week to workers — not to mention dramatic potential savings in office rental costs — sounded irresistible.

Instead, managers seemed too attached to the physical presence of their employees, and some employees wondered if their stay-at-home co-workers were really getting much done in their jammies.  A bit of a backlash emerged after the turn of the century, reaching its apex when Yahoo CEO Marissa Mayer effectively killed that company’s work-from-home program.

So much for leaving rush-hour traffic behind.

Today, a scant 3 percent of Americans telecommute most of the time, according to FlexJobs. That means just about as many Americans will suffer through daily “extreme commutes” — lasting more than 90 minutes, each way — as will take advantage of full-time telecommuting.

The Coronavirus might finally change that.

In reaction to the outbreak’s foothold in Seattle, big tech companies in the Pacific Northwest have quickly adopted telecommuting plans.  Microsoft, Amazon, Facebook, and Google have all told employees to work from home whenever possible, and to stay there for most of March.  So has King County, the local government in the Seattle area.  Fred Hutchinson Cancer Research Center told many of its employees they have no choice — they must work at home.

Early 2020 might turn into a forced social experiment that could finally answer the question: Do we need rush hour any more?

“While about 50% of people work from home at least half the week on a regular basis, we still see that only about 3-4% work from home full-time. Now, because of the coronavirus, we’re seeing a real focus on remote work that may very well be a tipping point in terms of wider-spread adoption of full-time remote work,” said Brie Weiler Reynolds, Career Development Manager and Coach at FlexJobs. “It seems that, in this latest situation, companies have more easily jumped to remote work as one big solution to keep employees safe, maintain continuity of operations, and handle the uncertainty day by day.”

Of course, not everyone can work from home. Bus drivers and security workers, for example, must remain at their posts. The Seattle Times has an important story about this newly and rapidly forming digital divide.  That group cannot be ignored in this social experiment.

But it’s hard not to imagine Seattle companies might get used to all those empty desks, not to mention emptier highways, and with new work patterns in place, find a way to continue their ad-hoc work-from-home arrangements long-term. It’s a stretch to look for silver linings in today’s climate, but climate researchers have found one in China: air pollution has plummeted there during the crisis. It’s easy to imagine that kind of unintended consequence in Seattle as well, as thousands of cars are taken off the road and gridlock is reduced.

Widespread adoption of telecommuting holds out big promises, FlexJobs says: 124 billion fewer car miles driven annually, 8 billion fewer trips, an $8 billion reduction in auto accident costs, and 54 million fewer tons of greenhouse gas emissions.

While most companies are sensibly making only short-term plans right now, Weiler Reynolds expects virus-related work-from-home arrangements will likely last well past the end of March.

“Because the virus’s threat is ongoing and it’s hard to predict how long things may stay this way, we may see companies using remote work daily for the coming weeks or months, and realizing that it’s actually a productive, effective way to work over a long term basis,” she said.

Privacy worries not slowing shift to the cloud (but concerns linger)

Larry Ponemon

The Ponemon Institute is pleased to present the findings of Data Protection and Privacy Compliance in the Cloud, sponsored by Microsoft. The purpose of this research is to better understand how organizations undergo digital transformation while wrestling with the organizational impact of complying with such significant privacy regulations as the GDPR. This research explored the reasons organizations are migrating to the cloud, the security and privacy challenges they encounter in the cloud, and the steps they have taken to protect sensitive data and achieve compliance.

The Ponemon research qualified 1,049 IT and IT security participants from the United States and the European Union (EU). All are familiar with their organization’s approach to privacy and data protection compliance and are responsible for ensuring that personal data is protected in the cloud environment. Fifty-five percent of respondents operate a cloud infrastructure with one primary cloud service provider; 45 percent operate in multiple or hybrid cloud environments.

Privacy concerns are not slowing the adoption of cloud services. The importance of the cloud in reducing costs and speeding time to market seems to override privacy concerns. Only one-third of US respondents and 38 percent of EU respondents say they have stopped or slowed their adoption of cloud services because of privacy concerns.

Most privacy-related activities are easier to deploy in the cloud. These include such governance practices as conducting privacy impact assessments, classifying or tagging personal data for sensitivity or confidentiality, and meeting legal obligations, such as those of the GDPR. However, managing incident response is considered easier to deploy on premises than in the cloud.

At the same time, most organizations lack confidence in, visibility into, and a clear delineation of responsibility for managing privacy in the cloud.

  • Despite the anticipated increase in the importance of the cloud in meeting privacy and data protection objectives, 53 percent of US and 60 percent of EU respondents are not confident that their organizations currently meet privacy and data protection requirements. This lack of confidence may be because most organizations are not vetting cloud-based software for privacy and data security requirements prior to deployment.
  • Organizations are reactive and not proactive in protecting sensitive data in the cloud. Specifically, just 44 percent of respondents are vetting cloud-based software or platforms for privacy and data security risks, and only 39 percent are identifying information that is too sensitive to be stored in the cloud.
  • Just 29 percent of respondents say their organizations have the necessary 360-degree visibility into the sensitive or confidential data collected, processed, and/or stored in the cloud. Organizations also lack confidence that they know all the cloud applications and platforms they have deployed.
  • In most organizations, the IT security and compliance teams are not responsible for ensuring security safeguards and compliance with privacy and data protection regulations. Thirty-six percent of respondents expect the cloud service provider to ensure the security of SaaS applications; in contrast, 46 percent say the organization itself is responsible. Further, privacy and data protection teams are rarely involved in evaluating cloud applications or platforms when they are under consideration. Almost half of respondents (49 percent) rarely or never determine whether certain cloud applications or platforms meet data protection and privacy requirements.

Part 1: Privacy concerns are not slowing migration to the cloud, but organizations struggle to ensure the protection of data

Cloud services or platforms are used to achieve faster deployment and reduce costs. Faster deployment time and lower costs are the top two reasons respondents cite for using cloud services and platforms.

Cost savings, scalability, and faster time to market are the top reasons for migrating to the cloud: 67 percent of respondents agree that migration results in cost savings, and 64 percent agree that it enables scalability and faster time to market. More than half (54 percent) of respondents believe migration will improve security and privacy protections.

There is no consensus about who is responsible for addressing privacy and data protection requirements. Respondents were asked who in their organization would be most responsible for ensuring that SaaS and PaaS applications meet privacy and data protection requirements. Some assign this responsibility to the cloud service provider; some say the company and the cloud service provider share it; others allocate it within the company among end users and IT.

The importance of both SaaS and PaaS in meeting privacy and data protection objectives will increase significantly — 64 percent of respondents say that deploying SaaS will be essential or very important in meeting privacy and data protection objectives over the next two years. Fifty-three percent of respondents say using PaaS will be essential or very important.

Respondents are not confident that their current use of SaaS and PaaS meets privacy and data protection requirements. The majority lack confidence in both, with PaaS faring worse: 60 percent of respondents lack confidence in its privacy and data protection capabilities.

Confidence in SaaS and PaaS applications is low because most organizations are not vetting them for privacy and data security requirements prior to deployment. Fifty percent of respondents say their organizations do not vet SaaS applications before deployment, and 58 percent say PaaS resources are not being vetted.

To read the rest of this study, visit Microsoft’s website.

Plastic surgeon’s patients extorted by hackers, as ransomware gangs ramp up dual-threat hacks

Bob Sullivan

When the Center for Facial Restoration announced it had been hit by ransomware recently, the attack might have sounded like just another expensive cyber incident for a small business. But the hack of the rhinoplasty practice near Miami included a darker threat: the criminals added a second potential revenue stream to their enterprise — extorting patients by threatening to release potentially embarrassing photos.

So in addition to worrying about restoring data that had been encrypted with malware, Dr. Richard E. Davis had to worry about the publication of before-and-after photos that might humiliate patients.

This dual threat — criminal hackers stealing data before they scramble it with ransomware — parallels the recent global incident involving currency exchange company Travelex.  It’s a disturbing new trend among computer criminal gangs.

When the Center for Facial Restoration announced on its website that it had been hit by ransomware, the notice included this chilling warning.

“(Hackers) demanded a ransom negotiation, and as of November 29, 2019, about 15-20 patients have since contacted (the firm) to report individual ransom demands from the attackers threatening the public release of their photos and personal information unless unspecified ransom demands are negotiated and met,” the warning said. “I filed a formal complaint with the FBI Cyber Crimes Center and two days later met with the FBI where they recorded detailed information regarding the cyberattack and ransom demands. The investigation is currently ongoing.”

It’s easy to imagine the seriousness of that kind of threat. On its website, the center says it specializes in repairing prior rhinoplasty — or “nose job” — surgeries that left patients unsatisfied.

“Do you avoid cameras or social situations? Let cosmetic rhinoplasty restore your self confidence with a natural-looking, attractive nose that suits your face,” the website says. “Get ready to look at the camera and smile.”

The firm did not immediately respond to a request for comment, so it’s unclear whether more patients have been threatened with extortion. But Davis told HealthITSecurity.com that he hopes the damage was limited by recent security upgrades.

“While upgrading my defenses clearly won’t help those individuals whose data has already been stolen, there is reason to suspect that the theft of patient photographs may be limited to only a very small number of individuals – mostly those patients who used email to send or receive their photographs – so the upgrades may prove useful,” Davis said.

But the trend has security professionals worried.

“At least one other ransomware group is also routinely stealing data prior to encrypting it: Maze,” said Brett Callow, a threat analyst who studies ransomware for security firm Emsisoft. “This is a recent and concerning development, especially given how susceptible the public and private sectors seem to be to ransomware attacks.”

The double-whammy of ransomware and data breach can leave victim firms scrambling to respond.

“An organization whose data is stolen has no good options available,” Callow said. “Refusal to pay will probably result in the data being published; payment will get them a pinky promise that the data will be deleted. And, as that pinky promise is being made by a criminal enterprise, it carries very little weight.”

Emsisoft’s 2019 report on ransomware victims found that nearly 1,000 government agencies, non-profits, and medical organizations were victims of such criminal attacks last year — and there is no indication the attacks are slowing down. The dual threat gives small organizations something else to worry about.

“I am dismayed to report (our office)… was the victim of a criminal cyberattack,” Davis says on his website. “I deeply regret that individuals currently or formerly under my care have been victimized by this criminal act, and I urge you to monitor your financial information closely. … I am sickened by this unlawful and self-serving intrusion, and I am truly very sorry for your involvement in this senseless and malicious act.”