Global Study on Zero Trust Security for the Cloud

Implementing Zero Trust security methods doesn’t just safeguard hybrid cloud environments, but actually enables—and likely even accelerates—cloud transformation, according to a survey of nearly 1,500 IT decision makers and security professionals in the U.S., Europe and the Middle East (EMEA) and Latin America (LATAM).

The survey, conducted by Ponemon Institute on behalf of Appgate, the secure access company, reveals a clear link between the implementation of Zero Trust security measures to mitigate distributed IT infrastructure risks and the realization of cloud transformation objectives.

Different cloud environments, but consistent motivations

This report presents consolidated global findings and insights from the research. According to the study, there is enormous cloud environment diversity in respondents’ organizations. Specifically, there are varied mixes of public/private clouds and on-premises infrastructure, different adoption rates for containers and disparate portions of IT and data processing in the cloud. However, as the research reveals, the drivers of cloud investments are broadly consistent from region to region.

Overall, increasing efficiency is the top motivation for cloud transformation, according to 62 percent of respondents. The second most common motivation is reducing costs (53 percent of respondents) followed by a virtual tie between improving security (48 percent of respondents) and shortening deployment timelines (47 percent of respondents).

New cybersecurity risks not addressed by traditional solutions

But cloud transformation has its own set of security risks and challenges. In fact, nearly 50 percent of respondents flag network monitoring and visibility difficulties as the most significant challenge, followed by a lack of in-house expertise (45 percent) and a recognition of the increased attack vectors that come with having more resources in the cloud (38 percent).

Focusing on specific security threats, 59 percent of study participants indicate account takeover or credential theft is a major concern, just ahead of third-party access risks. This points to widespread worries about secure access to cloud resources by an organization’s users and outside vendors/suppliers alike.

Addressing cloud security risks is a known hurdle, with 36 percent of respondents reporting that the siloed nature of traditional security solutions creates cloud integration challenges. Modern “shift left” development methodologies only partially address the issue and may even add new risks into the mix. For instance, 52 percent of respondents agree or strongly agree that the inability of current network security controls to scale fast enough affects DevOps productivity or introduces vulnerabilities.

Zero Trust Network Access (ZTNA) offers a proven solution

The research also reveals that Zero Trust Network Access (ZTNA) is a practical solution to cloud security pain points poorly addressed by the over-privileged access approach of siloed solutions and traditional perimeter defenses. As evidence, the top two security practices identified as being the most important to achieving secure cloud access are enforcing least privilege access (62 percent of respondents) and evaluating identity, device posture and contextual risk as authentication criteria (56 percent of respondents).

Ranking third and fourth are a consistent view of all network traffic across IT environments (53 percent of respondents) and cloaking servers, workloads and data to prevent visibility and access until the user or resource is authenticated (51 percent of respondents). The robust capabilities of ZTNA directly address all four of these major cloud security practices deemed necessities.

Zero Trust is a victim of its own success

The survey also hints that Zero Trust security may be dismissed by some as a buzzword despite high-profile industry calls for action, including a U.S. White House mandate for federal agencies to meet a series of Zero Trust security requirements by 2024. However, there is evidence that this dismissal is based on a poor understanding of what Zero Trust actually is. For example, roughly a quarter of respondents who have not deployed Zero Trust dismiss it as being “just about marketing.” Yet many of these same respondents highlight specific ZTNA capabilities as being essential to protect cloud resources.

Similarly, many of the respondents who indicate that their organizations are not implementing Zero Trust nevertheless believe that security components that strongly align with Zero Trust security principles are important. This further indicates the confusion about what Zero Trust security actually means.

Those who have knowingly adopted Zero Trust tenets (49 percent of respondents) report a range of benefits. Of this group, 65 percent say the top benefit is increased productivity of the IT security team, followed by stronger authentication using identity and risk posture (61 percent) and a tie between increased productivity for DevOps and greater network visibility and automation capabilities (both 58 percent).

Zero Trust is an enabler not an add-on

These benefits suggest that Zero Trust goes beyond “simply” protecting valuable data and mission-critical services within hybrid cloud environments. In fact, it can drive enterprise productivity gains and accelerate digital transformation. In other words, Zero Trust security principles shouldn’t be regarded as something to bolt on after completing a cloud migration, but as something that speeds up and secures the transformation itself.

Ultimately, the speed of business is only going to continue to accelerate the adoption of cloud, containers, DevOps and microservices. Zero Trust security can help organizations quickly and securely keep pace with agile cloud deployments. A comprehensive ZTNA solution is the unified policy engine that delivers secure access for all users, devices and workloads, regardless of where they reside. The cloud train has left the station and continues to accelerate without regard for increased risk and security complexity. The results of this study demonstrate that Zero Trust security can help security teams keep pace.

To read the entire study, download it from the website.

To hear Larry on a podcast discussing the study, visit the Zero Trust Thirty podcast.

Zelle might change long-standing unfair policy on fraud refunds — now, onto the rest of our Too Big to Scale problems

Bob Sullivan

Zelle, a favorite tool for online criminals, *might* begin protecting users from scams soon.  Victims who report they’ve been “robbed” by thieves on the service have long been denied dispute rights we take for granted with other kinds of electronic transactions.  Recently, banks leaked a plan to the Wall Street Journal that would reverse this position. According to the story, banks that give an account to a criminal and receive stolen funds would be forced to refund the victim’s bank, which would then refund the victim. This is great news. It would bring P2P payments out of the dark ages.  It would let Zelle thrive the way zero-liability policies turbocharged the credit/debit card market. More important, it would force banks to invest much more time and money into spotting and stopping criminals, since they’d be on the hook for losses.

For now, it’s just a story in the Wall Street Journal — and The New York Times, which really deserves credit for dragging Sen. Elizabeth Warren and her hearing-shaming tactics into this fight.  There’s always the chance this is a stalling tactic. The Consumer Financial Protection Bureau is currently weighing rules that would impose this kind of liability on Zelle-member banks, and it’s long been theorized that banking regulators are weighing a make-a-point lawsuit against Zelle. So don’t count your chickens yet.  But critically, if you are one of the thousands of Zelle victims who’ve reached out to me through the years, keep those records handy.  I doubt banks would make this new policy retroactive on day one, but there may very well be legal opportunities to force their hand.

Don’t expect banks to give up this issue without a fight, however. Zelle is a consortium of the world’s largest banks, and it has been resisting this obvious step for years.  The first time I met with a Zelle representative was in 2018, around the time I’d done a series of stories with devastating examples of Zelle victims.  Creating credit-card-like consumer protections seemed off the table then. And as recently as October, the American Bankers Association drafted a letter to the CFPB opposing any new regulation, claiming it would effectively kill Zelle’s business model.  It’s a manifesto that could apply to any attempt at making banks behave better. Here are some greatest hits from that letter with my notes.

  • In a section arguing why irreversibility — criminals’ favorite feature — is essential to Zelle, the ABA says: “Consumers value the fact that P2P payments are made quickly—and importantly—cannot be reversed. … The finality of payment means recipients can confidently use the money as soon as it is received.” But one paragraph earlier, the ABA writes that Zelle should be used  “to pay the babysitter, lawn mower, or handyman, to send money to a college student, or to repay a friend for dinner or concert tickets.”  Maybe bankers have bad friends and scheming babysitters, but I don’t worry too much about my friends reversing my $40 Zelle payments after lunch.
  • “Banks may also have to consider placing ‘holds’ on money sent by P2P, which would fundamentally alter the value and appeal of the ‘faster payment’ product that consumers have overwhelmingly indicated they want.” I’d like to see research on that. People want banks that are safe, first and foremost.  But more to the point, I’d love to examine Zelle’s transaction data, because I have a sneaking suspicion that the vast majority of funds never leave Zelle’s ecosystem.  That is — the $45 you pay a buddy for dinner stays in her Zelle account until she pays $30 to her friend for happy hour next Tuesday.  Speed is not of the essence in those transactions.  This is mere pixel placeholding. I’ve long advocated for a delay when transactions exceed a reasonable threshold — say $200? — or maybe anything that’s 500% more than your typical transactions. Such a threshold would CLEARLY communicate what banks obliquely say in their disclaimers, that Zelle should only be used for friends, family, etc.
    At any rate, there *is* often a delay when consumers try to actually get their money out of P2P apps. It costs up to $25 to get an ‘instant’ transfer from Venmo, otherwise there’s a 1-3 day delay.
  • Banks curiously argue that increasing consumer rights will lead to more fraud. “Shifting liability to banks for authorized but fraudulently induced transactions also will increase scams and embolden scammers. Armed with a written federal government policy stating that consumers are entitled to a return of money sent to scammers, scammers will be better able to induce consumers to send money. They will assure them that there is no downside or risk in sending the money because the bank will reimburse them.”
    This is akin to the Sam Peltzman-like argument that seat belts actually make people less safe because they drive more dangerously. The grain of truth in this argument is swallowed up by real-world data and experience showing that banking professionals are in a far better position to stop fraud than amateur consumers just trying to give each other IOUs and have another drink.
  • This argument shows there are no limits to the strained logic banks are willing to attempt in defense of their scam-infested software.  If higher fraud controls were in place, there would be false positives, and banks would wrongly deny some legitimate transactions. True. But the ABA warns of dire consequences: “For example, a bank might face liability based on the consumer’s claim that the failure to send money caused the consumer to miss out on a profitable investment or purchase opportunity.” If banks are ready to admit their liability for causing lost time and opportunity, I’d think consumers who were wrongly denied loans would get the first number in that massive lawsuit.
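The delay-above-a-threshold idea I floated above is simple enough to sketch. This is purely illustrative: the $200 cap and 500% multiple are my suggested numbers, not anything banks actually use, and a real system would obviously weigh far more signals.

```python
def should_hold(amount, recent_amounts, flat_cap=200.0, spike_ratio=5.0):
    """Illustrative P2P 'hold' rule: delay a transfer that tops a flat cap
    or dwarfs the sender's typical transaction size. The $200 cap and 500%
    multiple are the numbers floated in the text, not real bank policy."""
    if amount > flat_cap:
        return True
    if recent_amounts:
        ordered = sorted(recent_amounts)
        typical = ordered[len(ordered) // 2]  # median of recent sends
        if typical > 0 and amount > spike_ratio * typical:
            return True
    return False

# A $45 dinner repayment sails through; a $2,500 "emergency" gets a hold.
print(should_hold(45, [40, 35, 50]))     # False
print(should_hold(2500, [40, 35, 50]))   # True
```

Even a crude rule like this would slow the classic scam pattern, where a criminal pressures a victim into sending an amount wildly out of line with how they normally use the app.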

You can see why Ed Mierzwinski of the Public Interest Research Group dismissed the ABA’s position as farcical in a recent blog post. “Fire, brimstone, higher costs and other signs of the apocalypse are standard fodder for any industry screed against needed regulation, so I’m not surprised,” he wrote this week.

Specious arguments aside, I’d like to focus on what the ABA says quite plainly in its manifesto against fixing Zelle. Fraud on the service is “de minimis.” As in, “too trivial to merit consideration.” Yup. That’s you, bank customers. Too trivial to merit consideration. I’ve written about an elderly woman who didn’t even know she had a Zelle account and had $23,000 stolen from her — about a widow who had every penny of her small business loan stolen via Zelle — about a woman who donated to a friend who needed a kidney transplant, had her account drained, and was forced to make a ‘hostage video.’ In each case, and in hundreds more, banks denied legitimate fraud claims.  Claims that devastate real human beings.  They are all “de minimis.”  Google “Zelle fraud” now, or search Twitter. You won’t be able to read all the results you get.

All those victims are “de minimis.” Too trivial to merit consideration.

And so, dear reader, are you. That’s what passes for a business model in the age of Gotcha Capitalism.  Become as large as possible as fast as possible, and dismiss the collateral damage as de minimis.

I’m belaboring the point because I’ll say to anyone who will listen nowadays — poor customer service is our greatest security vulnerability. Mistreated consumers have become a favorite vector for criminals.  People pay hackers to get their hijacked Instagram accounts back. They pay bots to get a spot on the IRS telephone helpline.  And criminals use this frustration as an easy way to hack corporate networks. Why guess usernames and passwords when you can simply enlist disgruntled consumers to steal for you? The ABA basically admits this.

“It is difficult to persuade customers not to send the money because criminals have coached them not to contact or trust the bank,” the ABA writes. Exactly.  Consumers trust random callers rather than their banks when faced with a critical choice. Maybe that sounds absurd until you read the story of a woman who was in the middle of a Zelle scam, walked into a bank, and couldn’t even get help when she put the criminal on speakerphone in the bank lobby.

That’s what a “de minimis” world gets you.  We live in a world where most businesses are Too Big to Scale. They just can’t reasonably service their customers. They use technology to feign a token effort (“Try our self-service app”) but when anything really goes wrong, you’re screwed.  And, you’re de minimis. That is, digital roadkill. There’s no human who can make a reasonable good-faith judgment on your issue; there’s only a software-driven infinite loop saying NO.  Professionalism, morality, the natural human urge to intervene when human suffering lands at your door — these have all been downsized out of the system. “Sorry, grandma, your life savings is gone and we can’t do anything.  Now, how would you rate this customer service interaction? 5 stars? Would you like to apply for an auto loan?”

Too Big to Scale is a problem I will be writing about more in the coming weeks and months. For now, delight in small victories. The Zelle network might do the right thing.  That’s very good news.  Thank a journalist like Stacey Cowley if you get the chance.

If time is money, vulnerability backlog is really expensive

Sponsored by Rezilion, the purpose of this research is to understand the state of organizations’ DevSecOps efforts to manage vulnerabilities throughout the software attack surface. Ponemon Institute surveyed 634 IT and IT security practitioners who are knowledgeable about their organizations’ attack surface and effectiveness in managing vulnerabilities.

All organizations have adopted DevSecOps or are in the process of adopting a DevSecOps approach. According to the research, the lack of the right security tools is the primary barrier to an effective DevSecOps program. This challenge is followed by a lack of workflow integration and the growing vulnerability backlog.

In this research, we have defined DevSecOps (short for development, security and operations) as the automation of the integration of security at every phase of the software development lifecycle from initial design through integration, testing, deployment and software delivery.

At the heart of having a successful vulnerability management program is alignment between DevSecOps and the development team in being able to achieve both innovation and security when delivering products. Only 47 percent of respondents say their organizations’ development team delivers both an enhanced customer experience and secure applications and 53 percent of respondents are concerned that the lack of visibility and prioritization in DevOps security practices puts product security at risk.

Fifty-five percent of respondents say their development engineers, product security teams and compliance teams are aligned to understand their organizations’ security posture and each other’s area of responsibilities to deliver secure products.

The following are key takeaways from the research.

The two primary reasons to adopt DevSecOps are to improve collaboration between development, security and operations and to reduce the time to patch vulnerabilities, according to 45 percent of respondents. In addition, 41 percent of respondents say it automates the delivery of secure software without slowing the software development lifecycle (SDLC).

Almost half of respondents say their organizations have a vulnerability backlog. Forty-seven percent of respondents say that in the past 12 months their organizations had applications that were identified as vulnerable but not remediated. On average, 1.1 million individual vulnerabilities were in this backlog in the past 12 months and an average of 46 percent were remediated. However, respondents say their organizations would be satisfied if 29 percent of vulnerabilities in a year were remediated.

“This is a significant loss of time and dollars spent just trying to get through the massive vulnerability backlogs that organizations possess,” said Liran Tancman, CEO of Rezilion, which sponsored the research. “If you have more than 100,000 vulnerabilities in a backlog, and consider the number of minutes that are spent manually detecting, prioritizing, and remediating these vulnerabilities, that represents thousands of hours spent on vulnerability backlog management each year. These numbers make it clear that it is impossible to effectively manage a backlog without the proper tools to automate detection, prioritization, and remediation.”

The inability to prioritize what needs to be fixed is the primary reason vulnerability backlogs exist, according to 47 percent of respondents. Other leading reasons are not having enough information about the threats that would exploit vulnerabilities (45 percent of respondents) and the lack of effective tools (43 percent of respondents).

Forty-seven percent of respondents say their organizations have adopted a shift right strategy, which enables continuous feedback from users. Fifty-one percent of respondents believe a shift right strategy empowers engineers to test more, test on time and test late.

Organizations are slightly more effective in prioritizing their most critical vulnerabilities than patching vulnerabilities. Fifty-two percent of respondents say their organizations’ prioritization of critical vulnerabilities is very effective but only 43 percent of respondents say timely patching is highly effective.

Vulnerability patching is mostly delayed because of the difficulty in tracking whether vulnerabilities are being patched in a timely manner. Difficulty in tracking (51 percent of respondents) is followed by the inability to take critical applications and systems off-line so they can be patched quickly (49 percent of respondents).

Automation significantly shortens the time to remediate vulnerabilities. Fifty-six percent of respondents say their organizations use automation to assist with vulnerability management. Of these respondents, 59 percent say their organizations automate patching, 47 percent say prioritization is automated and 41 percent say reporting is automated. Each week, the IT security team spends most of its time on the remediation of vulnerabilities. Sixty percent of respondents with automation say it significantly shortens the time to remediate vulnerabilities (43 percent) or slightly shortens the time (17 percent).

DevOps is an approach based on lean and agile principles to deliver software rapidly, enabling organizations to quickly seize market opportunities. Fifty-one percent of respondents say they have some involvement in their organization’s DevOps activities. Fifty-two percent of these respondents say they are involved in vulnerability management and 49 percent say they are involved in application security.

Certain features are important to creating secure applications or services. Sixty-five percent of respondents say the ability to perform tests as part of the workflow instead of stopping, testing, fixing and restarting development is very important and 61 percent of respondents say automating vulnerability scanning and remediation at every stage of the SDLC is very important.

The inability to quickly detect vulnerabilities and threats is the number one reason vulnerabilities are difficult to remediate in applications. Sixty-one percent of respondents say it is very difficult or difficult to remediate vulnerabilities in applications. Remediation is difficult because of the inability to quickly detect vulnerabilities and threats (55 percent of respondents) and the inability to quickly patch applications in production (49 percent of respondents), followed by the lack of enabling security tools (43 percent of respondents).

More than half of organizations focus only on those vulnerabilities that pose the most risk. Fifty-three percent of respondents believe it is important to focus on only those vulnerabilities that pose the most risk and not on remediating all vulnerabilities. Forty-nine percent of respondents say their organization remediates all vulnerabilities because it does not know which ones pose the most risk.

Testing applications and keeping an inventory of business-critical applications are steps that have been fully or partially implemented. To manage vulnerabilities, 45 percent of respondents say their organizations test the application for vulnerabilities using automation and 44 percent of respondents say their organizations have created and maintained an inventory of applications and assess their business criticality.

Software Bill of Materials (SBOM) is a list of components in a piece of software. Software vendors often create products by assembling open source and commercial components. The SBOM describes the components in the product. A dynamic SBOM is updated automatically whenever a release or change occurs. Forty-one percent of respondents say their organizations use SBOM. Risk assessment and compliance with regulations are the top two features of these organizations’ SBOMs. While 70 percent of respondents say continuous automatic updates are important or very important, only 47 percent say their SBOM features continuous updates.
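To make the concept concrete, here is a minimal SBOM in the CycloneDX JSON format. This is an illustration, not an artifact from the study, and the two components listed are hypothetical examples; matching entries like these against vulnerability advisories is what enables the risk assessment and compliance uses respondents cited.

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "openssl",
      "version": "3.0.8",
      "purl": "pkg:generic/openssl@3.0.8"
    },
    {
      "type": "library",
      "name": "log4j-core",
      "version": "2.17.1",
      "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.17.1"
    }
  ]
}
```

A dynamic SBOM regenerates a document like this on every build or release, so the component list never drifts from what is actually deployed.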

The growing software attack surface is a high concern. Seventy-one percent of respondents say their organizations are very or highly concerned about risks created by the growing software attack surface, and an even higher percentage (77 percent) believe securing it is very or highly important.

Despite the concerns, most organizations are not effective at either knowing the attack surface or securing it. Only 43 percent of respondents say their organizations are highly effective in securing the attack surface, and only 45 percent say they are effective in knowing it.

Eliminating complexity and eliminating exploitable vulnerabilities are the most important steps to safeguard the attack surface. Sixty percent of respondents say eliminating complexity in the software attack surface will reduce threats, and 56 percent say eliminating exploitable vulnerabilities will. This is followed by knowledge of all software components (51 percent of respondents). Only 26 percent of respondents say regular network scans reduce threats.

To read the complete results of the survey, visit the Rezilion website.

When my smartphone was stolen, Instagram (and 2FA) was the worst part

Why am I holding this odd-looking sign in what looks a lot like a mug shot? Because my cell phone was stolen recently.  And the worst part of that experience has been dealing with … Instagram.  As I’ve written before, poor customer service is actually a massive security vulnerability, and I think my story will illustrate that.  But if you don’t care for those details, at least scroll down to watch me and my dog struggle to submit a selfie video so I could attempt to regain access to @RustyDogFriendly. It’s worth the price of admission. (And, sadly, it did not work. Many, many times.)

Many years ago I was scared straight on two-factor authentication when I was working on a documentary podcast about Russian hackers and I received a notification from Facebook that someone in St. Petersburg, Russia, had tried to hack into my Instagram account.  I was already pretty careful with my work and banking accounts, but now I put two-factor on everything I could.

And I didn’t opt for the less-secure SMS text-message-code style two-factor. I went with the stronger token-based model.  I installed Google Authenticator on my phone and used its mathematically-generated codes as my second step when logging into all my various accounts. Even my @bobsulli Instagram, used mainly for my photography hobby, and @rustydogfriendly, where fans of my beloved golden retriever could get their fix of Rusty. (Long-time readers know Rusty has enjoyed his own time in the media spotlight from a story I wrote for the Today show).

That worked well until my cellphone was stolen while traveling last week.  Everyone understands the hassle that usually brings. I was actually fortunate. I have insurance, so after a $230 deductible, I received a replacement phone and all my data is backed up, so I didn’t really lose anything.

Except my sanity, as I tried to log back into sites where I had employed Google Authenticator.  You see, there is no way to restore that. When your phone gets stolen, your token math is gone. There’s no way to import the old math into the new phone without access to the old phone; at least none I am aware of. So every site that required an Authenticator code now required an alternative sign-in process.  The good news is: None of them were particularly easy. I wouldn’t want that! After all, what good is two-factor authentication if someone can just say, “I forgot my password” and sign in with a new one?
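For the curious, the “token math” is a time-based one-time password (TOTP, defined in RFC 6238): the app stores a shared secret on your phone and mixes it with the current time to produce each six-digit code. This minimal Python sketch (illustrative; authenticator apps perform the same computation) shows why a new phone can’t help: without the original secret, the codes are unrecoverable.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 of the 30-second time-step counter,
    dynamically truncated to a short numeric code."""
    key = base64.b32decode(secret_b32, casefold=True)  # secret enrolled via the QR code
    t = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(t // step))        # which 30-second window we're in
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                         # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# The RFC 6238 test secret, base32-encoded. Lose the phone, lose `key`,
# and no new device can ever regenerate these codes.
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"))
```

This is also why the backup codes sites hand out at enrollment matter so much: they are the only escape hatch the protocol itself offers.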

So I went through various alternative means of logging in…many involved using other gadgets or laptops where  I was already logged into these accounts and answering various questions.  Most sites that use tokens offer the chance to download a series of one-time backup codes designed for such an emergency, which I had done in most cases. Of course, many of the codes date back to those frantic moments five years ago when I was preparing for a possible Russian hack, so they weren’t necessarily easy to find.  But, I muddled through.

Until I got to Instagram.

I’ve written a lot recently about the problems Instagram is having with hackers. Well — in my view — the real problem is Instagram’s customer service failures. It’s easy to find horror stories about Instagram users who’ve had their accounts hijacked — then, those impersonator accounts are used for ongoing crimes, like crypto scams — and the victims are unable to even get the accounts turned off, let alone restored to the rightful owner.

So I shouldn’t have been surprised when attempts to log into my Instagram accounts with a brand new phone — and without my Authenticator — ran into roadblocks worthy of Fort Knox.  Let me be clear: I am glad Instagram makes it hard to log into my accounts from a strange new cell phone. Kudos to them for making this challenging.  But…when challenging becomes impossible, something else becomes clear. Their security implementation is a failure. And as a result, I can no longer recommend that Instagram users employ strong two-factor authentication, because you may very well be signing your account’s death warrant that way.

My @BobSulli account is much older, and I occasionally use it for professional purposes — I was among the first Instagram users, relatively speaking — so I dived into that problem first. I asked for a password reset at the email address on file. That worked. I tried to log in. I couldn’t without an Authenticator code.  I asked for an alternative.  I entered the backup codes I had. That didn’t work. I felt desperate.

I should note that every one of these interactions with Instagram came with a subject line “Hi BobSulli — we’ve made it easy to get back on Instagram.”

So I asked the software one more time — isn’t there any alternative?

I was then asked if pictures of myself were in the account. Of course!  So then I was told to make a “selfie video.”  Great!  Someone — or  something — was going to look at this video, compare it to the 500 other pictures of me, and override whatever system was blocking me from the account. Perhaps (hopefully?) after a phone call or some other final check of who I was.  Following the instructions, I looked right, looked up, looked left….and then submitted it.  I was told it might take three or four days. Bummer, but worth it to have this piece of the puzzle solved!

Within minutes, I had my answer.

“We weren’t able to confirm your identity from the video you submitted. You can submit a new video and we’ll review it again,” a sad email said.

That was fast, I thought. It’s probably a machine.  So I set about running around my home trying to find the same lighting as my profile photo. Even the same suit jacket I wore.  I submitted several selfie videos over the next few hours.  All of them were rejected.

The only other alternative I was offered was … no alternative.  Just a link to a help center page that, predictably, offered no help at all.  I was at a dead end.

But then I had one more thought.  I had an old iPod touch. Perhaps I had logged into my account from that device and Instagram would recognize it through device fingerprinting or something similar and I’d at least get a different alternative.  Bang!  This time, when I entered my password, failed the Authenticator test, and begged for help, I was presented with a form to fill out. I did so, and received a hopeful — if odd — response in email.

“Thanks for contacting us. Before we can help, we need you to confirm that you own this account. Please reply to this message and attach a photo of yourself holding a hand-written copy of the code below…. 6XXXX…Please make sure that the photo you send includes the above code hand-written on a clean sheet of paper, followed by your full name and username …Clearly shows both the code and your face.”

So, that’s where the mugshot photo above comes in. I sent it in.  To shorten the story a bit, that worked.  Within a few hours, I had access to @BobSulli!  That was the hard part, I thought.  There was just one more thing to do to recover from the awful experience of having my smartphone stolen — recover my dog’s account, @rustydogfriendly.

And that, dear readers, has proven to be my Waterloo.

Because when the time came to submit a selfie video for his account…that didn’t go quite as well, as you can see in the embedded YouTube video below.

I know what you’re thinking: I’ve already tried a selfie video of just me. That didn’t work either. I’ve tried about 10 different variations. Each time, the video is rejected. I actually had a reasonable dialog with the folks who helped me log into @BobSulli over email. I pleaded with them to look at @rustydogfriendly. The accounts are linked in both their bios!  It’s obvious we are connected! I sent a list of pictures with both me and him together! The person(s) on the other end of the keyboard kept telling me they could only help with my @BobSulli account. I begged for an alternative. The “line” went dead.

So I am stuck in a perpetual loop, as the geeks say. I log in, I’m asked for an Authenticator code, I say I don’t have it, I’m asked for a backup code, I try it, it fails, I ask for an alternative, I’m asked to make a selfie video, it fails, and then my only option is to make another selfie video.

The backup codes I have for @rustydogfriendly were downloaded the day I opened the account. Why don’t they work? I don’t know. Perhaps they have expired.  I don’t recall ever using them, but who knows. It was four years ago.

So I am stuck. Rusty is usually a big hit on Halloween, but I’ve now written that holiday off.  Perhaps there is some other route to logging in that I’ve missed, and for that I’m sorry, but I believe I’ve taken every step a reasonable consumer would take in my situation — and a few that only a cybersecurity journalist would take — and I have nothing but a zombie account to show for it.  And a belabored blog post that many of you probably have not finished reading.

But I belabor the point because it’s important: when security isn’t accompanied by customer service, it’s a failure. Poor customer service is, I believe, our greatest security vulnerability. Two-factor authentication is ESSENTIAL.  Many people use text-message-based authentication, and I’ve been telling anyone who will listen that it’s now failed. It’s too easy for criminals to intercept those texts or obtain them in other ways.  So I’ve been urging banks and other institutions to force consumers into Authenticator or other software-based tokens instead. They are much safer.

But I can no longer do this in good conscience. Because if there is no plan for consumers who lose access to their phones, there is no plan.  I can’t tell you how much of the weekend I spent explaining to various websites — “No, I can’t verify myself via text because….I don’t have texts right now.” And when there is no alternative, the implementation is broken.

Thanks to Instagram, I will forever be gun-shy about two-factor authentication now. And the more stories like this that you hear, the more you will be inclined to turn it off, too.  After all, which risk is higher — a criminal hacking your account or a corporation blocking your access to the account?

Note that I *was* able to restore my other accounts, so clearly the problem is fixable. I will continue to use two-factor everywhere I can. And you should too.  But know this: If Facebook were required to answer the phone — virtually or otherwise — these situations would not arise. Poor customer service gives security a bad name, and that puts us all at risk.

Meanwhile, if anyone has any suggestions for getting back into my dog’s account, I’m all ears.


The 2022 Data Risk in the Third-Party Ecosystem Study

Organizations depend on their third-party vendors to provide such important services as payroll, software development or data processing. However, without strong security controls in place, vendors, suppliers, contractors or business partners can put organizations at risk of a third-party data breach: an incident in which an organization’s sensitive, proprietary or confidential information is stolen not directly from it, but through a vendor’s systems.

Sponsored by RiskRecon, a Mastercard Company, and conducted by Ponemon Institute, the study surveyed 1,162 IT and IT security professionals in North America and Western Europe. All participants in the research are familiar with their organizations’ approach to managing data risks created through outsourcing. Sixty percent of these respondents say the number of cybersecurity incidents involving third parties is increasing. (Click here for a link to the full study.)

We define the third-party ecosystem as the many direct and indirect relationships companies have with third parties and Nth parties. These relationships are important to fulfilling business functions or operations. However, the research underscores the difficulty companies have in detecting, mitigating and minimizing risks associated with third parties and Nth parties that have access to their sensitive or confidential information.

Third-and-Nth party data breaches may be underreported. Respondents were asked to rate how confident their organizations are that a third or Nth party would disclose a data breach involving its sensitive and confidential information.

Only about one-third of respondents say that they have confidence that a primary third party would notify their organizations (34 percent) and even fewer respondents (21 percent) say the Nth party would disclose the breach.

Based on the findings, companies should consider the following actions to reduce the likelihood of a third-party or Nth party data breach.

  1. Create an inventory of all third parties with whom you share information and evaluate their security and privacy practices. Before onboarding new third parties, conduct audits and assessments to evaluate the effectiveness of those practices. Yet only 36 percent of respondents say that, before starting a business relationship that requires sharing sensitive or confidential information, their company evaluates the security and privacy practices of all vendors. Organizations should have a comprehensive list of the third parties with access to confidential information and know how many of them share this data with one or more of their own contractors. Identify vendors who no longer meet your organization’s security and privacy standards, and facilitate their offboarding without causing business continuity issues.
  1. Conduct frequent reviews of third-party management policies and programs. Only 43 percent of respondents say their organizations’ third-party management policies and programs are frequently reviewed to ensure they address the ever-changing landscape of third-party risk and regulations. Organizations should consider automating third-party risk evaluation and management.
  1. Study the causes and consequences of recent third-party breaches and incorporate the takeaways in your assessment processes. Only 40 percent of respondents say their third parties’ data safeguards, security policies and procedures are sufficient to prevent a data breach and only 39 percent of respondents say these data safeguards, security policies and procedures enable organizations to minimize the consequences of a data breach. In the past year, breaches were caused by such vulnerabilities as unsecured data on the Internet, not configuring cloud storage buckets properly and not assessing and monitoring password managers.
  1. Improve visibility into third or Nth parties with whom you do not have a direct relationship. More than half (53 percent) of respondents say they rely on the third party to notify their organization when data is shared with Nth parties. A barrier to visibility is that only 35 percent of respondents say their organizations monitor third parties’ data handling practices with Nth parties. To increase visibility into the security practices of all parties with access to sensitive company information – even subcontractors – notification when data is shared with Nth parties is critical. In addition, organizations should include in their vendor contracts requirements that third parties disclose any subcontractors with whom they will share sensitive information.
  1. Form a third-party risk management committee and establish accountability for the proper handling of the third-party risk management program. Many organizations have strategic shortfalls in third-party risk management governance. Specifically, only 42 percent of respondents say managing outsourced relationship risk is a priority in their organization, and only 40 percent say there are enough resources to manage these relationships. To improve third-party governance practices, organizations should centralize and assign accountability for the third-party risk management program and ensure that appropriate privacy and security language is included in all vendor contracts. Create a cross-functional team to regularly review and update third-party management policies and programs.
  1. Require oversight by the board of directors. Involve senior leadership and boards of directors in third-party risk management programs. This includes regular reports on the effectiveness of these programs based on the assessment, management and monitoring of third-party security practices and policies. Such high-level attention to third-party risk may increase the budget available to address these threats to sensitive and confidential information.
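The inventory in recommendation 1 can be as simple as a structured record per vendor plus a filter that surfaces offboarding candidates. A hypothetical minimal sketch — the field names are illustrative, not taken from the study:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ThirdParty:
    name: str
    data_shared: list                   # categories of sensitive data shared
    last_assessment: date = None        # most recent security/privacy assessment
    meets_standards: bool = False       # outcome of that assessment
    subcontractors: list = field(default_factory=list)  # known Nth parties

def offboarding_candidates(inventory):
    """Vendors never assessed, or failing their latest assessment."""
    return [tp.name for tp in inventory
            if tp.last_assessment is None or not tp.meets_standards]
```

A list of such records also answers the Nth-party question directly: vendors with a non-empty `subcontractors` field are the ones whose downstream sharing needs contractual notification requirements.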

To see the full study, visit the website.


Poor customer service is our greatest cybersecurity vulnerability

Bob Sullivan

When Bank of America put Hank Molenaar on hold recently, it told the Houston resident there would be a long wait time and he could press 1 to get a call back instead.  But before the bank called, criminals called, impersonating the bank, and stole his money via Zelle.  It was a Perfect Scam.  And the vulnerability that was exploited? It was poor customer service.

There’s a new, disturbing trend I’ve spotted and it’s time to ring the alarm bell. It’s hard work to hack into a bank and steal money. It’s much easier to enlist real consumers as allies to do it for you.  Theft via scam is on the rise, overtaking traditional identity theft / credential hacking, according to a recent report by Javelin Research & Strategy. Criminals are enlisting the help of account holders and other consumers with all manner of creative cover stories and impersonation schemes — the kind of stories I tell at AARP’s The Perfect Scam podcast. Financial institutions and retail outlets have laid the groundwork for this shift through years of neglectful treatment. When it comes time to make a trust choice — as a consumer, do you trust your bank or the person on the phone telling you a bank insider is stealing your cash? — all these years of mistreatment are forcing victims into the arms of criminals.

That’s what Diane Clements told me during a heart-wrenching interview for The Perfect Scam, a podcast I host. Diane and her husband, Tom, are both retired professors.  They worked their whole lives to build a humble $600,000 nest egg that would fund their retirement.  But when Diane’s computer went ballistic on her recently, and a message popped up telling her to “call Microsoft,” she followed the instructions. Soon, an operator on the other end of the line told her that all her bank accounts were hacked. It was an inside job!  And they wanted Diane’s help catching the bad guys. Diane was already struggling — her breast cancer would soon return, requiring aggressive treatment, and that only increased the frantic nature of these communications with “bank” security officials.  During the next three months, after near daily conversations with a set of online criminals, Diane and Tom slowly moved every penny of that $600,000 into accounts controlled by the criminals, all the while thinking they were helping catch a bank insider committing a crime.

I know it can be hard to understand how these crimes occur, but when you hear Diane tell her story, it makes sense (click here to listen).  The thing that really touched me deeply was the stark contrast Diane experienced when talking with the criminals vs. talking to her bankers during the episode.  The criminals sounded kind, empathetic, thoughtful — while workers at her local bank were downright mean. One even accused her of lying about having cancer during the episode.

“They (were) really mean. They’re rude. They are not helpful to me. Nobody reaches out to me and says, Dianne, I’m concerned about you. Everybody saw me as a perpetrator, not as a victim. I still struggle with that,” Diane told me. “The contrast between them and the banks was stark. And the dissonance that caused me took its toll, because I could not understand how the banks could be so indifferent. So uncaring. Or so cavalier.”

When the day came that someone at a financial institution needed to intervene on behalf of a consumer in distress, Diane’s bank just couldn’t do that.  When a criminal told her to distrust workers at the bank, that was an easy story to sell. Years of neglect had set her up for a confrontational exchange, and that’s what she had.

You can’t mistreat people for years and then suddenly ask them to trust you.   Trust is won over a long stretch of time, through hundreds of interactions large and small. I see companies erode trust every day.  I just looked at my phone while writing this piece and saw an email from Uber with the subject line: WARNING!  It was a marketing pitch. Think about all the communications you receive that include trigger words like “verify” or “transaction,” all focus-grouped to make you click because you *think* it’s an important message about security — when it’s just an ad.  One day, when Uber really needs me to read a communication from them, I’ll probably ignore it. Or worse.

If Diane had felt some positive vibes from her bank, and if someone there had taken the time to really talk with her, she might still have that $600,000. And this scenario plays out over and over again at retailers and financial institutions across the country. For some reason, corporations have adopted the habit of treating their customers like potential criminals. In doing so, they’ve opened the door wide for the real criminals.

This is the message I delivered at a talk I gave recently to Navy Federal Credit Union employees about online scams.  We’ve given lip service for years to the idea that we should enlist consumers to help with cybersecurity. We want them to forward phishing emails they get. We want them to read our happy bulletins explaining the latest scams.  It hasn’t worked.  We need to do much more than that.  We need to make sure that consumers are on our side. We need to make sure consumers trust us. We need their hearts and minds. Criminals are enrolling consumers as accomplices, making the job of hackers so much easier.  To combat this, smart companies will invest in long-term consumer trust, deputizing their shoppers and account holders as agents who can spot scams and — more important — who trust them enough to come to them when something feels wrong.

Back to Hank Molenaar. The real reason that scam worked? Bank of America was going to put him on hold for 40 minutes.  That gave criminals a big window of time to call him back first, impersonating the bank. Poor customer service was the security vulnerability. Imagine if Diane *knew* that she could send an email or place a phone call to a kind company representative who would answer her questions as quickly as the criminals did. The bank would have had a fighting chance, anyway.  Good customer service is good security.

Corporations spend billions of dollars on expensive software and experts designed to thwart sophisticated digital attacks.  That’s fine, but criminals are just sending manipulated consumers into the front door to steal money for them. Some of that cybersecurity money should be spent investing in customer service instead. When your consumers trust a random caller claiming to be from the IRS more than they trust you, cybersecurity is only one of the problems you have.

I know it’s poor form to repeat myself, but this message needs to come through — Javelin recently found that more money was lost to scams (“consumer-assisted crime”) than to credential hacking.  This is a trend with staying power. Ignore it at your peril.

I’ve spent my career wearing two hats: as a cybersecurity reporter, and as a consumer reporter.  Often, editors were confused that I insisted on covering both beats, as on the surface, they can seem quite different. Why should I care about the latest buffer overflow *and* unfair overdraft fees?  Now, you know why. They are two sides of the same coin.  And everyone should care about both.

Investing in the Cybersecurity Infrastructure to Reduce Third-party Remote Access Risks

The purpose of this second-annual research study is to understand how organizations are investing in their cybersecurity infrastructure to minimize third-party remote access risk and what primary factors are considered when making improvements to the cybersecurity infrastructure. In this year’s report, we include the best practices of organizations that are more effective in establishing a strong third-party risk management security posture.

Sponsored by SecureLink, Ponemon Institute surveyed 632 individuals who are involved in their organization’s approach to managing remote third-party data risks and cyber risk management activities. According to the research, 54 percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months and the financial consequences of these attacks during this period averaged $9 million.

The average annual investment in the cybersecurity infrastructure is $50.8 million. According to the research, incentives to invest in the infrastructure include solving system complexity and effectiveness (reducing high false positives) and increasing in-house expertise.

Since last year’s research, no progress has been made in reducing third-party remote access risks: the security of third-party remote access is not improving. The right investment decisions for the cybersecurity infrastructure are therefore becoming increasingly important. Respondents were asked to rate the effectiveness of their response to third-party incidents, detection of third-party risks and mitigation of remote access third-party risks on a scale of 1 = not effective to 10 = highly effective.

Only 40 percent of respondents say mitigating remote access risks is very effective, 53 percent say detecting remote access risks is very effective and 52 percent say responding to these risks and controlling third-party access to their network is highly effective.

The risks of third-party remote access

In the past 12 months, organizations that had a cyberattack (54 percent) spent an average of more than $9 million to deal with the consequences. The largest share of that amount ($2.7 million) went to remediation and technical support activities, including forensic investigations, incident response activities, help desk and customer service operations. This is followed by damage or theft of IT assets and infrastructure ($2.1 million).

Investments in the cybersecurity infrastructure should focus on improving governance and oversight practices and deploying technologies that improve visibility into people and business processes. Investment in oversight is important because of the uncertainty about third parties’ compliance with security and privacy regulations. On average, less than half (48 percent) of respondents say their third parties are aware of their industry’s data breach reporting regulations. Only 47 percent of respondents rate as very high the effectiveness of their third parties in achieving compliance with security and privacy regulations that affect their organization.

Data breaches caused by third parties may be underreported. The percentage of respondents reporting that their organization had a third-party data breach increased from 51 percent to 56 percent. However, organizations may not have an accurate count of data breaches because only 39 percent of respondents are confident that a third party would notify them if a breach originated in its systems.

In the past 12 months, 49 percent of respondents say their organizations experienced a data breach caused directly or indirectly by a third party, an increase from 44 percent in 2021. Of these respondents, 70 percent say it was the result of giving third parties too much privileged access, a slight decrease from 74 percent in 2021.

Organizations are having to deal with an increasing volume of cyberthreats. Fifty-four percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months. Seventy-five percent of respondents say the volume of cyberthreats in the past 12 months significantly increased (25 percent), increased (27 percent) or stayed the same (23 percent). The security incidents most often experienced in the past 12 months were credential theft, ransomware, DDoS and lost or stolen devices.

Managing remote access to the network continues to be overwhelming, but the security of third parties’ remote access is not an IT/IT security priority. Sixty-seven percent of respondents say managing third-party permissions and remote access to their networks is overwhelming and a drain on their internal resources. Consequently, 64 percent of respondents say remote access is becoming their organization’s weakest attack surface. Despite the risks, less than half (48 percent) of respondents say the IT/IT security function makes the security of third parties’ remote access to the network a priority.

Remote access risks are created because only 43 percent of respondents say their organizations can provide third parties with just enough access to perform their designated responsibilities and nothing more. Further, only 36 percent of respondents say their organizations have visibility into the level of access and permissions for both internal and external users.
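“Just enough access to perform their designated responsibilities and nothing more” is, in practice, a default-deny grant model: each third party holds an explicit set of scopes, and every request is checked against that grant. A minimal sketch — the vendor and scope names are hypothetical, for illustration only:

```python
# Explicit, least-privilege grants: a vendor can do only what is listed here.
GRANTS = {
    "payroll-vendor":  {"hr/payroll:read"},
    "backup-operator": {"storage/backups:read", "storage/backups:write"},
}

def is_allowed(vendor, scope):
    """Default-deny: permit access only if the scope was explicitly granted.
    Unknown vendors and unlisted scopes are refused automatically."""
    return scope in GRANTS.get(vendor, set())
```

Because the grant table is explicit, it doubles as the visibility artifact the survey finds missing: auditing “who can touch what” is a lookup, not an investigation.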

The ability to secure remote access requires an inventory of the third parties that have it. Only 49 percent of respondents say their organizations have a comprehensive inventory of all third parties with access to their networks. Among the 51 percent of respondents who say their organizations don’t have such an inventory or are unsure, the reasons given are the lack of centralized control over third-party relationships (60 percent) and the complexity of third-party relationships (48 percent).

Organizations continue to rely upon contracts to manage the third-party risk of those vendors with access to their sensitive information. Only 41 percent of respondents say their organizations evaluate the security and privacy practices of all third parties before allowing them to have access to sensitive and confidential information.

Of these respondents, 56 percent say their organizations obtain signatures on contracts that legally obligate the third party to adhere to security and privacy practices, followed by 50 percent who say written policies and procedures are reviewed. Only 41 percent of respondents say their organizations assess the third party’s security and privacy practices.

A good business reputation is the primary reason not to evaluate the security and privacy practices of third parties. Fifty-nine percent of respondents say their organizations do not evaluate third parties’ privacy and security practices, or are unsure whether they do. The top two reasons are confidence in the third party’s business reputation (60 percent) and the third party being subject to contractual terms (58 percent).

Ongoing monitoring of third parties is not occurring in many organizations; a possible reason is that few have automated the process. Only 45 percent of respondents say their organizations monitor on an ongoing basis the security and privacy practices of third parties with whom they share sensitive or confidential information.

Of these organizations, only 36 percent of respondents say the monitoring process is automated. These organizations spend an average of seven hours per week automatically monitoring third-party access. Organizations that manually monitor access (64 percent of respondents) say they spend an average of eight hours each week doing so. The primary reasons for not monitoring third parties’ access are reliance on the business reputation of the third party (59 percent of respondents), the third party being subject to contractual terms and not having the internal resources to monitor (both 58 percent of respondents).

Poorly written security and privacy policies and procedures are the number one indicator of risk. Only 41 percent of respondents say their third-party management program defines and ranks levels of risk. Sixty-three percent of respondents cite poorly written security and privacy policies and procedures, followed by a history of frequent data breach incidents (59 percent), as the primary indicators of risk. Only 35 percent say they view a third party’s use of a subcontractor with access to their organization’s information as an indicator.

To read the full report, including charts and graphs, visit SecureLink’s website here

‘Data broker’ Oracle misleads billions of consumers, lawsuit alleges, enables privacy end-arounds

Bob Sullivan

At least one Big Tech firm has glided mostly under the radar during the recent techlash — Oracle — but that relative obscurity might be coming to an end. A class-action lawsuit filed against the data giant by some heavy-hitters in the privacy world alleges that Oracle combines some of the worst qualities of Google and Facebook, at a scale even those firms have trouble matching.  Oracle has incredibly intimate information on 5 billion people around the planet — and the lawsuit alleges that the firm trades on that information largely without anyone’s consent.

Oracle combines a variety of data it collects through its own cookies, data it buys from third parties, and data it acquires from real-world retailers, to harmonize billions of data points into single identities that can be targeted with political or commercial messages, the lawsuit says.  This “onboarding” of offline with online data creates uniquely detailed profiles of consumers.

“The regularly conducted business practices of defendant Oracle America amount to a deliberate and purposeful surveillance of the general population,” the lawsuit alleges. “In the course of functioning as a worldwide data broker, Oracle has created a network that tracks in real-time and records indefinitely the personal information of hundreds of millions of people.”

Oracle holds data on 300 million Americans, or about 80% of the population, according to the suit. Those individual consumers can be tracked “seamlessly across devices.” In a video posted by the plaintiffs, Oracle founder Larry Ellison boasts that Oracle data can track consumers into stores, micro-target them right to the location where they stand in an aisle, and connect that to store inventory in that very aisle.

“By collecting this data and marrying it to things like micro-location information, Internet users’ search histories, website visits and product comparisons along with their demographic data, and past purchase data, Oracle will be able to predict purchase intent better than anyone,” Ellison boasts in the video.

The firm also builds extensive profiles of individuals, then places them into marketable categories.

“Oracle then infers from this raw data that, for example, a person isn’t sleeping well, or is experiencing headaches or sore throats, or is looking to lose weight, and thousands of other invasive and highly personalized inferences,” the suit says.

One of the plaintiffs is Johnny Ryan, Senior Fellow of the Irish Council for Civil Liberties, who I interviewed extensively for our recent “Too Big to Sue” podcast with Duke University.

“Oracle has violated the privacy of billions of people across the globe. This is a Fortune 500 company on a dangerous mission to track where every person in the world goes, and what they do. We are taking this action to stop Oracle’s surveillance machine,” Ryan said in a statement about the lawsuit.

One serious claim the lawsuit makes: Oracle goes to great trouble to avoid consumers’ stated preferences *not* to be tracked — the firm combines various cookies to avoid third-party cookie blocking tools, for example.

“Data brokers participating in Oracle’s Data Marketplace freely portray themselves as able to defeat users’ anti-tracking precautions, a pitch at odds with Oracle’s privacy policies and its professed respect for the right of individuals to opt out,” the suit alleges. It cited a study that found “even when users specifically decline consent to be tracked, various adtech participants—including Oracle—ignore those expressions of consent and place trackers on users’ devices. The same study discovered that Oracle places tracking cookies on a user’s device before the user even has a chance to decline consent.”

The lawsuit also claims that Oracle uses categories with clever names as an evasive maneuver to sell data the firm claims not to share.

“Oracle segments people based on intimate information, including a person’s views on their weight, hair type, sleep habits, and type of insurance,” it says. “Other categories appear to be proxies for medical information that Oracle purports not to share, like “Emergency Medicine,” “Imaging & Radiology,” “Nuclear Medicine,” “Respiratory Therapy,” “Aging & Geriatrics” “Pain Relief,” and “Allergy & Immunology.” ”

Oracle’s data marketplace also enabled racially-targeted advertising, even after Facebook took steps to stop it, the suit claims: “Oracle facilitates the creation of proxies for protected classes like race, and allows its clients to exclude on that basis. For example, one Oracle customer website describes how, after Facebook made it more difficult to target ads based on race in the employment and credit areas, Oracle helped it achieve the same result.”

Oracle’s data marketplace also permits activity that many would find a threat to democracy, the suit claims: “During the summer of 2020, Mobilewalla tracked mobile devices to collect data on 17,000 Black Lives Matter protesters including their home addresses and demographics. Mobilewalla also released a report entitled ‘George Floyd Protester Demographics: Insights Across 4 Major US Cities,’ which prompted a letter and investigation by Senator Elizabeth Warren and other Congress members.”

Some categories sold by data partners are incredibly intimate:

“OnAudience, a ‘data provider’ that profiles Internet users by ‘observing user activity based on websites visited, content consumed and history paths to find clear behavior patterns and proper level of intent,’ lets customers target individuals categorized as interested in ‘Brain Tumor,’ ‘AIDS & HIV,’ ‘Substance Abuse’ and ‘Incest & Abuse Support.’ ”

The suit alleges violation of California’s Unfair Competition Law and various other counts.  A good analysis of the plaintiff’s legal strategy can be found at this Twitter thread by Robert Bateman.

It’s good that Oracle’s time under the radar might be ending; the firm should be standing next to Google, Facebook, Apple, Microsoft and the other Big Tech names finally getting the scrutiny they deserve.

Email Data Loss Prevention: The Rising Need for Behavioral Intelligence

The purpose of this study is to learn what practices and technologies are being used to reduce one of the most serious risks to an organization’s sensitive and confidential data. The study finds that email is the top medium for data loss and the primary pathways are employees’ accidental and negligent data exfiltration through email. According to the research, 59 percent of respondents say their organizations experienced data loss and exfiltration that involved a negligent employee or an employee accidentally sending an email to an unintended recipient. On average, organizations represented in this research had 25 of these incidents each month.

To reduce these risks, organizations should consider technologies that leverage machine learning and behavioral capabilities. This approach enables organizations to identify data loss vulnerabilities proactively and stop email data loss and exfiltration before they happen. Thirty-six percent of respondents say their organizations use behavior-based machine learning and artificial intelligence technology. Seventy-seven percent of these respondents report that it is very effective.

Sponsored by Tessian, Ponemon Institute surveyed 614 IT and IT security practitioners who are involved in the use of technologies that address the risks created by employees’ negligent email practices and insider threats. They are also familiar with their organizations’ data loss prevention (DLP) solutions.

Current solutions and efforts to minimize risks caused by employees’ misuse of emails are ineffective. Respondents were asked to rate their organizations’ effectiveness in preventing data loss and exfiltration caused by vulnerabilities in employees’ use of emails. Only 41 percent of respondents say their current data loss prevention solutions are effective or very effective in preventing data loss caused by misdirected emails. As one consequence of not having the right solutions, only 32 percent of respondents say their organizations are effective or very effective in preventing these incidents.

The following recommendations are based on the research findings. 

  • Data is most vulnerable in email. Employee negligence when using email is the primary cause of data loss and exfiltration. According to the research, 65 percent of respondents say data is most vulnerable in emails. In the allocation of resources, organizations should consider technologies that reduce risk in this medium. On average, enterprises have 13 full-time IT and IT security personnel assigned to securing sensitive and confidential data in employees’ emails.
  • Organizations should assess the ability of their current technologies to address employee negligence risks related to email. Forty percent of respondents say email data loss and exfiltration incidents were due to employee negligence or by accident. Additionally, 27 percent of respondents say it was due to a malicious insider. As revealed in this research, many current email data loss technologies are not considered effective in mitigating these risks. Accordingly, organizations should consider investing in technologies that incorporate machine learning and artificial intelligence to understand data loss vulnerabilities through a behavioral intelligence approach.
  • Identify the highest risk functions in the organization. According to respondents, the practices of the marketing and public relations functions are most likely to cause data loss and exfiltration (61 percent of respondents). Accordingly, organizations need to ensure they provide training that is tailored to how these functions handle sensitive and confidential information when emailing. As shown in this research, organizations are most concerned about data loss involving customer and consumer data, which is very often used by marketing and public relations as part of their work. Other high-risk functions are production and manufacturing (58 percent of respondents) and operations (57 percent of respondents). Far less likely to put data at risk are client services and relationship management functions (19 percent of respondents).
  • Despite the risk, many organizations do not have training and awareness programs with a focus on the sensitivity and confidentiality of data transmitted in employees’ email. Sixty-one percent of respondents say their organizations have training and awareness programs for employees and other stakeholders who have access to sensitive or confidential personal information. Only about half (54 percent of the 61 percent of respondents with programs) say the programs address the sensitivity and confidentiality of data in employees’ emails.
  • Sensitive and confidential information is at risk because of the lack of visibility and the inability to detect employee negligence and errors. Fifty-four percent of respondents say the primary barrier to securing sensitive data is the lack of visibility of sensitive data that is transferred from the network to personal email. Fifty-two percent of respondents say the greatest DLP challenges are the inability to detect anomalous employee data handling behaviors and the inability to identify legitimate data loss incidents.
  • On average, it takes 18 months to deploy and find value from the DLP solution. Organizations spend an average of slightly more than a year (12.3 months) to complete deployment of the DLP solution and more than half a year (6.5 months) to realize the value of the solution. The length of time to deploy and realize value can affect the ability for organizations to achieve a more mature approach to preventing email-related compromises by employees.
  • The length of time spent detecting and remediating email compromises puts sensitive and confidential data at risk. According to the research, security and risk management teams spend an average of 72 hours to detect and remediate a data loss and exfiltration incident caused by a malicious insider on email and an average of almost 48 hours to detect and remediate an incident caused by a negligent employee. This places a heavy burden on these teams, which must triage and investigate these incidents and are left unavailable to address other security issues and incidents.

Other takeaways

  • Regulatory non-compliance is the number one consequence of a data loss and exfiltration incident followed by a decline in reputation. These top two consequences can be considered interrelated because non-compliance with regulations (57 percent of respondents) will impact an organization’s reputation (52 percent of respondents). Regulatory non-compliance is considered to have the biggest impact on organizations’ decision to increase the budget for DLP solutions.
  • Organizations consider end-user convenience very important. Seventy-five percent of respondents say end-user convenience in DLP solutions is very important.

To read the full report, please visit Tessian’s website.

Data brokers, in bed with scammers, aimed their algorithms at millions of elderly, vulnerable

Bob Sullivan

Several large data brokers profited for years by selling what are known, cruelly, as “suckers lists” to criminals who used them to fine-tune scams designed to cheat elderly and vulnerable people, a new report on LawfareBlog explains. It’s a stomach-churning analysis which shines a harsh light on an open secret about many industries: Stealing from the elderly is good business, and rarely comes with much risk.

The Lawfare story — written by Justin Sherman and Alistair Simmons — describes the prosecution of three large data brokers — Epsilon, Macromark, and KBM Group — during the past couple of years. Details in the guilty pleas are harrowing. Much more below, but first, a quick step onto the soap box:

Medium-sized crime gangs, or even small-time criminals, are usually behind the scams I’ve written about for several decades — fake sweepstakes, fraudulent grant programs, and so on.  Many are life-altering for the victims. Often, their entire life savings is stolen. For the elderly, there is no time to recover from such a scam.  Some get sick, or even commit suicide after a bout with a scam like this.  The criminals who take their money should be vigorously prosecuted, of course. But for many years, I have seen that a slate of legitimate, multi-national companies facilitate these crimes. Sometimes, they even profit from these crimes.  And sometimes, their very business model depends on this dirty business. Yet, these companies that remain an arm’s length from the victims often suffer little to no consequence. That has to change.  Matt Stoller, a loud advocate for antitrust reforms, has a habit of yelling “Jail Time!” when obvious corporate malfeasance is largely ignored by our judicial system.  It’s a cry more should join. Stealing from the elderly and vulnerable should not be an acceptable business model, or even an acceptable by-product of a business model. People who help criminals steal from the elderly should go to jail.

Onto the details. Readers might remember Epsilon from an incident that’s a decade old, when the then-obscure data hoarding firm suffered what some called the largest data breach in history. Starting before that incident, and lasting through July 2017 — for more than a decade — Epsilon employees helped criminals send mail stuffed with all manner of obvious scams, according to court documents. There were fake sweepstakes, alleged personal astrology invitations, auto-warranty solicitations, dietary-supplement scams, and fraudulent government grant offers. Epsilon employees knew these were scams.  Clients would occasionally get arrested. In one case, a worker lamented that one client, “brought us rev[enue] for 5 years but the law caught up with them and shut them down.”

The solicitations were fraudulent on their face. Sweepstakes mailer recipients were told they were one of a kind; it was obviously impossible they could all be winners. Yet Epsilon continued to work with such firms. It earned money from selling targeted lists of those who were most likely to respond. In fact, it had special names for the characters in this scam: targeted consumers were euphemistically called “opportunity seekers,” before they were victims. Clients who sent the fraudulent mailers were called “opportunistic.” The Justice Department leaves no doubt what these terms really meant — “opportunity seekers frequently fell within the same demographic pool: elderly and vulnerable Americans.”

During this decade, Epsilon helped criminals attack 30 million American consumers by selling these companies data that was used to facilitate “fraudulent mass-mailing schemes,” according to the Department of Justice.

Meanwhile, there was also a devilish feedback loop. Data from the criminal enterprises was used to hone Epsilon’s algorithms, as Sherman and Simmons explain in their piece:

“Two employees ‘collaborated on a model’ in February 2016 ‘for clients engaged in fraud that used data from’ one of Epsilon’s clients. They expanded Epsilon’s databases by getting information back from scammers, and then used that information to determine which people would be most susceptible to future targeting. In other words, those who fell for a scam once would be documented in Epsilon’s database, so it could provide other scammers with lists of people who were identified to be … receptive to that kind of marketing.”

Epsilon agreed to “deferred prosecution” in its case, which means it essentially pled guilty and agreed to pay $150 million in fines and restitution. Separately, two former Epsilon employees have been charged criminally, a welcome development. One year later, their federal cases are slowly making their way through a Colorado federal court. The most recent filing action in the case involved Epsilon trying to quash a subpoena issued by the defendants, who seem to believe corporate documents could exonerate them by showing they were just following orders. Epsilon denies that and says the defendants are on an evidentiary fishing expedition.

Macromark’s prosecution followed similar lines, court documents say.   That firm also spent more than a decade helping criminals steal millions of dollars from thousands of victims who were targeted because they were likely to respond to a fraudulent psychic scam.

“In general, the most effective mailing lists for any particular fraudulent mass mailing were lists made up of victims of other mass-mailing campaigns that used similarly deceptive letters,” the Macromark guilty plea says.

There was no doubt Macromark knew what clients were doing, according to the plea document: “A Macromark executive sent a client a link to a newspaper article with the headline ‘Feds: Mail fraud schemes scam seniors,’ together with materials connecting the client’s own letters to the subject of the newspaper article.” The guilty plea says a Macromark employee actually helped a client change names to evade law enforcement.

“List brokers and service providers such as Macromark who facilitate these schemes are especially dangerous,” said Inspector in Charge Delany DeLeon-Colon of the U.S. Postal Inspection Service’s Criminal Investigations Group, which investigated the crime. “Data firms such as this have extraordinary access to consumers’ personal information, not just their mailing address. The sale and distribution of this data exponentially magnifies the scale and impact of these schemes.” Macromark pleaded guilty to wire fraud, and admitted that the lists it provided to scammers led to losses of $9.5 million from victims. The company was sentenced to three years of probation and a $1 million fine.

Two Macromark executives were also indicted for mail and wire fraud as part of that investigation.

At KBM Group, an employee enjoyed a laugh at the expense of victims, court documents say. One solicitation sent using KBM data said recipients were entitled to $45,000 from an old dormant account, which would be released if a small fee was paid. A general manager at KBM said in an email, “Who responds to this stuff?? Obviously, we have those people.” Later, that same manager fought for a client that another employee had flagged as fraudulent, leading to the sale of 100,000 consumers’ data.

KBM pled guilty and agreed to pay victim compensation penalties totaling $42 million.

Fines are fine. Occasionally, victims of these frauds do get some money back thanks to restitution funds, and that’s fine, too, though often years late and many dollars short. Still, these examples show how brazen companies can be when providing a platform for criminals to connect with vulnerable people. Platform accountability calls for swift justice and jail time.  Each week as host of The Perfect Scam, I listen to people talk about their lives torn apart by crimes like these.

When your actions logically begin a chain of events that leads to ruined lives, well, your life should be ruined, too.

I’ll let Sherman and Simmons have the last word:

“Data brokers are extremely profitable and can overcome imposed fines while continuing their operations. The more money they make, the more money they will have to spend on legal defenses. In the three mentioned cases, the data brokers’ internal compliance measures were ineffective, because these companies already knew that they were partnering with scammers and continued to do so because they saw it as financially advantageous. If controls were in place, they were ignored. And in the one case where controls were enforced, the controls were overridden by data broker employees pushing for profit above all else. This raises a series of critical policy questions about the effectiveness of company controls today and how much company controls should be prioritized as part of a policy solution when there is evidence that they can be overridden.

Comprehensive legislation, at the federal if not state level, to regulate data brokerage and prevent and mitigate its harms is necessary to protect all Americans. This should include a focus on stopping the algorithmic revictimization of people who fall for scams. It should also include a focus on controlling the sale and licensing of data on vulnerable Americans—particularly when data brokers knowingly use that information to help scammers prey on the elderly, cognitively impaired, and otherwise vulnerable.”