If time is money, vulnerability backlog is really expensive

Sponsored by Rezilion, the purpose of this research is to understand the state of organizations’ DevSecOps efforts to manage vulnerabilities throughout the software attack surface. Ponemon Institute surveyed 634 IT and IT security practitioners who are knowledgeable about their organizations’ attack surface and effectiveness in managing vulnerabilities.

All organizations have adopted DevSecOps or are in the process of adopting a DevSecOps approach. According to the research, the lack of the right security tools is the primary barrier to an effective DevSecOps program. This challenge is followed by a lack of workflow integration and the growing vulnerability backlog.

In this research, we have defined DevSecOps (short for development, security and operations) as the automation of the integration of security at every phase of the software development lifecycle from initial design through integration, testing, deployment and software delivery.

At the heart of a successful vulnerability management program is alignment between DevSecOps and the development team, so that products can be delivered with both innovation and security. Only 47 percent of respondents say their organizations’ development team delivers both an enhanced customer experience and secure applications, and 53 percent of respondents are concerned that the lack of visibility and prioritization in DevOps security practices puts product security at risk.

Fifty-five percent of respondents say their development engineers, product security teams and compliance teams are aligned to understand their organizations’ security posture and each other’s area of responsibilities to deliver secure products.

The following are key takeaways from the research.

 The two primary reasons to adopt DevSecOps are to improve collaboration between development, security and operations and to reduce the time to patch vulnerabilities, according to 45 percent of respondents. In addition, 41 percent of respondents say DevSecOps automates the delivery of secure software without slowing the software development lifecycle (SDLC).

 Almost half of respondents say their organizations have a vulnerability backlog. Forty-seven percent of respondents say that in the past 12 months their organizations had applications that were identified as vulnerable but not remediated. On average, 1.1 million individual vulnerabilities were in this backlog in the past 12 months and an average of 46 percent were remediated. However, respondents say their organizations would be satisfied if 29 percent of vulnerabilities in a year were remediated.

“This is a significant loss of time and dollars spent just trying to get through the massive vulnerability backlogs that organizations possess,” said Liran Tancman, CEO of Rezilion, which sponsored the research. “If you have more than 100,000 vulnerabilities in a backlog, and consider the number of minutes that are spent manually detecting, prioritizing, and remediating these vulnerabilities, that represents thousands of hours spent on vulnerability backlog management each year. These numbers make it clear that it is impossible to effectively manage a backlog without the proper tools to automate detection, prioritization, and remediation.”

 The inability to prioritize what needs to be fixed is the primary reason vulnerability backlogs exist, according to 47 percent of respondents. Other leading reasons are not having enough information about the risk that vulnerabilities will be exploited (45 percent of respondents) and the lack of effective tools (43 percent of respondents).

Forty-seven percent of respondents say their organizations have adopted a shift right strategy, which enables continuous feedback from users. Fifty-one percent of respondents say the benefit of a shift right strategy is that it empowers engineers to test more, test on time and test late.

Organizations are slightly more effective in prioritizing their most critical vulnerabilities than patching vulnerabilities. Fifty-two percent of respondents say their organizations’ prioritization of critical vulnerabilities is very effective but only 43 percent of respondents say timely patching is highly effective.

Vulnerability patching is mostly delayed because of the difficulty of tracking whether vulnerabilities are being patched in a timely manner (51 percent of respondents), followed by the inability to take critical applications and systems offline so they can be patched quickly (49 percent of respondents).

Automation significantly shortens the time to remediate vulnerabilities. Fifty-six percent of respondents say their organizations use automation to assist with vulnerability management. Of these respondents, 59 percent say their organizations automate patching, 47 percent say prioritization is automated and 41 percent say reporting is automated. Each week, the IT security team spends most of its time on the remediation of vulnerabilities. Sixty percent of respondents with automation say it shortens the time to remediate vulnerabilities, either significantly (43 percent) or slightly (17 percent).

DevOps is an approach based on lean and agile principles to deliver software quickly, enabling organizations to seize market opportunities. Fifty-one percent of respondents say they have some involvement in their organization’s DevOps activities. Fifty-two percent of these respondents say they are involved in vulnerability management and 49 percent say they are involved in application security.

Certain features are important to creating secure applications or services. Sixty-five percent of respondents say the ability to perform tests as part of the workflow, instead of stopping, testing, fixing and restarting development, is very important, and 61 percent of respondents say automating vulnerability scanning and remediation at every stage of the SDLC is very important.

The inability to quickly detect vulnerabilities and threats is the number one reason vulnerabilities are difficult to remediate in applications. Sixty-one percent of respondents say it is very difficult or difficult to remediate vulnerabilities in applications. The main causes are the inability to quickly detect vulnerabilities and threats (55 percent of respondents) and the inability to quickly patch applications in production (49 percent of respondents), followed by the lack of enabling security tools (43 percent of respondents).

More than half of organizations focus only on those vulnerabilities that pose the most risk. Fifty-three percent of respondents believe it is important to focus on only those vulnerabilities that pose the most risk and not on remediating all vulnerabilities. Forty-nine percent of respondents say their organization remediates all vulnerabilities because it does not know which ones pose the most risk.

Testing applications and keeping an inventory of business-critical applications are steps that have been fully or partially implemented. To manage vulnerabilities, 45 percent of respondents say their organizations test the application for vulnerabilities using automation and 44 percent of respondents say their organizations have created and maintained an inventory of applications and assess their business criticality.

Software Bill of Materials (SBOM) is a list of components in a piece of software. Software vendors often create products by assembling open source and commercial components. The SBOM describes the components in the product. A dynamic SBOM is updated automatically whenever a release or change occurs. Forty-one percent of respondents say their organizations use SBOM. Risk assessment and compliance with regulations are the top two features of these organizations’ SBOMs. While 70 percent of respondents say continuous automatic updates are important or very important, only 47 percent say their SBOM features continuous updates.
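To make that concrete, here is a minimal, illustrative SBOM fragment in the spirit of the CycloneDX JSON format. The component names and versions are invented for the example, and a real SBOM carries far more detail:

```python
import json

# Illustrative only: a minimal SBOM in the spirit of CycloneDX JSON.
# Real SBOMs also carry licenses, hashes, suppliers and dependency relationships.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"type": "library", "name": "openssl", "version": "3.0.7"},
        {"type": "library", "name": "log4j-core", "version": "2.17.1"},
    ],
}

# A "dynamic" SBOM simply regenerates this document on every build or release,
# so the component list always matches what actually ships.
print(json.dumps(sbom, indent=2))
```

Even a fragment this small shows why SBOMs matter for vulnerability response: when the next Log4j-style disclosure lands, an up-to-date component list answers "are we exposed?" in seconds rather than weeks.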

 The growing software attack surface is a high concern. Seventy-one percent of respondents say their organizations are very or highly concerned about risks created by the growing software attack surface. A higher percentage of respondents (77 percent) believe it is very or highly important to reduce these risks.

Despite the concerns, most organizations are not effective in either knowing the attack surface or securing it. Only 43 percent of respondents say their organizations are highly effective in securing the attack surface and only 45 percent of respondents say their organizations are effective in knowing the attack surface.

 Eliminating complexity and eliminating exploitable vulnerabilities are the most important steps to safeguard the attack surface. Sixty percent of respondents say eliminating complexity in the software attack surface will reduce threats, followed by eliminating vulnerabilities that are exploitable (56 percent of respondents) and knowledge of all software components (51 percent of respondents). Only 26 percent of respondents say regular network scans reduce threats.

To read the complete results of the survey, visit the Rezilion website.

When my smartphone was stolen, Instagram (and 2FA) was the worst part

Why am I holding this odd-looking sign in what looks a lot like a mug shot? Because my cell phone was stolen recently.  And the worst part of that experience has been dealing with … Instagram.  As I’ve written before, poor customer service is actually a massive security vulnerability, and I think my story will illustrate that.  But if you don’t care for those details, at least scroll down to watch me and my dog struggle to submit a selfie video so I could attempt to regain access to @RustyDogFriendly. It’s worth the price of admission. (And, sadly, it did not work. Many, many times.)

Many years ago I was scared straight on two-factor authentication when I was working on a documentary podcast about Russian hackers and I received a notification from Facebook that someone in St. Petersburg, Russia, had tried to hack into my Instagram account.  I was already pretty careful with my work and banking accounts, but now I put two-factor on everything I could.

And I didn’t opt for the less-secure SMS text-message-code style two-factor. I went with the stronger token-based model.  I installed Google Authenticator on my phone and used its mathematically-generated codes as my second step when logging into all my various accounts. Even my @bobsulli Instagram, used mainly for my photography hobby, and @rustydogfriendly, where fans of my beloved golden retriever could get their fix of Rusty. (Long-time readers know Rusty has enjoyed his own time in the media spotlight from a story I wrote for the Today show).

That worked well until my cellphone was stolen while traveling last week.  Everyone understands the hassle that usually brings. I was actually fortunate. I have insurance, so after a $230 deductible, I received a replacement phone and all my data is backed up, so I didn’t really lose anything.

Except my sanity as I tried to log back into sites where I had employed Google Authenticator.  You see, there is no way to restore that. When your phone gets stolen, your token math is gone. There’s no way to import the old math into the new phone without access to the old phone; at least none I am aware of. So every site that required an Authenticator code now required an alternative sign-in process.  The good news is: None of them were particularly easy. I wouldn’t want that! After all, what good is two-factor authentication if someone can just say, “I forgot my password” and sign in with a new one?
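For readers curious what that “token math” actually is: apps like Google Authenticator implement the TOTP algorithm from RFC 6238, which mixes a secret seed, stored only on the phone, with the current time. A minimal sketch in Python (the key below is the RFC’s published test key, not any real secret):

```python
import hmac
import struct
import time

def totp(secret, timestamp=None, step=30, digits=6):
    """RFC 6238 TOTP: HMAC-SHA1 of a 30-second time counter, truncated to digits."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)      # 8-byte big-endian counter
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test key; at Unix time 59 the 6-digit code is 287082
print(totp(b"12345678901234567890", timestamp=59))  # → 287082
```

The seed never leaves the device, which is the whole point of the design, and also exactly why a stolen phone takes the codes with it.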

So I went through various alternative means of logging in…many involved using other gadgets or laptops where  I was already logged into these accounts and answering various questions.  Most sites that use tokens offer the chance to download a series of one-time backup codes designed for such an emergency, which I had done in most cases. Of course, many of the codes date back to those frantic moments five years ago when I was preparing for a possible Russian hack, so they weren’t necessarily easy to find.  But, I muddled through.
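The backup-code mechanism itself is simple: a site hands the user a list of random codes once, stores only their hashes, and crosses each one off as it is used. A hypothetical sketch — the function names here are mine, not any site’s real API:

```python
import hashlib
import secrets

def generate_backup_codes(n=10):
    """Return (codes shown to the user once, hash set kept server-side)."""
    codes = [secrets.token_hex(4) for _ in range(n)]
    stored = {hashlib.sha256(c.encode()).hexdigest() for c in codes}
    return codes, stored

def redeem(code, stored):
    """Accept a code at most once; remove its hash so it never works again."""
    h = hashlib.sha256(code.encode()).hexdigest()
    if h in stored:
        stored.discard(h)
        return True
    return False
```

Because each code is single-use and has no timestamp in it, a list downloaded years ago and never used should, in principle, still work.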

Until I got to Instagram.

I’ve written a lot recently about the problems Instagram is having with hackers. Well — in my view — the real problem is Instagram’s customer service failures. It’s easy to find horror stories about Instagram users who’ve had their accounts hijacked — then, those impersonator accounts are used for ongoing crimes, like crypto scams — and the victims are unable to even get the accounts turned off, let alone restored to the rightful owner.

So I shouldn’t have been surprised when attempts to log into my Instagram accounts with a brand new phone — and without my Authenticator — ran into roadblocks worthy of Fort Knox.  Let me be clear: I am glad Instagram makes it hard to log into my accounts from a strange new cell phone. Kudos to them for making this challenging.  But…when challenging becomes impossible, something else becomes clear. Their security implementation is a failure. And as a result, I can no longer recommend that Instagram users employ strong two-factor authentication, because you may very well be signing your account’s death warrant that way.

My @BobSulli account is much older, and I occasionally use it for professional purposes — I was among the first Instagram users, relatively speaking — so I dived into that problem first. I asked for a password reset at the email address on file. That worked. I tried to log in. I couldn’t without an Authenticator code.  I asked for an alternative.  I entered the backup codes I had. That didn’t work. I felt desperate.

I should note that every one of these interactions with Instagram came with a subject line “Hi BobSulli — we’ve made it easy to get back on Instagram.”

So I asked the software one more time — isn’t there any alternative?

I was then asked if pictures of myself were in the account. Of course!  So then I was told to make a “selfie video.”  Great!  Someone — or  something — was going to look at this video, compare it to the 500 other pictures of me, and override whatever system was blocking me from the account. Perhaps (hopefully?) after a phone call or some other final check of who I was.  Following the instructions, I looked right, looked up, looked left….and then submitted it.  I was told it might take three or four days. Bummer, but worth it to have this piece of the puzzle solved!

Within minutes, I had my answer.

“We weren’t able to confirm your identity from the video you submitted. You can submit a new video and we’ll review it again,” a sad email said.

That was fast, I thought. It’s probably a machine.  So I set about running around my home trying to find the same lighting as my profile photo. Even the same suit jacket I wore.  I submitted several selfie videos over the next few hours.  All of them were rejected.

The only other alternative I was offered was ….no alternative.  Just a link to a help center page that, predictably, offered no help at all.  I was at a dead end.

But then I had one more thought.  I had an old iPod touch. Perhaps I had logged into my account from that device and Instagram would recognize it through device fingerprinting or something similar and I’d at least get a different alternative.  Bang!  This time, when I entered my password, failed the Authenticator test, and begged for help, I was presented with a form to fill out. I did so, and received a hopeful — if odd — response in email.

“Thanks for contacting us. Before we can help, we need you to confirm that you own this account. Please reply to this message and attach a photo of yourself holding a hand-written copy of the code below…. 6XXXX…Please make sure that the photo you send includes the above code hand-written on a clean sheet of paper, followed by your full name and username …Clearly shows both the code and your face.”

So, that’s where the mugshot photo above comes in. I sent it in.  To shorten the story a bit, that worked.  Within a few hours, I had access to @BobSulli!  That was the hard part, I thought.  There was just one more thing to do to recover from the awful experience of having my smartphone stolen — recover my dog’s account, @rustydogfriendly.

And that, dear readers, has proven to be my Waterloo.

Because when the time came to submit a selfie video for his account…that didn’t go quite as well, as you can see in the embedded YouTube video below.

I know what you’re thinking: I’ve already tried a selfie video of just me. That didn’t work either. I’ve tried about 10 different variations. Each time, the video is rejected. I actually had a reasonable dialog with the folks who helped me log into @BobSulli over email. I pleaded with them to look at @rustydogfriendly. The accounts are linked in both their bios!  It’s obvious we are connected! I sent a list of pictures with both me and him together! The person(s) on the other end of the keyboard kept telling me they could only help with my @BobSulli account. I begged for an alternative. The “line” went dead.

So I am stuck in a perpetual loop, as the geeks say. I log in, I’m asked for an Authenticator code, I say I don’t have it, I’m asked for a backup code, I try it, it fails, I ask for an alternative, I’m asked to make a selfie video, it fails, and then my only option is to make another selfie video.

The backup codes I have for @rustydogfriendly were downloaded the day I opened the account. Why don’t they work? I don’t know. Perhaps they have expired.  I don’t recall ever using them, but who knows. It was four years ago.

So I am stuck. Rusty is usually a big hit on Halloween, but I’ve now written that holiday off.  Perhaps there is some other route to logging in that I’ve missed, and for that I’m sorry, but I believe I’ve taken every step a reasonable consumer would take in my situation — and a few that only a cybersecurity journalist would take — and I have nothing but a zombie account to show for it.  And a belabored blog post that many of you probably have not finished reading.

But I belabor the point because it’s important: when security isn’t accompanied by customer service, it’s a failure. Poor customer service is, I believe, our greatest security vulnerability. Two-factor authentication is ESSENTIAL.  Many people use text-message-based authentication, and I’ve been telling anyone who will listen that it’s now failed. It’s too easy for criminals to intercept those texts or obtain them in other ways.  So I’ve been urging banks and other institutions to force consumers into Authenticator or other software-based tokens instead. They are much safer.

But I can no longer do this in good conscience. Because if there is no plan for consumers who lose access to their phones, there is no plan.  I can’t tell you how much of the weekend I spent explaining to various websites — “No, I can’t verify myself via text because….I don’t have texts right now.” And when there is no alternative, the implementation is broken.

Thanks to Instagram, I will forever be gun-shy about two-factor authentication now. And the more stories like this that you hear, the more you will be inclined to turn it off, too.  After all, which risk is higher — a criminal hacking your account or a corporation blocking your access to the account?

Note that I *was* able to restore my other accounts, so clearly the problem is fixable. I will continue to use two-factor everywhere I can. And you should too.  But know this: If Facebook were required to answer the phone — virtually or otherwise — these situations would not arise. Poor customer service gives security a bad name, and that puts us all at risk.

Meanwhile, if anyone has any suggestions for getting back into my dog’s account, I’m all ears.

 

The 2022 Data Risk in the Third-Party Ecosystem Study

Organizations are dependent upon their third-party vendors to provide such important services as payroll, software development or data processing. However, without strong security controls in place, vendors, suppliers, contractors or business partners can put organizations at risk of a third-party data breach.  A third-party data breach is an incident in which sensitive data is not stolen directly from an organization, but through a vendor’s systems that are misused to steal sensitive, proprietary or confidential information.

Sponsored by RiskRecon, a Mastercard Company, and conducted by Ponemon Institute, the study surveyed 1,162 IT and IT security professionals in North America and Western Europe. All participants in the research are familiar with their organizations’ approach to managing data risks created through outsourcing. Sixty percent of these respondents say the number of cybersecurity incidents involving third parties is increasing. (Click here for a link to the full study)

We define the third-party ecosystem as the many direct and indirect relationships companies have with third parties and Nth parties. These relationships are important to fulfilling business functions or operations. However, the research underscores the difficulty companies have in detecting, mitigating and minimizing risks associated with third parties and Nth parties that have access to their sensitive or confidential information.

Third-and-Nth party data breaches may be underreported. Respondents were asked to rate how confident their organizations are that a third or Nth party would disclose a data breach involving its sensitive and confidential information.

Only about one-third of respondents say that they have confidence that a primary third party would notify their organizations (34 percent) and even fewer respondents (21 percent) say the Nth party would disclose the breach.

Based on the findings, companies should consider the following actions to reduce the likelihood of a third-party or Nth party data breach.

  1. Create an inventory of all third parties with whom you share information and evaluate their security and privacy practices. Before onboarding new third parties, conduct audits and assessments to evaluate the effectiveness of their security and privacy practices. However, only 36 percent of respondents say that before starting a business relationship that requires the sharing of sensitive or confidential information their company evaluates the security and privacy practices of all vendors. Organizations should have a comprehensive list of third parties who have access to confidential information and how many of these third parties are sharing this data with one or more of their contractors. Identify vendors who no longer meet your organization’s security and privacy standards. Facilitate the offboarding of these third parties without causing business continuity issues.
  2. Conduct frequent reviews of third-party management policies and programs. Only 43 percent of respondents say their organizations’ third-party management policies and programs are frequently reviewed to ensure they address the ever-changing landscape of third-party risk and regulations. Organizations should consider automating third-party risk evaluation and management.
  3. Study the causes and consequences of recent third-party breaches and incorporate the takeaways in your assessment processes. Only 40 percent of respondents say their third parties’ data safeguards, security policies and procedures are sufficient to prevent a data breach and only 39 percent of respondents say these data safeguards, security policies and procedures enable organizations to minimize the consequences of a data breach. In the past year, breaches were caused by such vulnerabilities as unsecured data on the Internet, not configuring cloud storage buckets properly and not assessing and monitoring password managers.
  4. Improve visibility into third or Nth parties with whom you do not have a direct relationship. More than half (53 percent) of respondents say they are relying upon the third party to notify their organization when data is shared with Nth parties. A barrier to visibility is that only 35 percent of respondents say their organizations are monitoring third-party data handling practices with Nth parties. To increase visibility into the security practices of all parties with access to company sensitive information – even subcontractors – notification when data is shared with Nth parties is critical. In addition, organizations should include in their vendor contracts requirements that third parties provide information about possible third-party relationships with whom they will be sharing sensitive information.
  5. Form a third-party risk management committee and establish accountability for the proper handling of the third-party risk management program. Many organizations have strategic shortfalls in third-party risk management governance. Specifically, only 42 percent of respondents say managing outsourced relationship risk is a priority in their organization and only 40 percent of respondents say there are enough resources to manage these relationships. To improve third-party governance practices, organizations should centralize and assign accountability for the correct handling of their company’s third-party risk management program and ensure that appropriate privacy and security language is included in all vendor contracts. Create a cross-functional team to regularly review and update third-party management policies and programs.
  6. Require oversight by the board of directors. Involve senior leadership and boards of directors in third-party risk management programs. This includes regular reports on the effectiveness of these programs based on the assessment, management and monitoring of third-party security practices and policies. Such high-level attention to third-party risk may increase the budget available to address these threats to sensitive and confidential information.

To see the full study, visit the RiskRecon.com website.

 

Poor customer service is our greatest cybersecurity vulnerability

Bob Sullivan

When Bank of America put Hank Molenaar on hold recently, it told the Houston resident there would be a long wait time and he could press 1 to get a call back instead.  But before the bank called, criminals called, impersonating the bank, and stole his money via Zelle.  It was a Perfect Scam.  And the vulnerability that was exploited? It was poor customer service.

There’s a new, disturbing trend I’ve spotted and it’s time to ring the alarm bell. It’s hard work to hack into a bank and steal money. It’s much easier to enlist real consumers as allies to do it for you.  Theft via scam is on the rise, overtaking traditional identity theft / credential hacking, according to a recent report by Javelin Research & Strategy. Criminals are enlisting the help of account holders and other consumers with all manner of creative cover stories and impersonation schemes — the kind of stories I tell at AARP’s The Perfect Scam podcast. Financial institutions and retail outlets have laid the groundwork for this shift through years of neglectful treatment. When it comes time to make a trust choice — as a consumer, do you trust your bank or the person on the phone telling you a bank insider is stealing your cash? — all these years of mistreatment are forcing victims into the arms of criminals.

That’s what Diane Clements told me during a heart-wrenching interview for The Perfect Scam, a podcast I host. Diane and her husband, Tom, are both retired professors.  They worked their whole lives to build a humble $600,000 nest egg that would fund their retirement.  But when Diane’s computer went ballistic on her recently, and a message popped up telling her to “call Microsoft,” she followed the instructions. Soon, an operator on the other end of the line told her that all her bank accounts were hacked. It was an inside job!  And they wanted Diane’s help catching the bad guys. Diane was already struggling — her breast cancer would soon return, requiring aggressive treatment, and that only increased the frantic nature of these communications with “bank” security officials.  During the next three months, after near daily conversations with a set of online criminals, Diane and Tom slowly moved every penny of that $600,000 into accounts controlled by the criminals, all the while thinking they were helping catch a bank insider committing a crime.

I know it can be hard to understand how these crimes occur, but when you hear Diane tell her story, it makes sense (click here to listen).  The thing that really touched me deeply was the stark contrast Diane experienced when talking with the criminals vs. talking to her bankers during the episode.  The criminals sounded kind, empathetic, thoughtful — while workers at her local bank were downright mean. One even accused her of lying about having cancer.

“They (were) really mean. They’re rude. They are not helpful to me. Nobody reaches out to me and says, Dianne, I’m concerned about you. Everybody saw me as a perpetrator, not as a victim. I still struggle with that,” Diane told me. “The contrast between them and the banks was stark. And the dissonance that caused me took its toll, because I could not understand how the banks could be so indifferent. So uncaring. Or so cavalier.”

When the day came that someone at a financial institution needed to intervene on behalf of a consumer in distress, Diane’s bank just couldn’t do that.  When a criminal told her to distrust workers at the bank, that was an easy story to sell. Years of neglect had set her up for a confrontational exchange, and that’s what she had.

You can’t mistreat people for years and then suddenly ask them to trust you.   Trust is won over a long stretch of time, through hundreds of interactions large and small. I see companies erode trust every day.  I just looked at my phone while writing this piece and saw an email from Uber with the subject line: WARNING!  It was a marketing pitch. Think about all the communications you receive that include trigger words like “verify” or “transaction,” all focus-grouped to make you click because you *think* it’s an important message about security — when it’s just an ad.  One day, when Uber really needs me to read a communication from them, I’ll probably ignore it. Or worse.

If Diane had felt some positive vibes from her bank, and if someone there had taken the time to really talk with her, she might still have that $600,000. And this scenario plays out over and over again at retailers and financial institutions across the country. For some reason, corporations have adopted the habit of treating their customers like potential criminals. In doing so, they’ve opened the door wide for the real criminals.

This is the message I delivered at a talk I gave recently to Navy Federal Credit Union employees about online scams.  We’ve given lip service for years to the idea that we should enlist consumers to help with cybersecurity. We want them to forward phishing emails they get. We want them to read our happy bulletins explaining the latest scams.  It hasn’t worked.  We need to do much more than that.  We need to make sure that consumers are on our side. We need to make sure consumers trust us. We need their hearts and minds. Criminals are enrolling consumers as accomplices, making the job of hackers so much easier.  To combat this, smart companies will invest in long-term consumer trust, deputizing their shoppers and account holders as agents who can spot scams and, more important, who trust the company enough to come to it when something feels wrong.

Back to Hank Molenaar. The real reason that scam worked? Bank of America was going to put him on hold for 40 minutes.  That gave criminals a big window of time to call him back first, impersonating the bank. Poor customer service was the security vulnerability. Imagine if Diane *knew* that she could send an email or place a phone call to a kind company representative who would answer her questions as quickly as the criminals did. The bank would have had a fighting chance, anyway.  Good customer service is good security.

Corporations spend billions of dollars on expensive software and experts designed to thwart sophisticated digital attacks. That’s fine, but criminals are simply sending manipulated consumers through the front door to steal money for them. Some of that cybersecurity money should be spent on customer service instead. When your consumers trust a random caller claiming to be from the IRS more than they trust you, cybersecurity is only one of the problems you have.

I know it’s poor form to repeat myself, but this message needs to come through — Javelin recently found that more money was lost to scams (“consumer-assisted crime”) than to credential hacking.  This is a trend with staying power. Ignore it at your peril.

I’ve spent my career wearing two hats: as a cybersecurity reporter, and as a consumer reporter.  Often, editors were confused that I insisted on covering both beats, as on the surface, they can seem quite different. Why should I care about the latest buffer overflow *and* unfair overdraft fees?  Now, you know why. They are two sides of the same coin.  And everyone should care about both.

Investing in the Cybersecurity Infrastructure to Reduce Third-party Remote Access Risks

The purpose of this second-annual research study is to understand how organizations are investing in their cybersecurity infrastructure to minimize third-party remote access risk and what primary factors are considered when making improvements to the cybersecurity infrastructure. In this year’s report, we include the best practices of organizations that are more effective in establishing a strong third-party risk management security posture.

Sponsored by SecureLink, Ponemon Institute surveyed 632 individuals who are involved in their organization’s approach to managing remote third-party data risks and cyber risk management activities. According to the research, 54 percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months and the financial consequences of these attacks during this period averaged $9 million.

The average annual investment in the cybersecurity infrastructure is $50.8 million. According to the research, incentives to invest in the infrastructure include solving system complexity and effectiveness (reducing high false positives) and increasing in-house expertise.

Since last year’s research, no progress has been made in reducing third-party remote access risks. The security of third-party remote access is not improving. Therefore, the correct decisions regarding investment in the cybersecurity infrastructure to reduce these third-party risks are becoming increasingly important. Respondents were asked to rate the effectiveness of their response to third-party incidents, detection of third-party risks and mitigation of remote access third-party risks on a scale of 1 = not effective to 10 = highly effective.

Only 40 percent of respondents say mitigating remote access risks is highly effective, 53 percent say detecting remote access risks is highly effective and 52 percent say responding to these risks and controlling third-party access to their network is highly effective.

The risks of third-party remote access

 In the past 12 months, organizations that had a cyberattack (54 percent of respondents) spent an average of more than $9 million to deal with the consequences. The largest share of the $9 million ($2.7 million) was spent on remediation & technical support activities, including forensic investigations, incident response activities, help desk and customer service operations. This is followed by damage or theft of IT assets and infrastructure ($2.1 million).

 Investments in the cybersecurity infrastructure should focus on improving governance and oversight practices and deploying technologies to improve visibility of people and business processes. Investment in oversight is important because of the uncertainty about third parties’ compliance with security and privacy regulations. On average, less than half (48 percent) of respondents say their third parties are aware of their industry’s data breach reporting regulations. Only 47 percent of respondents rate the effectiveness of their third parties in achieving compliance with security and privacy regulations that affect their organization as very high.

 Data breaches caused by third parties may be underreported. Respondents reporting their organization had a third-party data breach increased from 51 percent to 56 percent. However, organizations may not have an accurate understanding of the number of data breaches because only 39 percent of respondents say they are confident that the third party would notify them if the data breach originated in their organizations.

In the past 12 months, 49 percent of respondents say their organizations experienced a data breach caused by a third party either directly or indirectly, an increase from 44 percent in 2021. Of these respondents, 70 percent say it was the result of giving too much privileged access to third parties, a slight decrease from 74 percent in 2021.

Organizations are having to deal with an increasing volume of cyberthreats. Fifty-four percent of respondents say their organizations experienced one or more cyberattacks in the past 12 months. Seventy-five percent of respondents say the volume of cyberthreats in the past 12 months has significantly increased (25 percent), increased (27 percent) or stayed the same (23 percent). The security incidents most often experienced in the past 12 months were credential theft, ransomware, DDoS and lost or stolen devices.

Managing remote access to the network continues to be overwhelming, but the security of third parties’ remote access is not an IT/IT security priority. Sixty-seven percent of respondents say managing third-party permissions and remote access to their networks is overwhelming and a drain on their internal resources. Consequently, 64 percent of respondents say remote access is becoming their organization’s weakest attack surface. Despite the risks, less than half (48 percent) of respondents say the IT/IT security function makes ensuring the security of third parties’ remote access to its network a priority.

Remote access risks are created because only 43 percent of respondents say their organizations can provide third parties with just enough access to perform their designated responsibilities and nothing more. Further, only 36 percent of respondents say their organizations have visibility into the level of access and permissions for both internal and external users.
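The “just enough access” principle described above can be sketched in code. The following is a minimal, hypothetical illustration — the vendor and resource names are invented, and a real deployment would use an identity provider or privileged-access-management tool rather than an in-memory dictionary — of a scoped, time-boxed grant that denies everything not explicitly allowed:

```python
from datetime import datetime, timedelta, timezone

def grant_third_party_access(vendor, resources, hours=4):
    """Issue a scoped, time-boxed grant instead of a standing account."""
    expires = datetime.now(timezone.utc) + timedelta(hours=hours)
    return {
        "vendor": vendor,
        "allowed_resources": set(resources),  # nothing beyond the task at hand
        "expires_at": expires,
    }

def is_allowed(grant, resource):
    """Deny by default: access requires an unexpired, explicitly scoped grant."""
    return (datetime.now(timezone.utc) < grant["expires_at"]
            and resource in grant["allowed_resources"])

grant = grant_third_party_access("hvac-vendor", {"building/hvac-controller"})
print(is_allowed(grant, "building/hvac-controller"))  # True
print(is_allowed(grant, "finance/erp"))               # False: out of scope
```

The design choice worth noting is that access expires on its own: a forgotten third-party account simply stops working instead of lingering as an attack surface.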

The ability to secure remote access requires an inventory of third parties that have this access. Only 49 percent of respondents say their organizations have a comprehensive inventory of all third parties with access to their network. The 51 percent of respondents who say their organizations don’t have an inventory, or are unsure, attribute it to the lack of centralized control over third-party relationships (60 percent) and the complexity of third-party relationships (48 percent).

Organizations continue to rely upon contracts to manage the third-party risk of those vendors with access to their sensitive information. Only 41 percent of respondents say their organizations evaluate the security and privacy practices of all third parties before allowing them to have access to sensitive and confidential information.

Of these respondents, 56 percent say their organizations acquire signatures on contracts that legally obligate the third party to adhere to security and privacy practices, followed by 50 percent who say written policies and procedures are reviewed. Only 41 percent of respondents say their organizations assess the third party’s security and privacy practices.

A good business reputation is the primary reason not to evaluate the security and privacy practices of third parties. Fifty-nine percent of respondents say their organizations are not evaluating third parties’ privacy and security practices, or they are unsure if they do. The top two reasons are confidence in the third party’s business reputation (60 percent of respondents) and the third party being subject to contractual terms (58 percent of respondents).

Ongoing monitoring of third parties is not occurring in many organizations, and a possible reason is that few organizations have automated the process. Only 45 percent of respondents say their organizations are monitoring on an ongoing basis the security and privacy practices of third parties with whom they share sensitive or confidential information.

Of these organizations, only 36 percent of respondents say the monitoring process of third parties is automated. These organizations spend an average of seven hours per week automatically monitoring third-party access. Those organizations that manually monitor access (64 percent of respondents) say that they spend an average of eight hours each week monitoring access. The primary reasons for not monitoring third parties’ access are reliance on the business reputation of the third party (59 percent of respondents), the third party being subject to contractual terms and not having the internal resources to monitor (both 58 percent of respondents).

 Poorly written security and privacy policies and procedures are the number one indicator of risk. Only 41 percent of respondents say their third-party management program defines and ranks levels of risk. Sixty-three percent of respondents cite poorly written security and privacy policies and procedures, followed by a history of frequent data breach incidents (59 percent of respondents), as the primary indicators of risk. Only 35 percent say they view the third party’s use of a subcontractor that has access to their organizations’ information as an indicator.

To read the full report, including charts and graphs, visit SecureLink’s website.

‘Data broker’ Oracle misleads billions of consumers, lawsuit alleges, enables privacy end-arounds

Bob Sullivan

At least one Big Tech firm has glided mostly under the radar during the recent techlash — Oracle — but that relative obscurity might be coming to an end. A class-action lawsuit filed against the data giant by some heavy-hitters in the privacy world alleges that Oracle combines some of the worst qualities of Google and Facebook, at a scale even those firms have trouble matching.  Oracle has incredibly intimate information on 5 billion people around the planet — and the lawsuit alleges that the firm trades on that information largely without anyone’s consent.

Oracle combines a variety of data it collects through its own cookies, data it buys from third parties, and data it acquires from real-world retailers, to harmonize billions of data points into single identities that can be targeted with political or commercial messages, the lawsuit says.  This “onboarding” of offline with online data creates uniquely detailed profiles of consumers.

“The regularly conducted business practices of defendant Oracle America amount to a deliberate and purposeful surveillance of the general population,” the lawsuit alleges. “In the course of functioning as a worldwide data broker, Oracle has created a network that tracks in real-time and records indefinitely the personal information of hundreds of millions of people.”

Oracle holds data on 300 million Americans, or about 80% of the population, according to the suit. Those individual consumers can be tracked “seamlessly across devices.” In a video posted by the plaintiffs, Oracle founder Larry Ellison boasts that Oracle data can track consumers into stores, micro-target them right to the location where they stand in an aisle, and connect that to store inventory in that very aisle.

“By collecting this data and marrying it to things like micro-location information, Internet users’ search histories, website visits and product comparisons along with their demographic data, and past purchase data, Oracle will be able to predict purchase intent better than anyone,” Ellison boasts in the video.

The firm also builds extensive profiles of individuals, then places them into marketable categories.

“Oracle then infers from this raw data that, for example, a person isn’t sleeping well, or is experiencing headaches or sore throats, or is looking to lose weight, and thousands of other invasive and highly personalized inferences,” the suit says.

One of the plaintiffs is Johnny Ryan, Senior Fellow of the Irish Council for Civil Liberties, who I interviewed extensively for our recent “Too Big to Sue” podcast with Duke University.

“Oracle has violated the privacy of billions of people across the globe. This is a Fortune 500 company on a dangerous mission to track where every person in the world goes, and what they do. We are taking this action to stop Oracle’s surveillance machine,” Ryan said in a statement about the lawsuit.

One serious claim the lawsuit makes: Oracle goes to great trouble to avoid consumers’ stated preferences *not* to be tracked — the firm combines various cookies to avoid third-party cookie blocking tools, for example.

“Data brokers participating in Oracle’s Data Marketplace freely portray themselves as able to defeat users’ anti-tracking precautions, a pitch at odds with Oracle’s privacy policies and its professed respect for the right of individuals to opt out,” the suit alleges. It cited a study that found “even when users specifically decline consent to be tracked, various adtech participants—including Oracle—ignore those expressions of consent and place trackers on users’ devices. The same study discovered that Oracle places tracking cookies on a user’s device before the user even has a chance to decline consent.”

The lawsuit also claims that Oracle uses categories with clever names as an evasive maneuver to sell data the firm claims not to share.

“Oracle segments people based on intimate information, including a person’s views on their weight, hair type, sleep habits, and type of insurance,” it says. “Other categories appear to be proxies for medical information that Oracle purports not to share, like “Emergency Medicine,” “Imaging & Radiology,” “Nuclear Medicine,” “Respiratory Therapy,” “Aging & Geriatrics” “Pain Relief,” and “Allergy & Immunology.” ”

Oracle’s data marketplace also enabled racially-targeted advertising, even after Facebook took steps to stop it, the suit claims: “Oracle facilitates the creation of proxies for protected classes like race, and allows its clients to exclude on that basis. For example, one Oracle customer website describes how, after Facebook made it more difficult to target ads based on race in the employment and credit areas, Oracle helped it achieve the same result.”

Oracle’s data marketplace also permits activity that many would find a threat to democracy, the suit claims: “During the summer of 2020, Mobilewalla tracked mobile devices to collect data on 17,000 Black Lives Matter protesters including their home addresses and demographics. Mobilewalla also released a report entitled ‘George Floyd Protester Demographics: Insights Across 4 Major US Cities,’ which prompted a letter and investigation by Senator Elizabeth Warren and other Congress members.”

Some categories sold by data partners are incredibly intimate:

“OnAudience, a ‘data provider’ that profiles Internet users by ‘observing user activity based on websites visited, content consumed and history paths to find clear behavior patterns and proper level of intent,’ lets customers target individuals categorized as interested in ‘Brain Tumor,’ ‘AIDS & HIV,’ ‘Substance Abuse’ and ‘Incest & Abuse Support.’ ”

The suit alleges violation of California’s Unfair Competition Law and various other counts. A good analysis of the plaintiffs’ legal strategy can be found in a Twitter thread by Robert Bateman.

It’s good that Oracle’s time under the radar might be ending; the firm should be standing next to Google, Facebook, Apple, Microsoft and the other Big Tech names finally getting the scrutiny they deserve.

Email Data Loss Prevention: The Rising Need for Behavioral Intelligence

The purpose of this study is to learn what practices and technologies are being used to reduce one of the most serious risks to an organization’s sensitive and confidential data. The study finds that email is the top medium for data loss and the primary pathways are employees’ accidental and negligent data exfiltration through email. According to the research, 59 percent of respondents say their organizations experienced data loss and exfiltration that involved a negligent employee or an employee accidentally sending an email to an unintended recipient. On average, organizations represented in this research had 25 of these incidents each month.

To reduce these risks, organizations should consider technologies that leverage machine learning and behavioral capabilities. This approach enables organizations to proactively prevent data loss vulnerabilities so organizations can stop email data loss and exfiltration before they happen. Thirty-six percent of respondents say their organizations use behavior-based machine learning and artificial intelligence technology. Seventy-seven percent of these respondents report that it is very effective.
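The behavioral baseline such tools build can be illustrated with a toy sketch. This is not a description of any vendor’s product — it is a minimal, hypothetical example, with invented addresses, of flagging a recipient who falls outside a sender’s historical email pattern (one signal a real system might use to catch a misdirected email before it leaves):

```python
from collections import defaultdict

class MisdirectedEmailDetector:
    """Toy behavioral baseline: flag recipients a sender has never emailed."""

    def __init__(self):
        self.history = defaultdict(set)  # sender -> set of past recipients

    def observe(self, sender, recipients):
        """Record a legitimately sent email in the sender's baseline."""
        self.history[sender].update(recipients)

    def flag(self, sender, recipients):
        """Return recipients outside the sender's historical pattern."""
        known = self.history[sender]
        known_domains = {r.split("@")[1] for r in known}
        return [r for r in recipients
                if r not in known and r.split("@")[1] not in known_domains]

detector = MisdirectedEmailDetector()
detector.observe("alice@corp.com", ["bob@corp.com", "carol@partner.com"])
# "bob@gmail.com" is an unfamiliar address at an unfamiliar domain -> flagged
print(detector.flag("alice@corp.com", ["bob@corp.com", "bob@gmail.com"]))
```

Real systems learn many more signals (send time, content sensitivity, reply history), but the core idea is the same: compare each outgoing email against the sender’s own past behavior rather than a static rule list.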

Sponsored by Tessian, Ponemon Institute surveyed 614 IT and IT security practitioners who are involved in the use of technologies that address the risks created by employees’ negligent email practices and insider threats. They are also familiar with their organizations’ data loss protection (DLP) solutions.

Current solutions and efforts to minimize risks caused by employees’ misuse of email are ineffective. Respondents were asked to rate their organizations’ effectiveness in preventing data loss and exfiltration caused by vulnerabilities in employees’ use of email. Only 41 percent of respondents say their current data loss prevention solutions are effective or very effective in preventing data loss caused by misdirected emails. As one consequence of not having the right solutions, only 32 percent of respondents say their organizations are effective or very effective in preventing these incidents.

The following recommendations are based on the research findings. 

  • Data is most vulnerable in email. Employee negligence when using email is the primary cause of data loss and exfiltration. According to the research, 65 percent of respondents say data is most vulnerable in emails. In the allocation of resources, organizations should consider technologies that reduce risk in this medium. On average, enterprises have 13 full-time IT and IT security personnel assigned to securing sensitive and confidential data in employees’ emails.
  • Organizations should assess the ability of their current technologies to address employee negligence risks related to email. Forty percent of respondents say email data loss and exfiltration incidents were due to employee negligence or by accident. Additionally, 27 percent of respondents say it was due to a malicious insider. As revealed in this research, many current email data loss technologies are not considered effective in mitigating these risks. Accordingly, organizations should consider investing in technologies that incorporate machine learning and artificial intelligence to understand data loss vulnerabilities through a behavioral intelligence approach.
  • Identify the highest risk functions in the organization. According to respondents, the practices of the marketing and public relations functions are most likely to cause data loss and exfiltration (61 percent of respondents). Accordingly, organizations need to ensure they provide training that is tailored to how these functions handle sensitive and confidential information when emailing. As shown in this research, organizations are most concerned about data loss involving customer and consumer data, which is very often used by marketing and public relations as part of their work. Other high-risk functions are production and manufacturing (58 percent of respondents) and operations (57 percent of respondents). Far less likely to put data at risk are client services and relationship management functions (19 percent of respondents).
  • Despite the risk, many organizations do not have training and awareness programs with a focus on the sensitivity and confidentiality of data transmitted in employees’ email. Sixty-one percent of respondents say their organizations have training and awareness programs for employees and other stakeholders who have access to sensitive or confidential personal information. Only about half (54 percent of the 61 percent of respondents with programs) say the programs address the sensitivity and confidentiality of data in employees’ emails.
  • Sensitive and confidential information are at risk because of the lack of visibility and the ability to detect employee negligence and errors. Fifty-four percent of respondents say the primary barrier to securing sensitive data is the lack of visibility of sensitive data that is transferred from the network to personal email. Fifty-two percent of respondents say the greatest DLP challenges are the inability to detect anomalous employee data handling behaviors and the inability to identify legitimate data loss incidents.
  • On average, it takes almost 19 months to deploy and find value from the DLP solution. Organizations spend an average of slightly more than a year (12.3 months) to complete deployment of the DLP solution and more than half a year (6.5 months) to realize the value of the solution. The length of time to deploy and realize value can affect the ability of organizations to achieve a more mature approach to preventing email-related compromises by employees.
  • The length of time spent detecting and remediating email compromises puts sensitive and confidential data at risk. According to the research, security and risk management teams spend an average of 72 hours to detect and remediate a data loss and exfiltration incident caused by a malicious insider on email, and an average of almost 48 hours to detect and remediate an incident caused by a negligent employee. This places a heavy burden on these teams, who must triage and investigate these incidents and are left unavailable to address other security issues and incidents.

Other takeaways

  • Regulatory non-compliance is the number one consequence of a data loss and exfiltration incident followed by a decline in reputation. These top two consequences can be considered interrelated because non-compliance with regulations (57 percent of respondents) will impact an organization’s reputation (52 percent of respondents). Regulatory non-compliance is considered to have the biggest impact on organizations’ decision to increase the budget for DLP solutions.
  • Organizations consider end-user convenience very important. Seventy-five percent of respondents say end-user convenience in DLP solutions is very important.

To read the full report, please visit Tessian’s website.

Data brokers, in bed with scammers, aimed their algorithms at millions of elderly, vulnerable

Bob Sullivan

Several large data brokers profited for years by selling what are known, cruelly, as “suckers lists” to criminals who used them to fine-tune scams designed to cheat elderly and vulnerable people, a new report on LawfareBlog explains. It’s a stomach-churning analysis which shines a harsh light on an open secret about many industries: Stealing from the elderly is good business, and rarely comes with much risk.

The Lawfare story, written by Justin Sherman and Alistair Simmons, describes the prosecution of three large data brokers — Epsilon, Macromark and KBM Group — during the past couple of years. Details in the guilty pleas are harrowing. Much more below, but first, a quick step onto the soap box:

Medium-sized crime gangs, or even small-time criminals, are usually behind the scams I’ve written about for several decades — fake sweepstakes, fraudulent grant programs, and so on.  Many are life-altering for the victims. Often, their entire life savings is stolen. For the elderly, there is no time to recover from such a scam.  Some get sick, or even commit suicide after a bout with a scam like this.  The criminals who take their money should be vigorously prosecuted, of course. But for many years, I have seen that a slate of legitimate, multi-national companies facilitate these crimes. Sometimes, they even profit from these crimes.  And sometimes, their very business model depends on this dirty business. Yet, these companies that remain an arm’s length from the victims often suffer little to no consequence. That has to change.  Matt Stoller, a loud advocate for antitrust reforms, has a habit of yelling “Jail Time!” when obvious corporate malfeasance is largely ignored by our judicial system.  It’s a cry more should join. Stealing from the elderly and vulnerable should not be an acceptable business model, or even an acceptable by-product of a business model. People who help criminals steal from the elderly should go to jail.

Onto the details. Readers might remember Epsilon from an incident that’s a decade old, when the then-obscure data hoarding firm suffered what some called the largest data breach in history. Starting before that incident, and lasting through July 2017 — for more than a decade — Epsilon employees helped criminals send mail stuffed with all manner of obvious scams, according to court documents. There were fake sweepstakes, alleged personal astrology invitations, auto-warranty solicitations, dietary-supplement scams, and fraudulent government grant offers. Epsilon employees knew these were scams.  Clients would occasionally get arrested. In one case, a worker lamented that one client, “brought us rev[enue] for 5 years but the law caught up with them and shut them down.”

The solicitations were fraudulent on their face. Sweepstakes mailer recipients were told they were one of a kind; it was obviously impossible they could all be winners. Yet Epsilon continued to work with such firms. It earned money from selling targeted lists of those who were most likely to respond. In fact, it had special names for the characters in this scam: targeted consumers were euphemistically called “opportunity seekers” before they became victims. Clients who sent the fraudulent mailers were called “opportunistic.” The Justice Department leaves no doubt what these terms really meant — “opportunity seekers frequently fell within the same demographic pool: elderly and vulnerable Americans.”

During this decade, Epsilon helped criminals attack 30 million American consumers by selling these companies data that was used to facilitate “fraudulent mass-mailing schemes,” according to the Department of Justice.

Meanwhile, there was also a devilish feedback loop. Data from the criminal enterprises was used to hone Epsilon’s algorithms, as Sherman and Simmons explain in their piece:

“Two employees ‘collaborated on a model in February 2016 for clients engaged in fraud that used data from’ one of Epsilon’s clients. They expanded Epsilon’s databases by getting information back from scammers, and then used that information to determine which people would be most susceptible to future targeting. In other words, those who fell for a scam once would be documented in Epsilon’s database, so it could provide other scammers with lists of people who were identified to be … receptive to that kind of marketing.”

Epsilon agreed to “deferred prosecution” in its case, which means it essentially pled guilty and agreed to pay $150 million in fines and restitution. Separately, two former Epsilon employees have been charged criminally, a welcome development. One year later, their federal cases are slowly moving their way through a Colorado federal court. The most recent filing in the case involved Epsilon trying to quash a subpoena issued by the defendants, who seem to believe corporate documents could exonerate them by showing they were just following orders. Epsilon denies that and says the defendants are on an evidentiary fishing expedition.

Macromark’s prosecution followed similar lines, court documents say.   That firm also spent more than a decade helping criminals steal millions of dollars from thousands of victims who were targeted because they were likely to respond to a fraudulent psychic scam.

“In general, the most effective mailing lists for any particular fraudulent mass mailing were lists made up of victims of other mass-mailing campaigns that used similarly deceptive letters,” the Macromark guilty plea says.

There was no doubt Macromark knew what clients were doing, according to the plea document: “A Macromark executive sent a client a link to a newspaper article with the headline ‘Feds: Mail fraud schemes scam seniors,’ together with materials connecting the client’s own letters to the subject of the newspaper article.” The guilty plea says a Macromark employee actually helped a client change names to evade law enforcement.

“List brokers and service providers such as Macromark who facilitate these schemes are especially dangerous,” said Inspector in Charge Delany DeLeon-Colon of the U.S. Postal Inspection Service’s Criminal Investigations Group, which investigated the crime. “Data firms such as this have extraordinary access to consumers’ personal information, not just their mailing address. The sale and distribution of this data exponentially magnifies the scale and impact of these schemes.” Macromark pleaded guilty to wire fraud, and admitted that the lists it provided to scammers led to losses of $9.5 million from victims. The company was sentenced to three years of probation and a $1 million fine.

Two Macromark executives were also indicted for mail and wire fraud as part of that investigation.

At KBM Group, an employee enjoyed a laugh at the expense of victims, court documents say. One solicitation sent using KBM data said recipients were entitled to $45,000 from an old dormant account, which would be released if a small fee was paid. A general manager at KBM said in an email, “Who responds to this stuff?? Obviously, we have those people.” Later, that same manager fought for a client that another employee had flagged as fraudulent, leading to the sale of 100,000 consumers’ data.

KBM pled guilty and agreed to pay victim compensation penalties totaling $42 million.

Fines are fine. Occasionally, victims of these frauds do get some money back thanks to restitution funds, and that’s fine, too, though often years late and many dollars short. Still, these examples show how brazen companies can be when providing a platform for criminals to connect with vulnerable people. Platform accountability calls for swift justice and jail time.  Each week as host of The Perfect Scam, I listen to people talk about their lives torn apart by crimes like these.

When your actions logically begin a chain of events that leads to ruined lives, well, your life should be ruined, too.

I’ll let Sherman and Simmons have the last word:

“Data brokers are extremely profitable and can overcome imposed fines while continuing their operations. The more money they make, the more money they will have to spend on legal defenses. In the three mentioned cases, the data brokers’ internal compliance measures were ineffective, because these companies already knew that they were partnering with scammers and continued to do so because they saw it as financially advantageous. If controls were in place, they were ignored. And in the one case where controls were enforced, the controls were overridden by data broker employees pushing for profit above all else. This raises a series of critical policy questions about the effectiveness of company controls today and how much company controls should be prioritized as part of a policy solution when there is evidence that they can be overridden.

“Comprehensive legislation, at the federal if not state level, to regulate data brokerage and prevent and mitigate its harms is necessary to protect all Americans. This should include a focus on stopping the algorithmic revictimization of people who fall for scams. It should also include a focus on controlling the sale and licensing of data on vulnerable Americans—particularly when data brokers knowingly use that information to help scammers prey on the elderly, cognitively impaired, and otherwise vulnerable.”

 

The secrets of high-performing security organizations

As the threat landscape becomes more sinister, the ability to close the IT security gap is more critical than ever. Sponsored by HPE, this study has tracked, since 2018, organizations’ efforts to close the gaps in their IT security infrastructure that allow attackers to penetrate their defenses.

The IT security gap is defined as the inability of an organization’s people, processes and technologies to keep up with a constantly changing threat landscape. It diminishes the ability of organizations to identify, detect, contain and resolve data breaches and other security incidents. The consequences of the gap can include financial losses, diminishment in reputation and the inability to comply with privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Only 30 percent of respondents say their organizations are highly effective in keeping up with a constantly changing threat landscape and closing the IT security gap.

Ponemon Institute surveyed 1,848 IT and IT security practitioners in North America, the United Kingdom, Germany, Australia and Japan. This report presents the global findings and compares them to the 2020 global findings.  All respondents are knowledgeable about their organizations’ IT security and strategy and are involved in decisions related to the investment in technologies.

Few respondents are confident that their organizations can prevent a persistent threat below the platform that would result in data being stolen, modified or viewed by unauthorized entities: only 35 percent of respondents express this confidence. Similar to the last study, 48 percent of respondents believe attacks that have reached inside the network have the potential to do the greatest damage. Forty-two percent of respondents say that attacks inside the IT infrastructure can be detected quickly, before they break out and cause a cybersecurity breach resulting in data being stolen, modified or viewed by unauthorized entities.

Best practices from organizations that are effective in closing the IT security gap

Thirty percent of respondents self-reported that their organizations are highly effective in keeping up with a constantly changing threat landscape. We refer to these organizations as “high performers” and compare their responses to those of the non-high performers, whom we refer to as “other” respondents.

Following are the nine best practices of high-performing organizations.

High performers are more likely to have visibility and control into users’ activities and devices. Only 33 percent of high performers believe their security teams lack visibility and control into all activity of every user and device. In contrast, 80 percent of those in the other category say their teams lack visibility and control. High performers are also more likely to get value from their security investments (59 percent vs. 42 percent of respondents). However, both groups agree that the IT infrastructure has gaps that allow attackers to penetrate its defenses (60 percent of high performers and 61 percent of respondents in the other category).

High performers are more likely to agree that attacks that have reached inside the network have the potential to do the greatest damage. Fifty-six percent of high performers recognize the potential damage from attacks that have reached inside the network vs. 45 percent of respondents in the other category. Forty-seven percent of high performers are confident that their organizations have not experienced a persistent threat below the platform software that has resulted in data stolen, modified or viewed by unauthorized entities vs. 30 percent in the other category.

High-performing organizations are more likely to implement a Zero Trust Model. Sixty-four percent of high-performing organizations have a Zero Trust Model because government policies required it (25 percent), have a Zero Trust Model for other reasons (24 percent of respondents) or selected elements from the Zero Trust framework to improve security (15 percent). Thirty-six percent of organizations in the other category are not interested in a Zero Trust approach (25 percent of respondents) or have chosen not to implement one (11 percent of respondents).

High performers are more likely to say that as compute and storage move from the data center to the edge, a combination of traditional security solutions and secure infrastructure is required (61 percent). The other respondents are more likely to say a new type of security will be required (59 percent).

IoT security is more of a concern for high performers. Eighty-five percent of high performers say identifying and authenticating IoT devices accessing the network is critical to their organization’s security strategy. Only slightly more than half (55 percent) of other respondents agree. In addition, high performers are more likely to say legacy IoT technologies are difficult to secure (80 percent vs. 69 percent of respondents in the other category). Forty percent of high-performer respondents say their IoT devices are appropriately secured with a proper security strategy in place vs. 15 percent of respondents in the other sample.

High-performing organizations say security technologies are very important for their digital transformation strategy. Seventy-seven percent of high-performing organizations say it is important (35 percent of respondents) or highly important (42 percent of respondents) to have security technologies to support digital transformation. In contrast, 53 percent of the other respondents say it is important or highly important. 

High performers take a different approach to server security and backup and recovery. Eighty-eight percent of high performer respondents say backup and recovery is a key component of their security strategy and 68 percent of high performers say their organizations make server decisions based on the security inherent within the platform.

High-performing organizations are more aware of the benefits of automation. The most important benefits are the ability to find attacks before they do damage or gain persistence (78 percent of high performers) and a reduction in the number of false positives that analysts must investigate (74 percent of high performers). They also say automation is critical when implementing an effective Zero Trust Security Model (71 percent of respondents).

High-performing organizations are more likely to see the important connection between privacy and security. Ninety-four percent of respondents in high-performing organizations say it is not possible to have privacy without a strong security posture. Eighty-seven percent of high performers believe a strong cybersecurity posture reduces the privacy risk to employees, business partners and customers. High performers are less likely to believe human error is a risk to privacy.

To read the rest of this report, download it from HPE.com.

A million appeals for justice, and 14 cases overturned — Facebook Oversight Board off to a slow start

Bob Sullivan

A million appeals for justice, and 14 reversals.  That’s the scorecard from the Facebook Oversight Board’s first annual report, released this week. The creative project has plenty going for it, and I think some future oversight board can benefit greatly from the experience of this experiment, launched by Facebook parent Meta in 2020. Still, it’s hard to see how this effort is making a big impact on the problems dogging Facebook and Instagram right now.

A few months ago, I interviewed Duke University law student Alexys Ogorek about her ongoing research into the Oversight Board for our podcast, “Defending Democracy from Big Tech.”  Her conclusion: There are plenty of interesting ideas in the organization, but in practice, it’s not accomplishing much.  Only a tiny fraction of cases are considered, she found, and decisions take many months. Not very practical for people who feel like their innocent comment about a political candidate was wrongly removed a month before an election.  You can hear our discussion of this on Apple Podcasts, or by clicking play below.  The Oversight Board’s annual report confirmed most of Ogorek’s research, but there are plenty of interesting nuggets in it. I’ve cobbled them together below.

Facebook removes user posts all the time — perhaps it’s happened to you — with little or no explanation.  After years of public frustration with this practice, the firm launched an innovative project called the Facebook Oversight Board. It’s billed as an independent, outside entity that can make binding decisions — mainly, tasked with telling Facebook to restore posts it has removed incorrectly.  Most of the time, these takedown decisions are made by automated tools designed to detect hate speech, harassment, violence, or nudity.  In a typical scenario, a user posts a comment that contains language that is judged to include racial slurs, or language that encourages violence, or adult content, or medical misinformation, and the post is removed. Users who disagree can file an appeal, which might be judged by a person at Facebook.  If that appeal fails, users now have the option to appeal to this outside Oversight Board.

This is a good idea. We should all be uncomfortable that a large corporation like Facebook gets to make decisions about what stays and what goes in the digital public square. Yes, the First Amendment doesn’t apply to Facebook in most of these cases, but because Facebook is such a powerful entity, when Meta acts as judge and jury it offends our notions of free speech. So, the experiment is worthwhile and, like Ogorek, I’ve tried to look at it with an open mind.

One big problem revealed in the report is the tiny, tiny fraction of cases the board can take up, combined with the 83 days it took to decide cases.  About 1.1 million people appealed to the board from  October 2020 to December 2021, and only 20 cases were completed. Of them, the board overturned Facebook’s choice 14 times. To be fair, the board says it tried to choose cases that had wider impact, and could set precedent.  Still, the numbers show the board process, to put it politely,  doesn’t scale.

“I am struggling with this due to a cognitive disconnect. They had 1.1 million requests but only examined 20 cases. In those 20 cases they found that Meta was wrong 70% of the time. So, is it likely that over 700,000 mistakes by Meta have gone unexamined?” said Duke professor David Hoffman. “The small number of decisions when compared to the demand indicates to me that the (board) is at best a sampling mechanism to see how Meta is doing, and based on this sample it appears that Meta’s efforts at enforcing their own policies are a dismal failure. It all begs the question, what additional structure is necessary so that all 1.1 million claims can be analyzed and resolved.”

Reading through the cases Facebook did pick, one can gain sympathy for the complexity of the task at hand. I’ve pasted a chart above to show a sample of cases that rose to the top of the heap. But here’s one example of competing interests that require nuanced decisions: in one case, a video of political protestors in Colombia included homophobic slurs in some chants. Facebook initially removed the video; the board restored it because it was newsworthy. In another case, an image involving a woman’s breast was removed for violating nudity rules, but the image was connected to health care advocacy. It was also restored.

Other items in the report I found interesting: the board openly criticized Facebook’s lack of transparency in many situations.  It urges the firm to explain initial takedown decisions, and notes that moderators “are not required to record their reasoning for individual content decisions.”

There are other critical comments:

  • “It is concerning that in just under 4 out of 10 shortlisted cases Meta found its decision to have been incorrect. This high error rate raises wider questions both about the accuracy of Meta’s content moderation and the appeals process Meta applies before cases reach the board.”
  • “The board continues to have significant concerns, including around Meta’s transparency and provision of information related to certain cases and policy recommendations.”
  • “We have raised concerns that some of Meta’s content rules are too vague, too broad, or unclear, prompting recommendations to clarify rules or make secretive internal guidance on interpretation of those rules public.”
  • “We made one recommendation to Meta more times than any other, repeating it in six decisions: when you remove people’s content, tell them which specific rule they broke.” Facebook has partly addressed this suggestion.

The board also briefly took up the issues raised by Facebook whistleblower Frances Haugen. Among her revelations, she exposed a practice by the company to “whitelist” certain celebrities, making them exempt from most content moderation rules.  The board mentions this issue, and its demands for more information from Facebook about it, but only in passing. Combine this issue with other references to secret or unknown internal moderation policies that Facebook maintains, and it’s easy to see how the Oversight Board has a very difficult job to do. One wonders if its work might end one day with members resigning in frustration. Until then, it’s still worth learning whatever lessons this experiment might teach.  There are plenty of good ideas being tested.