Monthly Archives: July 2022

The secrets of high-performing security organizations

As the threat landscape becomes more sinister, the ability to close the IT security gap is more critical than ever. Since 2018, this study, sponsored by HPE, has tracked organizations’ efforts to close gaps in their IT security infrastructure that allow attackers to penetrate their defenses.

The IT security gap is defined as the inability of an organization’s people, processes and technologies to keep up with a constantly changing threat landscape. It diminishes the ability of organizations to identify, detect, contain and resolve data breaches and other security incidents. The consequences of the gap can include financial losses, reputational damage and the inability to comply with privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). Only 30 percent of respondents say their organizations are highly effective in keeping up with a constantly changing threat landscape and closing the IT security gap.

Ponemon Institute surveyed 1,848 IT and IT security practitioners in North America, the United Kingdom, Germany, Australia and Japan. This report presents the global findings and compares them to the 2020 global findings.  All respondents are knowledgeable about their organizations’ IT security and strategy and are involved in decisions related to the investment in technologies.

Few respondents are confident that their organizations can prevent a persistent threat below the platform that would result in data being stolen, modified or viewed by unauthorized entities: only 35 percent of respondents express such confidence. Similar to the last study, 48 percent of respondents believe attacks that have reached inside the network have the potential to do the greatest damage. Forty-two percent of respondents say that attacks inside the IT infrastructure can be detected quickly before they break out and cause a cybersecurity breach resulting in data stolen, modified or viewed by unauthorized entities.

Best practices from organizations that are effective in closing the IT security gap

Thirty percent of respondents self-reported that their organizations are highly effective in keeping up with a constantly changing threat landscape. We refer to these organizations as “high performers” and compare their responses to those of the remaining organizations, referred to as “other” respondents.

Following are the nine best practices of high-performing organizations.

High performers are more likely to have visibility into and control over users’ activities and devices. Only 33 percent of high performers believe their security teams lack visibility into and control over all activity of every user and device. In contrast, 80 percent of those in the other category say their teams lack visibility and control. High performers are also more likely to get value from their security investments (59 percent vs. 42 percent of respondents in the other category). However, both groups agree that the IT infrastructure has gaps that allow attackers to penetrate its defenses (60 percent of high performers and 61 percent of respondents in the other category).

High performers are more likely to agree that attacks that have reached inside the network have the potential to do the greatest damage. Fifty-six percent of high performers recognize the potential damage from attacks that have reached inside the network vs. 45 percent of respondents in the other category. Forty-seven percent of high performers are confident that their organizations have not experienced a persistent threat below the platform software that has resulted in data stolen, modified or viewed by unauthorized entities vs. 30 percent in the other category.

High-performing organizations are more likely to implement a Zero Trust Model. Sixty-four percent of high-performing organizations have a Zero Trust Model because government policies required it (25 percent), have one for other reasons (24 percent) or have selected elements from the Zero Trust framework to improve security (15 percent). Thirty-six percent of organizations in the other category are not interested in a Zero Trust approach (25 percent) or have chosen not to implement one (11 percent).

High performers say that as compute and storage move from the data center to the edge, a combination of traditional security solutions and secure infrastructure is required (61 percent). The other respondents are more likely to say a new type of security will be required (59 percent).

IoT security is more of a concern for high performers. Eighty-five percent of high performers say identifying and authenticating IoT devices accessing the network is critical to their organization’s security strategy. Only slightly more than half (55 percent) of other respondents agree. In addition, high performers are more likely to say legacy IoT technologies are difficult to secure (80 percent vs. 69 percent of respondents in the other category). Forty percent of high-performer respondents say their IoT devices are appropriately secured with a proper security strategy in place vs. 15 percent of respondents in the other sample.

High-performing organizations say security technologies are very important for their digital transformation strategy. Seventy-seven percent of high-performing organizations say it is important (35 percent of respondents) or highly important (42 percent of respondents) to have security technologies to support digital transformation. In contrast, 53 percent of the other respondents say it is important or highly important. 

High performers take a different approach to server security and backup and recovery. Eighty-eight percent of high performer respondents say backup and recovery is a key component of their security strategy and 68 percent of high performers say their organizations make server decisions based on the security inherent within the platform.

High-performing organizations are more aware of the benefits of automation. The most important benefits are the ability to find attacks before they do damage or gain persistence (78 percent of high performers) and reduction in the number of false positives that analysts must investigate (74 percent of high performers). They also say automation is critical when implementing an effective Zero Trust Security Model (71 percent of high performers).

High-performing organizations are more likely to see the important connection between privacy and security. Ninety-four percent of respondents in high-performing organizations say it is not possible to have privacy without a strong security posture. Eighty-seven percent of high performers believe a strong cybersecurity posture reduces the privacy risk to employees, business partners and customers. High performers are less likely to believe human error is a risk to privacy.

To read the rest of this report, download it from

A million appeals for justice, and 14 cases overturned — Facebook Oversight Board off to a slow start

Bob Sullivan

A million appeals for justice, and 14 reversals.  That’s the scorecard from the Facebook Oversight Board’s first annual report, released this week. The creative project has plenty going for it, and I think some future oversight board can benefit greatly from the experience of this experiment, launched by Facebook parent Meta in 2020. Still, it’s hard to see how this effort is making a big impact on the problems dogging Facebook and Instagram right now.

A few months ago, I interviewed Duke University law student Alexys Ogorek about her ongoing research into the Oversight Board for our podcast, “Defending Democracy from Big Tech.”  Her conclusion: There are plenty of interesting ideas in the organization, but in practice, it’s not accomplishing much.  Only a tiny fraction of cases are considered, she found, and decisions take many months. Not very practical for people who feel like their innocent comment about a political candidate was wrongly removed a month before an election.  You can hear our discussion of this on Apple Podcasts, or by clicking play below.  The Oversight Board’s annual report confirmed most of Ogorek’s research, but there are plenty of interesting nuggets in it. I’ve cobbled them together below.

Facebook removes user posts all the time — perhaps it’s happened to you — with little or no explanation.  After years of public frustration with this practice, the firm launched an innovative project called the Facebook Oversight Board. It’s billed as an independent, outside entity that can make binding decisions — mainly, tasked with telling Facebook to restore posts it has removed incorrectly.  Most of the time, these takedown decisions are made by automated tools designed to detect hate speech, harassment, violence, or nudity.  In a typical scenario, a user posts a comment that contains language that is judged to include racial slurs, or language that encourages violence, or adult content, or medical misinformation, and the post is removed. Users who disagree can file an appeal, which might be judged by a person at Facebook.  If that appeal fails, users now have the option to appeal to this outside Oversight Board.

This is a good idea. We should all be uncomfortable that a large corporation like Facebook gets to make decisions about what stays and what goes in the digital public square. Yes, the First Amendment doesn’t apply to Facebook in most of these cases, but because it’s such a powerful entity, when Meta acts as judge and jury it offends our notions of free speech. So the experiment is worthwhile, and like Ogorek, I’ve tried to look at it with an open mind.

One big problem revealed in the report is the tiny, tiny fraction of cases the board can take up, combined with the 83 days it took to decide each case. About 1.1 million people appealed to the board from October 2020 to December 2021, and only 20 cases were completed. Of those, the board overturned Facebook’s choice 14 times. To be fair, the board says it tried to choose cases that had wider impact and could set precedent. Still, the numbers show the board process, to put it politely, doesn’t scale.

“I am struggling with this due to a cognitive disconnect. They had 1.1 million requests but only examined 20 cases. In those 20 cases they found that Meta was wrong 70% of the time. So, is it likely that over 700,000 mistakes by Meta have gone unexamined,” said Duke professor David Hoffman.  “The small number of decisions when compared to the demand indicates to me that the (board) is at best a sampling mechanism to see how Meta is doing, and based on this sample it appears that Meta’s efforts at enforcing their own policies are a dismal failure. It all begs the question, what additional structure is necessary so that all 1.1 million claims can be analyzed and resolved.”

Reading through the cases Facebook did pick, one can gain sympathy for the complexity of the task at hand. I’ve pasted a chart above to show a sample of cases that rose to the top of the heap. But here’s one example of competing interests that require nuanced decisions: in one case, a video of political protestors in Colombia included homophobic slurs in some chants. Facebook initially removed the video; the board restored it because it was newsworthy. In another case, an image involving a woman’s breast was removed for violating nudity rules, but the image was connected to health care advocacy. It was also restored.

Other items in the report I found interesting: the board openly criticized Facebook’s lack of transparency in many situations.  It urges the firm to explain initial takedown decisions, and notes that moderators “are not required to record their reasoning for individual content decisions.”

There are other critical comments:

  • “It is concerning that in just under 4 out of 10 shortlisted cases Meta found its decision to have been incorrect. This high error rate raises wider questions both about the accuracy of Meta’s content moderation and the appeals process Meta applies before cases reach the board.”
  • “The board continues to have significant concerns, including around Meta’s transparency and provision of information related to certain cases and policy recommendations.”
  • “We have raised concerns that some of Meta’s content rules are too vague, too broad, or unclear, prompting recommendations to clarify rules or make secretive internal guidance on interpretation of those rules public.”
  • “We made one recommendation to Meta more times than any other, repeating it in six decisions: when you remove people’s content, tell them which specific rule they broke.” Facebook has partly addressed this suggestion.

The board also briefly took up the issues raised by Facebook whistleblower Frances Haugen. Among her revelations, she exposed a practice by the company to “whitelist” certain celebrities, making them exempt from most content moderation rules.  The board mentions this issue, and its demands for more information from Facebook about it, but only in passing. Combine this issue with other references to secret or unknown internal moderation policies that Facebook maintains, and it’s easy to see how the Oversight Board has a very difficult job to do. One wonders if its work might end one day with members resigning in frustration. Until then, it’s still worth learning whatever lessons this experiment might teach.  There are plenty of good ideas being tested.