Monthly Archives: December 2020

Rethinking Firewalls: Security and Agility for the Modern Enterprise

The purpose of the research sponsored by Guardicore is to learn how enterprises perceive their legacy firewalls within their security ecosystems on premises and in the cloud. Ponemon Institute surveyed 603 IT and IT security practitioners in the United States who are decision makers or influencers of technology decisions to protect enterprises’ critical applications and data. Respondents are also involved, at various levels, in purchasing or using firewall technologies.

Legacy firewalls are ineffective in securing applications and data in the data center. Respondents were asked how effective their legacy firewalls are on a scale from 1 = not effective to 10 = very effective. Figure 1 shows the percentage of respondents who selected 7 or higher. According to the figure, only 33 percent of respondents say their organizations are very or highly effective in securing applications and data in the data center. Legacy firewalls are also largely ineffective at preventing a ransomware attack: only 36 percent of respondents say their organizations are highly effective in preventing such an attack.

The findings of the report show the number one concern of firewall buyers is whether they can actually get next-gen firewalls to work in their environments. As organizations move into the cloud, legacy firewalls do not have the scalability, flexibility or reliability to secure these environments, driving up costs while failing to reduce the attack surface. As a result, organizations are reaching the conclusion that firewalls are simply not worth the time and effort and they’re actually negatively impacting digital transformation initiatives. This is driving a move toward modern security solutions like micro-segmentation, that can more effectively enforce security at the edge.

Following are research findings that reveal why legacy firewalls are ineffective.

Legacy firewalls are ineffective in preventing cyberattacks against applications. Only 37 percent of respondents say their organizations’ legacy firewalls’ ability to prevent cyberattacks against critical business and cloud-based applications is high or very high.

Organizations are vulnerable to a data breach. Only 39 percent of respondents say their organizations are confident that they can contain a breach of their data center perimeter.

Legacy firewalls do not protect against ransomware attacks. Only 36 percent of respondents say their legacy firewalls are highly effective at preventing a ransomware attack.

Legacy firewalls are failing to enable Zero Trust across the enterprise. Only 37 percent of respondents rate their organizations’ legacy firewalls at enabling Zero Trust across the enterprise as very or highly effective.

Legacy firewalls are ineffective in securing applications and data in the cloud. Sixty-four percent of respondents say cloud security is essential (34 percent) or very important (30 percent). However, only 39 percent of respondents say their legacy firewalls are very or highly effective in securing applications and data in the cloud.

Legacy firewalls kill flexibility and speed. Organizations are at risk because of the lack of flexibility and speed in making changes to legacy firewalls. On average, updating legacy firewall rules to accommodate an update or a new application takes three weeks to more than a month. Only 37 percent of respondents say their organizations are very flexible in making changes to their networks or applications, and only 24 percent of respondents say their organizations have a high ability to quickly secure new applications or change security configurations for existing applications.

Legacy firewalls limit access control and are costly to implement. Sixty-two percent of respondents say access control policies are not granular enough and almost half (48 percent of respondents) say legacy firewalls take too long to implement.

The majority of organizations in this study are ready to get rid of their legacy firewalls because of their ineffectiveness. Fifty-three percent of respondents say their organizations are ready to purchase an alternative or complementary solution. The two top reasons are the desire to have a single security solution for on-premises and cloud data center security (44 percent of respondents) and to improve their ability to reduce lateral movement and secure access to critical data (31 percent of respondents).

Firewall labor and other costs are too high. Fifty-one percent of respondents say their organizations are considering a reduction in their firewall footprint. The top reasons: labor and other costs are too high (60 percent of respondents), and current firewalls do not provide adequate security for internal data center east-west traffic (52 percent of respondents).

“The findings of the report reflect what many CISOs and security professionals already know – digital transformation has rendered the legacy firewall obsolete,” said Pavel Gurvich, co-founder and CEO, Guardicore. “As organizations adopt cloud, IoT, and DevOps to become more agile, antiquated network security solutions are not only ineffective at stopping attacks on these properties, but actually hinder the desired flexibility and speed they are hoping to attain.”

To read a full copy of the report, please visit Guardicore’s website.


Facebook needs a corrections policy, viral circuit breakers, and much more


Bob Sullivan

“After hearing that Facebook is saying that posting the Lord’s Prayer goes against their policies, I’m asking all Christians please post it. Our Father, who art in Heaven…”

Someone posted this obvious falsehood on Facebook recently, and it ended up on my wall. Soon after, a commenter in the responses broke the news to the writer that this was fake. The original writer then did what many social media users do: responded with a simple, “Well, someone else said it and I was just repeating it.” No apology. No obvious shame. And most important, no correction.

I don’t know how many people saw the original post, but I am certain far fewer people saw the response and link to Snopes showing it was a common hoax.

If there’s one thing that people rightly hate about journalists, it’s our tendency to put mistakes on the front page and corrections on the back page. That’s unfair, of course. If you make a mistake, you should try to make sure just as many people see the correction as the mistake. Journalists might be bad at this, but Facebook is awful at it. This is one reason social media is such a dastardly tool for sharing misinformation.

Fixing this is one of the novel recommendations in a great report published last week by the Forum on Information and Democracy, an international group formed to make recommendations about the future of social media. The report suggests that, when an item like this Our Father assertion spreads on social media and is determined to be misinformation by an independent group of fact checkers, a correction should be shown to each and every user exposed to it. So, not just a lame “I dunno, a friend said it” that’s buried in later comments. Rather, it would require an attempt to undo the actual spread of incorrect information.
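The mechanics of that recommendation are straightforward: track who was exposed to a post, and when fact-checkers flag it, push the correction to every one of those users rather than burying it in a comment thread. Here is a minimal sketch of that bookkeeping; the class and method names are my own illustration, not anything from the report or from any platform's actual code.

```python
from collections import defaultdict

class CorrectionPropagator:
    """Track which users saw each post; when fact-checkers flag a post,
    queue a correction for every exposed user. Names are illustrative."""

    def __init__(self):
        self.exposures = defaultdict(set)            # post_id -> user ids who saw it
        self.pending_corrections = defaultdict(set)  # user_id -> flagged (post, url) pairs

    def record_view(self, post_id, user_id):
        self.exposures[post_id].add(user_id)

    def flag_as_misinformation(self, post_id, correction_url):
        # Push the correction to everyone who saw the original post,
        # not just those who happen to scroll back to the comments.
        for user_id in self.exposures[post_id]:
            self.pending_corrections[user_id].add((post_id, correction_url))

prop = CorrectionPropagator()
for user in ("alice", "bob", "carol"):
    prop.record_view("post-123", user)
prop.flag_as_misinformation("post-123", "https://snopes.example/hoax")
print(len(prop.pending_corrections))  # 3: every exposed user gets the correction
```

The point of the design is symmetry: the correction travels over the same exposure graph as the original falsehood, so its reach matches the mistake's reach.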

Anyone concerned about the future of democracy and discourse should take a look at this report. I’ve summarized a few of the highlights below.

Circuit breakers

The report calls for creation of friction to prevent misinformation from spreading like wildfire. My favorite part of this section calls for creation of “circuit breakers” that would slow down viral content after it reaches a certain threshold, giving fact-checkers a chance to examine it. The concept is borrowed from Wall Street. When trading in a certain stock falls dramatically out of pattern — say there’s a sudden spike in sell orders — circuit breakers kick in to give traders a moment to breathe and digest whatever news might exist. Sometimes, that short break re-introduces rationality to the market. Note: This wouldn’t infringe on anyone’s ability to say what they want; it would merely slow down the automated amplification of this speech.
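To make the idea concrete, a viral circuit breaker could work like this sketch: count shares, trip once a threshold is crossed, and pause only the recommendation-driven amplification until a review clears the post or a review window elapses. The threshold, window, and class names here are assumptions for illustration, not figures from the report.

```python
from dataclasses import dataclass
import time

@dataclass
class ViralCircuitBreaker:
    """Pause algorithmic amplification once a post's share count
    crosses a threshold, until a fact-check review clears it."""
    share_threshold: int = 10_000   # shares before the breaker trips (illustrative)
    review_window_s: float = 3600.0 # how long amplification stays paused
    shares: int = 0
    tripped_at: float = 0.0         # 0.0 means the breaker has not tripped
    cleared: bool = False           # set True when fact-checkers clear the post

    def record_share(self) -> None:
        self.shares += 1
        if self.shares >= self.share_threshold and not self.tripped_at:
            self.tripped_at = time.monotonic()  # trip: flag for review

    def may_amplify(self) -> bool:
        # Organic sharing is never blocked; only recommendation-driven
        # amplification pauses while the breaker is tripped.
        if not self.tripped_at or self.cleared:
            return True
        return time.monotonic() - self.tripped_at > self.review_window_s

breaker = ViralCircuitBreaker(share_threshold=3)
for _ in range(3):
    breaker.record_share()
print(breaker.may_amplify())  # False: amplification paused pending review
breaker.cleared = True        # fact-checkers found no misinformation
print(breaker.may_amplify())  # True
```

As in the market version, the breaker doesn't delete anything; it only buys reviewers time before the machine resumes pushing the post.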

A digital ‘building code’ and ‘agency by design’

You can’t buy a toaster that hasn’t been tested to make sure it’s safe, but you can use software that might, metaphorically, burn your house down. This metaphor was used a lot after the mortgage meltdown in 2008, and it applies here, too. “In the same way that fire safety tests are conducted prior to a building being opened to the public, such a ‘digital building code’ would also result in a shift towards prevention of harm through testing prior to release to the public,” the report says.

The report adds a few great nuggets. While being critical of click-to-consent agreements, it says: “In the case of civil engineering, there are no private ‘terms and conditions’ that can override the public’s presumption of safety.” The report also calls for a concept called “agency by design” that would require software engineers to design opt-ins and other moments of permission so users are most likely to understand what they are agreeing to. This is “proactively choice-enhancing,” the report argues.

Abusability testing

I’ve long been enamored of this concept. Most software is now tested by security professionals hired to “hack” it, a process called penetration testing. This concept should be expanded to include any kind of misuse of software by bad guys. If you create a new messenger service, could it be abused by criminals committing sweetheart scams? Could it be turned into a weapon by a nation-state promoting disinformation campaigns? Could it aid and abet those who commit domestic violence? Abusability testing should be standard, and critically, it should be conducted early on in software development.

Algorithmic undue influence

In other areas of law, contracts can be voided if one party has so much influence over the other that true consent could not be given. The report suggests that algorithms create this kind of imbalance online, so the concept should be extended to social media.

“(Algorithmic undue influence) could result in individual choices that would not occur but for the duplicitous intervention of an algorithm to amplify or withhold select information on the basis of engagement metrics created by a platform’s design or engineering choice.”

“Already, the deleterious impact of algorithmic amplification of COVID-19 misinformation has been seen, and there are documented cases where individuals took serious risks to themselves and others as a result of deceptive conspiracy theories presented to them on social platforms. The law should view these individuals as victims who relied on a hazardously engineered platform that exposed them to manipulative information that led to serious harm.”


Social media has created content bubbles and echo chambers. In the real world, it can be illegal to engineer residential developments that encourage segregation. The report suggests similar limitations online.

“Platforms … tacitly assume a role akin to town planners. … Some of the most insidious harms documented from social platforms have resulted from the algorithmic herding of users into homogenous clusters. What we’re seeing is a cognitive segregation, where people exist in their own informational ghettos,” the report says.

In order to promote a shared digital commons, the report makes these suggestions:

Consider applying equivalent anti-segregation legal principles to the digital commons, including a ban on ‘digital redlining’, where platforms allow groups or advertisers to prevent particular racial or religious groups from accessing content.

Create legal tests focused on the ultimate effects of platform design on racial inequities and substantive fairness, regardless of the original intent of design.

Create specific standards and testing requirements for algorithmic bias.

Disclose conflicts of interest

The report includes a lot more on algorithms and transparency, but one item I found noteworthy: If a platform promotes or demotes content in a way that is self-serving, it has to disclose that. If Facebook’s algorithm makes a decision about a new social media platform, or about a government’s efforts to regulate social media, Facebook would have to disclose that.

These are nuggets I pulled out, but to grasp the larger concepts, I strongly recommend you read the entire report. (PDF)