Monthly Archives: January 2021

The State of Breach and Attack Simulation and the Need for Continuous Security Validation: A Study of US and UK Organizations

The purpose of this research, sponsored by Cymulate, is to better understand how the rapidly evolving threat landscape and the frequency of changes in IT architecture and in security are creating new challenges. The research focuses on the testing and validation of security controls in this extremely dynamic environment. We also seek to understand the challenges organizations face in detecting and remediating threats through assessments and testing of security controls.

Although change has always been a constant in both IT and cybersecurity, COVID-19 has accelerated business digitization and security adaptations. Seventy-nine percent of respondents say that they have had to modify security policies to accommodate working from home.

Sixty-two percent of respondents say their organizations had to acquire new security technologies to protect work-from-home (WFH) employees, and yet 62 percent of respondents say their organizations did not validate these newly deployed security controls.

Ponemon Institute surveyed 1,016 IT and IT security practitioners in the United States and United Kingdom who are familiar with their organizations’ testing and evaluation of security controls. An average of 13 individuals staff the security team in organizations represented in this research.

Following are key takeaways from the research.

  • Sixty-one percent of respondents say the benefit of continuous security validation or frequent security testing is the ability to identify security gaps due to changes in the IT architecture, followed by 59 percent of respondents who say it is the ability to identify security gaps caused by human error and misconfigurations.
  • Sixty percent of respondents say their organizations are making frequent changes to security controls: daily (27 percent of respondents) or weekly (33 percent of respondents). Sixty-seven percent of respondents say it is very important to test that changes applied to security controls have not created security gaps such as software bugs or vulnerabilities, misconfigurations and human error.
  • Seventy percent of respondents say it is important to validate the effectiveness of security controls against new threats and hacker techniques and tactics.

 The following findings are based on a deeper analysis of the research.

 Vigilance in testing the effectiveness of security controls increases confidence that security controls are working as they are supposed to.

  • Organizations that self-report being vigilant in testing the effectiveness of their security controls (38 percent of respondents) have a much higher level of confidence that their security controls are working as they are supposed to. Of the 22 percent of respondents who rate their level of confidence as high, almost half (47 percent) say they are vigilant in their effectiveness testing.

High confidence in security controls increases the security posture in an evolving threat landscape.

  • Organizations that have a high level of confidence that their security controls are working as they are supposed to are applying changes to security controls (e.g., configuration settings, software or signature updates, policy rules, etc.) daily or weekly.
  • These organizations have a much lower percentage of security controls that fail pen testing and/or attack simulation within each cycle. Specifically, 25 percent of respondents with high confidence say less than 10 percent of security controls fail pen testing and/or attack simulation.

“It is clear from the report that security experts see the need for continuous security validation. Given that the primary methodology for security testing is limited in scope, manual and lengthy, it does not meet the pace of new threats and business-driven IT change. It comes as no surprise that threat actors are free to exploit remote access, remote desktop, and virtual desktop vulnerabilities, as companies expanded the use of these technologies without security validation to support employees working from home,” said Eyal Wachsman, Co-Founder and CEO at Cymulate.

The report is organized according to the following topics.

  • The impact of current approaches to the testing of security controls on an organization’s security posture
  • Security control validation and Breach and Attack Simulation (BAS)
  • Steps taken to address possible security risks due to COVID-19
  • Perceptions about the effectiveness of Managed Security Service Providers (MSSPs)
  • Differences between organizations in the US and UK

Read the entire report on Cymulate’s website


The ‘de-platforming’ of Donald Trump and the future of the Internet

Bob Sullivan


That’s the word of the week in tech-land, and it’s about time. After the storming of the U.S. Capitol by a criminal mob on Wednesday, a host of companies have cut off President Trump’s digital air supply. His Tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied Internet service by Amazon.

Let the grand argument of our time begin.

The story of Donald Trump’s de-platforming involves a dramatic collision of Net Neutrality, and Section 230 “immunity,” and free speech, and surveillance capitalism, even privacy. I think it’s the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I’ll try to nudge the ball forward here.

This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.

The deplatforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it's very fair to ask: if this can happen to Trump, and if it can happen to Parler, can't it happen to anyone?

But let’s examine it the way teen-agers do in their first year of college. Let’s not scream “free speech” or “slippery slope” at each other and then pop open a can of beer, assuming that that’s some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.

I’ll start here. You can sell guns and beer in most places in America. You can’t sell a gun to someone who walks into your store screaming, “I’m going to kill someone,” and you can’t sell beer to someone who’s obviously drunk and getting into the driver’s seat. You can’t keep selling a car — or even a toaster! — that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With…some exceptions! This is the real world.)

I’ll admit freely: This analogy is flawed. When it comes to technology law — and just technology choices — everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn’t my logic. It’s Thomas Jefferson’s logic! He’s the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn’t like that. We’re adults. We have to make these choices. They will be hard. They’re going to involve a version 1, and a version 2 and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can’t cede our agency in an effort to find silver bullet solutions from the past. We have to make them up as we go along.

That’s why the best thing I read the past few days about the Trump deplatforming was this piece by Techdirt’s Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn’t they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly.

So, Oremus is mostly correct that they’re making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That’s never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it’s easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it’s tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.

A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases — none of which I’d argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn’t get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.

But every case is different.

For a long time, I have argued that tech firms’ main failing is they don’t spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality…tech has driven all these things. The standard “it’s not the tech, it’s the people” argument is another “pop open a beer” one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it’s a dangerous rationalization. Would so many people believe the Earth is flat without YouTube’s recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.

If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That’s the cost of doing business. For years, big tech platforms have tried to get away with “community moderation” — i.e., they’ve been cheap. They haven’t spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it’s quite possible the entire idea of Facebook wouldn’t exist if it had to be safe for users. Safety doesn’t scale. Safety is expensive. It’s not sexy to investors.

How did we get here? In part, thanks to that Section 230 you are hearing so much about. You’ll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who’ve not cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (Here’s my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what I am hinting at here — to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it’s a law written in an age before Amazon became a giant, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing out the baby with the bathwater, are both the behavior of children, not adults. We’re going to have to make some things up as we go along.

Here’s the thing about “free speech” on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely) and say whatever he wants. Suspending his account is not censorship. As I explain in my Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn’t telling Twitter what to post, but perhaps making sure there is more diversity in technology tools available to speakers.

But here’s the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn’t guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It’s one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington D.C. It’s quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That’s what some people are calling free speech right now. James Madison didn’t include a guaranteed right to “virality” in the Bill of Rights. No one guaranteed that Thomas Paine’s pamphlets were to be shoved under everyone’s doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We’re going to need new ways to think about this. In the algorithmic world, the beer-popping line, “The solution to hate speech is more speech,” just doesn’t cut it.

I’m less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That’s an uncomfortable intellectual exercise, and one we shouldn’t dismiss out of hand.

Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform’s content needs to be death threats, or about organizing violence, before you’ll stop defending its right to exist? Let’s say we heard about a tool named “Anteroom” which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom on its App Store?

So, what do we do with Parler? Despite all this, I’m still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people’s living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren’t thinking.

These are complex issues and we will need our best, most creative legal minds to ponder them in the years to come. Here’s one: Matt Stoller, in his BIG newsletter about tech monopolies, offers a thoughtful framework for dealing with this problem. I’d like to hear yours. But don’t hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth.

Should conservative social network Parler be removed from AWS and the Google and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?

My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there are no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.

In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is a legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way is certainly a public problem.