Monthly Archives: June 2020

The 2020 Study on the State of Industrial Security

Larry Ponemon

Ponemon Institute is pleased to present the findings from The 2020 State of Industrial Security Study, sponsored by TÜV Rheinland. The purpose of the research is to understand cyber risks across a broad spectrum of industries and the steps organizations are taking to reduce cyber risk in the operational technology (OT) environment.

Ponemon Institute surveyed 2,258 cybersecurity practitioners in the following industries: automotive, oil and gas, energy and utilities, health and life science, industrial manufacturing and logistics and transportation. All respondents are responsible for securing or overseeing cyber risks in the OT environment and are aware of how cybersecurity threats could affect their organization.

In the context of this research, Operational Technology (OT) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices. Simply put, OT is the use of computers to monitor or alter the physical state of a system, such as the control system for a power station. The term has become established to highlight the technological and functional differences between traditional IT systems and the industrial control systems (ICS) environment.

The OT environment is vulnerable to cyberattacks: 57 percent of respondents say their organizations’ security operations and/or business continuity management teams believe there will be one or more serious attacks within the OT environment. Almost half of respondents say it is difficult to mitigate cyber risks across the OT supply chain (49 percent) and that cyber threats present a greater risk in the OT than in the IT environment (48 percent).

The following findings reveal the cybersecurity vulnerabilities in the OT environment.  

  • OT and IT security risk management efforts are not aligned. Sixty-three percent of respondents say OT and IT security risk management efforts are not coordinated, making it difficult to achieve a strong security posture in the OT environment. The management of OT security is painful because of the lack of enabling technologies in OT networks, complexity and insufficient resources. 
  • On average, organizations had four security compromises that resulted in the loss of confidential information or disruption to OT operations. Forty-seven percent of respondents say OT technology-related cybersecurity threats have increased in the past year. The top three cybersecurity threats are phishing and social engineering, ransomware and DNS-based denial of service attacks. One-third of respondents say such exploits have resulted in the loss of OT-related intellectual property. 
  • The majority of organizations have not achieved a high degree of cybersecurity effectiveness. Less than half of respondents say they are very effective in responding to and containing a security exploit or breach (48 percent), continually monitoring the infrastructure to prioritize threats and attacks (47 percent) and pinpointing sources of attacks and mobilizing the right set of technologies and resources to remediate the attack (47 percent of respondents). 
  • To minimize OT-related risks, organizations need to replace outdated and aging connected control systems in facilities, according to 61 percent of respondents. More than half (52 percent of respondents) say vulnerable software is creating risks in the OT environment. 
  • Not enough expertise and budget are often cited as reasons for not having a strong security posture in the OT environment. Organizations represented in this research spend an average of $64 million annually on cybersecurity operations and defense (OT and IT combined). An average of 26 percent of this budget, or approximately $17 million, is allocated to the security of OT assets and infrastructure, and an average of 17 percent, or approximately $10 million, is allocated specifically to OT cybersecurity. Respondents say their OT budgets are inadequate to properly execute their cybersecurity strategy. 
  • Accountability for executing a successful cybersecurity strategy is dispersed. Respondents were asked who is most accountable for executing a successful cybersecurity strategy. Only 20 percent of respondents say it is the OT security leader, followed by the CIO/CTO (18 percent) and the IT security leader (17 percent). 
  • Organizations are lagging behind in adopting advanced security technologies. Only 38 percent of respondents say their organizations are using automation, machine learning and artificial intelligence to monitor OT assets. The majority of companies are not integrating security and privacy by design in the engineering of OT control systems.

To read the full report, visit TUV Rheinland’s website.

If we’re going to talk about Section 230, let’s get it right

Now we’ve started something

Bob Sullivan

With President Donald Trump threatening retribution against Twitter with an executive order, you’re going to hear a lot about Section 230 this week — and maybe for many weeks. The ensuing discussion could shake the Internet to its very roots.  That’s going to make legal scholars very happy, but it might seem like a dizzying discussion for most.  That’s by design. Interested parties are conflating all kinds of big ideas to muddy the waters here: the First Amendment, innovation, bias, abuse, millions of followers, billions of dollars.  I’m going to try to sort it out for you here.  Who am I to do that? Well, I’m old enough to remember when the Communications Decency Act and its Section 230 was passed into law.

But if you are going on this journey with me, here are the rules: Nothing is as simple nor as absolute as it sounds.  Free speech isn’t limitless.  “Speech” isn’t even what you think it is. Immunity isn’t limitless. The First Amendment doesn’t generally apply to private companies…most of the time.  But in a rare confluence of events, there are reasons for both conservatives and liberals to take a good long look at updating and fixing Section 230, which has been the source of much profit for corporations and much pain for Internet users since it became law in 1996.

(And if you really want to understand Section 230, I recommend reading this very readable 25-page academic paper titled The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. Authors Danielle Keats Citron and Mary Ann Franks do a great job explaining the history of the law and the myths that hold America back from reasonable reform. Or, even better, consider The Twenty-Six Words That Created the Internet, a book by Jeff Kosseff, all about Section 230.)

Section 230 was written at the time of Prodigy and CompuServe, when online services were mainly text-based chat tools, and virtually no consumers used websites.  These services had a problem: Were they liable for everything users said? Could they be sued for defamation, or charged criminally, if users misbehaved? To use the kind of shorthand that journalists love but lawyers hate, should they be treated like publishers of the content — akin to a newspaper editor or book publisher — or mere distributors, akin to a newsstand owner?  Courts were split on the matter, and that terrified tech firms. Imagine the liability a company like Google, or Facebook, or America Online, would face if it could be charged with every crime committed on its service.

The defensive shorthand I was taught at my startup, inaccurate as it might be, was this: When a tech company actively moderates user content, it becomes a publisher and increases liability. When a tech company just shoves the stuff automatically out into the world, it’s merely a newsstand, a distributor.  So: Don’t touch!

That free-for-all worked about as well as you might imagine (Porn! Stolen goods! Harassment!) so lawmakers tried to help by passing Section 230.  It sounds straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The idea was to shield online service providers who tried to do the right thing (stop harassment and other crimes) from liability.  The law was actually meant to encourage content moderation. It gave service providers a shield against responsibility for third-party content.  But what a winding road it’s been since then.

First, the good: Plenty of folks see Section 230 as the First Amendment of the Internet. Scholar Eric Goldman actually argues that it’s better than the First Amendment. It’s inarguable that online services have thrived since then, and plenty of them credit Section 230.

However, this simple you’re-not-responsible-for-third-party-content rule has been extended by courts and corporations far beyond its original intention. Recall that it was written right around the time Amazon was founded.  The Internet was nearly 100% text-based speech, digital conversation, at the time.  Today, it’s Zoom and car buying and television and it elects a U.S. president.

So that leads us to the moment at hand. The Internet is awash in disinformation, harassment, crime, racism…the dark side of humanity thrives there.  Plenty of people have been driven from its various platforms through doxxing, gender abuse, or simple exhaustion from nasty arguments. I argue all the time that it has made us dumber as a people, offering the Flat Earth Movement as proof. In short, the Internet sucks (More than a decade later, this is still a great read). As Citron and Franks say:

“Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sexual assault victims are discouraged from holding perpetrators accountable…. An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech. The benefits Section 230’s immunity has enabled surely could have been secured at a lesser price.”

For better and worse, this is a good time to reconsider what Section 230 hath wrought.

For a quick moment, here are some obstacles to the discussion, forged by confusion. Neither you, nor I, nor President Trump can claim a First Amendment violation when Twitter or Facebook or Instagram suppresses our speech. Generally, the First Amendment applies to governments, not private enterprises.  Facebook, as any true conservative or libertarian would tell you, is free to do what it wants with its company, and the president is free to not use it.  In fact, the government compelling a social media company to say certain things or not say other things — to argue it could not add a link for fact-checking — is a rather obscene violation of the company’s First Amendment rights.

Even on this fairly clear point, there is some room for discussion, however.  In Canada, courts have ruled that social media is so ubiquitous that it can be akin to a public square, according to Sinziana Gutui, a Vancouver privacy lawyer.  So might the U.S. some day feel that cutting off someone’s Twitter account is akin to cutting off their telephone line or electricity? Perhaps.  It sure seems less strained to suggest President Trump simply find another platform to use for his 280-character messages.

And even on this issue of speech, there is confusion. U.S. courts have broadly expanded the definition of speech far beyond talking, publishing pamphlets, or writing posts on an electronic bulletin board.  Commercial activity can be considered speech now.  And that expanded definition has helped websites argue for Section 230 immunity when their members are committing illegal acts — such as facilitating the sale of counterfeit goods, or guns to criminals known to be evading background checks.

Immunity often encourages bad behavior, a classic “moral hazard,” as Franks has written. Set aside fake autographs and illegally purchased domestic violence murder weapons for the moment — the Internet is drowning in antagonism, bots, and harassment that has made it inhospitable for women and men of good faith. It rewards extremism.  It is unhealthy for people and society. It’s not going to fix itself. Citron and Franks again:

“Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated when users like, click, and share. Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest. Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.” On Twitter, ads can be directed at users interested in the words “white supremacist” and “anti-gay.” If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content. Research shows that people are more attracted to negative and novel information. Thus, keeping up destructive content may make the most sense for a company’s bottom line.”

Facebook profits massively off all this social destruction. We learned this week that employees inside Facebook have come up with some very clever technological solutions to this problem, only to be kneecapped by Mark Zuckerberg, clearly drunk on conveniently-profitable take-no-responsibility libertarian ideals.

What’s the solution? For sure, that’s much harder.  Citron and Franks suggest adding a simple “reasonable” requirement on companies like Facebook, meaning they have to take reasonable steps to police users in order to maintain Section 230 immunity. Reasonable is a difficult standard, possibly leading to endless ring-around-the-rosie debate, but it is a common standard in U.S. law. Facebook’s engineers came up with notions worth trying, detailed in this Wall Street Journal story, such as shifting extreme discussions into sub-groups.  The firm could also stop giving extra algorithm juice to obsessives who post 1,000 times a day.

As always, a mix of innovation and smart rules that balance interests is needed.

It won’t be easy, but we have to try. So, it’s good that President Trump has shined a light on Section 230. The discussion is long overdue, as is the will to act. Will the discussion be productive? Probably not if it happens on Twitter. Definitely not if it’s focused on an imaginary social media bias against Trump or Trump’s 80 million followers, who clearly have no trouble finding each other. Instead, let’s focus on making the world safe again for reasonable people.