Author Archives: BobSulli

A tale of two security operations centers – 78% say the job is ‘very painful’

The 2020 Devo SOC Performance Report tells a tale of two SOCs. Based on the results of an independent survey of IT and IT security practitioners, the second annual report looks at the latest trends in security operations centers (SOCs), both positive and negative. The report presents an unvarnished view of the current state of SOC performance and effectiveness based on responses from people with first-hand knowledge of SOC operations, identifies areas of change from the prior year’s survey, and highlights the challenges that continue to hinder many SOCs from achieving their performance goals.

Devo commissioned the Ponemon Institute to conduct a comprehensive, independent survey in March and April 2020 of professionals working in IT and security. The survey posed a broad range of questions designed to elicit insights into several key aspects of SOC operations, including:

  • The perceived value of SOCs to organizations
  • Areas of effectiveness and ineffectiveness
  • The ongoing challenge of SOC analyst burnout, its causes, and effects

The picture painted by the data from nearly 600 respondents shows that while some aspects of SOC performance show modest year-over-year improvement, major problems persist that continue to adversely affect organizational cybersecurity efforts and the well-being of SOC analysts.

A Tale of Two SOCs
Overall, the survey results tell a tale of two SOCs. One is a group of high-performing SOCs that are, for the most part, doing reasonably well in delivering business value. This group generally enjoys sufficient talent, tools, and technology to have a fighting chance of overcoming the relentless challenges that commonly afflict many SOCs.

Sharply contrasting with the high performers are the low-performing SOCs. This group struggles greatly because they are unable to overcome the myriad problems hindering their ability to deliver better performance. These SOCs generally lack the people, technology, and budget resources to overcome these challenges, resulting in them sinking even lower in effectiveness, putting their organizations at ever-greater risk of cyberattacks.

This report examines the specific areas where high- and low-performing SOCs most diverge, while also shining a light on the challenges with which both groups struggle. By identifying the differences and similarities between the two classes of SOCs, it illuminates the variable return on investment these SOCs are delivering to their organizations.

The Good(-ish) News
Before delving into the most significant—and in many cases, disturbing—findings from the survey, let’s start by looking at how organizations rate the value their SOC provides. This year, 72% of respondents said the SOC is a key component of their cybersecurity strategy. That’s up from 67% in 2019. This increase reflects more respondents feeling their SOC plays an important role in helping the organization understand the external threat landscape.

Other findings with a somewhat positive take on SOC performance include:

There is an eight-percentage-point increase among respondents who say their SOC is highly effective in gathering evidence, investigating, and identifying the source of threats. So far, so good. However, when you realize that last year only 42% of respondents felt that way, this year’s “jump” to 50% means that half of those surveyed still don’t believe their SOC is performing particularly well.

Respondents see improvements in their SOC’s ability to mitigate risks. This is another example of good news/bad news. Last year only 40% of respondents felt their SOC was doing a good job reducing risks. In 2020, a still-modest 51% say their SOC is getting the job done in this area. That’s a nice increase, but it still means that almost half of all respondents find their SOC lacking in this important capability.

Contributing to this rise, more SOCs (50%, up from 42% in 2019) are providing incident-response capabilities including attack mitigation and forensic services. The brightest spot in this aspect of SOC performance is that in 2020, 63% of respondents say SOCs are helpful in understanding the external threat environment by collecting and analyzing information on attackers and their tactics, techniques, and procedures (TTPs), up from 56% last year.

There was a slight bump in the alignment between the SOC and the objectives and needs of the business. This year 55% of respondents say their SOCs are fully aligned (21%) or partially aligned (34%), a slight increase from 51% in 2019. One possible reason for the improved alignment is that more lines of business are leading the SOC team (27% this year vs. 18% in 2019). But that practice also could be contributing to the rise in turf battles and silo issues. More on that later.

Organizations are investing in SOC technologies. Seventy percent of respondents say it is very likely (34%) or likely (36%) that their organization will open up their wallets to introduce new tools designed to improve SOC operations.

The SOC forecast is cloudy. A majority of organizations, 60%, now operate their SOC mostly (34%) or partly (26%) in the cloud. In 2019, only 53% of organizations identified as mostly cloud (29%) or operating a hybrid environment (24%). SOCs with limited cloud presence are declining, with only 40% of organizations identifying as mostly on-premises, down from 47% in 2019. This trend toward more cloud-based SOC operations reflects the overall move of IT and other business operations technologies taking advantage of the scale and cost benefits of cloud deployments.

The Really-Not-So-Good News

The first Devo SOC Performance Report in 2019 showed that the issue of analyst turnover due to stress-related burnout was significant. Unfortunately, it’s become an even bigger problem in 2020:

  • 78% say working in the SOC is very painful, up from 70% last year
  • An increased workload is the number-one reason for burnout according to 75% of respondents, up from 73%
  • Information overload is an even bigger problem this year (67%) than in 2019 (62%)
  • And 53% say “complexity and chaos” in the SOC is a major pain point, up from 49%

For all of these reasons, and many more as you’ll see, organizations must find ways to reduce the stress of working in the SOC—now.

Respondents are concerned that frustrated, stressed, and burnt-out analysts will vote with their feet and quit their jobs. An appalling 60% say the stress of working in the SOC has caused them to consider changing careers or leaving their jobs. Even worse, 69% of respondents say it is very likely or likely that experienced security analysts would quit the SOC, more discouraging than the 66% who felt that way last year.

Turf tussles and silo skirmishes are killing SOC effectiveness. This is another problem that’s getting worse. This year, 64% of respondents say these internal battles over who is in charge of what are a huge obstacle to their SOC’s success, a disheartening increase from 57% in 2019. 27% of respondents say lines of business are in charge of the SOC, an increase from 18% in 2019. However, 17% of respondents say no single function has clear authority and accountability for the SOC. And it’s not a stretch to connect the dots and realize that an organization infected with in-fighting among its technology teams is likely to be more vulnerable to the potentially devastating effects of a successful cyberattack.

Budgets are not adequate to support a more effective SOC. SOC budgets increased slightly year over year, but not enough to close the gaps in effectiveness and performance. The average annual cybersecurity budget for the survey respondents’ organizations rose to $31 million this year, a slight bump from $26 million. The average funding allocation for the SOC is 32% of the total cybersecurity budget or $9.9 million, a slight increase from 30% or $7.8 million in 2019. These figures are heading in the right direction, but they’re still insufficient to fund the important work of an effective SOC team.

You can’t stop what you can’t see. SOC teams are handcuffed by limited visibility into the attack surface, which 65% of respondents cite as one of the primary causes of SOC analyst pain.

The mean time to resolution remains unacceptably high. MTTR is one of the benchmark metrics for SOC performance, and the survey responses show it is another significant problem area. According to 39% of respondents, MTTR can take months or even… years! Less than a quarter of respondents, 24%, say their SOC can resolve security incidents within hours or days. Compare these unsettling metrics with the industry estimate that it takes skilled hackers less than 19 minutes to move laterally after compromising the first machine in an organization. This points to a significant gap for the vast majority of SOCs: only 8% have an estimated MTTR of “within hours,” down even from the 9% of organizations in 2019.
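For SOC teams that want to benchmark themselves against these figures, MTTR is straightforward to compute from incident timestamps. A minimal sketch, with invented incident data for illustration:

```python
from datetime import datetime, timedelta

def mean_time_to_resolution(incidents):
    """Average time between detection and resolution across incidents.

    `incidents` is a list of (detected_at, resolved_at) datetime pairs.
    """
    durations = [resolved - detected for detected, resolved in incidents]
    return sum(durations, timedelta()) / len(durations)

# Hypothetical incident log: one resolved the same day, one that dragged on.
incidents = [
    (datetime(2020, 3, 1, 9, 0), datetime(2020, 3, 1, 14, 30)),
    (datetime(2020, 3, 5, 11, 0), datetime(2020, 4, 20, 11, 0)),
]
print(mean_time_to_resolution(incidents))
```

Even a single weeks-long outlier drags the mean far past the “within hours” bar the survey describes, which is one reason some teams also track median time to resolution.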

Is it time for the rise of the machines? It’s obvious from these survey results that the trend of SOC analyst stress, burnout, and turnover is getting worse. The question is: what can organizations do to turn the tide? Well, if you listen to 71% of those surveyed, a big step in the right direction would be to introduce automation to the analyst workflow, and 63% say that implementing advanced analytics/machine learning would help. Respondents feel organizations should invest in technologies that would reduce analyst workloads. They believe automation and machine learning are even more important than a normalized work schedule in reducing SOC pain. The idea is to automate many of the repetitive, pressure-packed tasks typically performed by Tier-1 analysts, who often have had enough of SOC work before they ever make it to Tier-2.

READ THE FULL REPORT AT DEVO’S WEBSITE.


I was asked to help both the defense and the prosecution at Father of ID theft’s sentencing

This is quite a business card. One of my prized possessions from James’ better days. ‘Because there should be only one you.’

Bob Sullivan

I ended up in a press release issued by the Department of Justice last week — I believe that was a first for me. Fortunately, I was not the subject of the release. My book, Your Evil Twin, was used by prosecutors to help put a notorious identity thief behind bars for 17 years. That criminal was James Rinaldo Jackson, whom I had named “The Father of Identity Theft” in my book almost 20 years ago.

Thus ended — for now — a crazy episode in my life that involved an old prison pen pal and a federal case in which I was asked to help both the prosecution and the defendant.

Most recently, James had lit fires in his house and kept a woman and her three children hostage while trying to destroy evidence after police surrounded his place …and soon after, tried to use a dead man’s identity to buy a Corvette.

To James, that was a pretty typical Tuesday.

James’ story is so convoluted, episodic, tragic, and amorphous that I can only hope to offer you a glimpse in this email. I’m hard at work looking for broader ways to tell this crazy story. While he’s now going to be in a federal prison for 207 months, likely the rest of his life (he’s 58), I can’t help thinking his story isn’t really over.

I hadn’t thought about James for nearly a decade when I received an email from the DOJ about his case last December. James had been in and out of jail and managed to squirt back out into public life again and again. This time, DOJ wanted to throw the book at him — MY book — and a federal agent wanted to know if I had any additional evidence I could share.

I had spent a couple of years writing letters back and forth to James when he was in jail for previous crimes. In thousands of single-spaced, typed pages, he had disclosed amazing “how I did it” details about his early days committing insurance fraud, and then trail-blazing in ID theft.

Like all journalists, I was in a strange spot. Generally, reporters don’t share information with prosecutors unless compelled to do so by a court. On the other hand, it really is best for James and the rest of the world that he be protected from society and vice versa.

While I pondered the situation and made plans to cover his sentencing hearing in Tennessee, I was contacted by James’ court-appointed defense attorney. James had told his legal team that I could be a character witness for him at sentencing. His letters to me were always framed as an effort to warn the world about a coming wave of ID theft — and he was right about that. He thought perhaps I could help the judge go easy on him.

I called journalism ethics experts to discuss my next steps and stalled. Then, Covid hit. James’ sentencing was repeatedly delayed. I suspected he might somehow get out of a long jail sentence. But last week, he was put away for a long time without my involvement.

“Aggravated identity theft and schemes to defraud or compromise the personal and financial security of vulnerable and deceased victims will not be tolerated,” said U.S. Attorney D. Michael Dunavant in the press release. “This self-proclaimed ‘Father of Identity Theft’ will now have to change his name to ‘Father Time’, because he will be doing plenty of it in federal prison.”

James has done some very bad things, and hurt a lot of people. Still, I felt a strange sadness. I thought about all the opportunities he had to set his life straight; all the second chances wasted. He just couldn’t NOT be a criminal. I’ve met other criminals like this in my life. One rather pointedly told me, “I just get too comfortable.” For some people, it seems, comfort is intolerable.

If you could indulge me for a bit, let me go back in time, to when I was first contacted by prosecutors about James:


When a federal prosecutor sends you an email with a subject line that’s the title of a book you wrote almost 20 years ago, you call immediately.

“This is probably the least surprising call you’ve ever received, but James is in trouble again,” the prosecutor said to me.

James Jackson had recently tried to burn his house down with his female friend and her kids held hostage inside, sort of. Then he was arrested at a car dealership while trying to buy a car with a stolen identity. A Corvette. James doesn’t do anything small.

Prosecutors had found my book when they executed a search of James’ home and his life. Of course they did.

Two decades ago, James Rinaldo Jackson — the man often credited with ‘inventing’ identity theft — was my prison pen pal. I was a cub technology reporter at MSNBC and I had latched onto a new, scary kind of financial fraud. It was so new, the term identity theft hadn’t really been coined yet. James was my teacher. We spent years corresponding; I often received two, three, even four missives a week. He’s hopelessly OCD, and the letters were often dozens of pages, single-spaced, impeccably typed. Slowly but surely, hidden inside pages of rambling prose, James unraveled for me all the tricks he used to steal millions of dollars, to amass a fleet of luxury cars, to impersonate a Who’s Who of famous CEOs and Hollywood personalities of the 1990s — often armed only with a payphone and some coins.

At one point, James stole Warner Bros CEO Terry Semel’s identity, and sent the evidence to Semel’s home via FedEx, in the form of a movie pitch — “Really, sir, it would be an important film. People are at great risk,” he wrote. For good measure, he included evidence of stolen IDs from famous actors he hoped would star in the movie.

James Jackson’s misadventures became the core of my book about identity theft, Your Evil Twin, published in 2003. In it, I dubbed James The Father of Identity Theft. The name stuck.

Years later, James served his debt to society, got out, and we finally met. He was beyond charming, and I liked him. It was easy to see why people would give him hundreds of thousands of dollars. He became mini-famous for a while, starring in infomercials about tech crimes. The last time I saw him, we spoke on a panel together in New York at a bank fraud conference. I remember riding up a glorious escalator with him in the heart of Times Square, and he beamed that someone else was paying for his $400-a-night hotel room. James could easily have become another Frank Abagnale, the real-life criminal-turned-hero from the film Catch Me if You Can, who now nets 5-figure payouts for speeches.

Instead, he couldn’t even be James Jackson.

James’ insatiable desire to be someone more important than himself — not to mention his desire for Corvettes — couldn’t be tamed. James took that escalator and just kept going down, so low that he eventually found himself once again surrounded by police. A new fleet of luxury cars had attracted law enforcement attention. It’s crazy, but when I heard about the fire, I was sad for James. I know what happened. He panicked and started lighting computers and paperwork on fire, hoping to destroy the evidence.

To James, crime was always just a game. He never “hurt” anyone; he just talked his way into the nicer things in life. In fact, he usually targeted people who’d recently died — stealing their money is rather trivial — so where’s the victim? The way he flaunted the proceeds, it was also obvious to me that James was always desperate to be caught. Who tries to buy a Corvette while on the run for trying to burn down his house with his family inside?

The prosecutor called me for help putting James behind bars for good. He wanted more evidence that would convince a judge and jury that James is beyond reform, that 20 years ago James had told me things that still possess him today. And I have (had? It was a long time ago) mountains of letters that might sound like confessions to a jury today.

This is, to put it mildly, an unusual request. Journalists don’t share information with prosecutors. But then, James is a most unusual case. It would be good for James, and the rest of the world, for James to be kept away from telephones and computers forever. But that’s not my job. So, now what?

An excerpt of my book detailing James Jackson’s original crimes, originally published by NBC News, can be found here:

http://www.nbcnews.com/id/5763781/ns/technology_and_science-security/t/id-thief-stars-tells-all/#.XXfrUyhKjIU

Eventually the NYTimes covered some of his story:

https://www.nytimes.com/2002/05/29/nyregion/identity-theft-and-these-were-big-identities.html

Here is James on an infomercial, about as close as he ever came to straightening out his life:

Here’s a local story about the more recent fire at James’ house and his arrest:

https://wreg.com/2015/02/28/man-accused-of-stealing-from-the-dead-abusing-children/

And here’s a local story about his sentencing, with plenty of details from Your Evil Twin:

https://www.charlotteobserver.com/news/nation-world/national/article246155270.html

Bouncing back from a breach — it’s getting better

As the threat landscape continues to worsen, it is more important than ever for organizations to be able to withstand or recover quickly from the inevitable data breach or security incident. Ponemon Institute and IBM Security are pleased to release the findings of the fifth annual study on the importance of cyber resilience to ensure a strong security posture. In this year’s study, we look at the positive trends in organizations improving their cyber resilience but also the persistent barriers that exist to achieving cyber resiliency.

The use of cloud services supports cyber resilience. As part of this research, we identified respondents who self-reported that their organizations have achieved a high level of cyber resilience and are better able to mitigate risks, vulnerabilities and attacks. We refer to these organizations as high performers.

As shown in Figure 1, 74 percent of respondents in high performing organizations vs. 58 percent of respondents in other organizations understand the importance of cyber resilience to achieving a strong cybersecurity posture. High performing organizations are also more likely to recognize the value of cloud services to achieving a high level of cyber resilience (72 percent of high performing respondents vs. 55 percent of respondents in the other organizations). According to these high performing organizations, cyber resilience is improved because of the cloud services’ distributed environment and economies of scale.

[NOTE: We define cyber resilience as the alignment of prevention, detection and response capabilities to manage, mitigate and move on from cyberattacks. This refers to an enterprise’s capacity to maintain its core purpose and integrity in the face of cyberattacks. A cyber resilient enterprise is one that can prevent, detect, contain and recover from a myriad of serious threats against data, applications and IT infrastructure.]

In this section of the report, we provide an analysis of the global consolidated key findings. Ponemon Institute surveyed more than 4,200 IT and IT security professionals in the following countries: The United States, Australia, the United Kingdom, France, Germany, Middle East (UAE/Saudi Arabia), ASEAN, Canada, India and Japan. Most of them are involved in securing systems, evaluating vendors, managing budgets and ensuring compliance.

The complete audited findings are presented in the Appendix of the full report, published on IBM’s website. The findings there are organized into the following topics:

  1. Since 2015, organizations have significantly improved their cyber resilience
  2. The steps taken that support the improvement in cyber resilience
  3. More work needs to be done to become cyber resilient
  4. Lessons from organizations with a high degree of cyber resilience
  5. Country differences

Here is a sample of section 1:

Since 2015, organizations have significantly improved their cyber resilience

More organizations have achieved a high level of cyber resilience. In 2016, less than one-third of respondents said their organizations had achieved a high level of cyber resilience. In this year’s research, the majority of organizations say they have. With the exception of the ability to contain a cyber attack, significant improvements have been made in the ability to prevent and detect an attack.

Improvement in cyber resilience has steadily increased since 2016. The percentage of respondents who say their cyber resilience has significantly improved or improved rose from 27 percent in 2016 to almost half (47 percent) in 2020.

The number of cyber attacks prevented is how organizations measure improvement in cyber resilience. Of the 47 percent of respondents who say their organizations’ cyber resilience has improved, 56 percent say improvement is measured by the ability to prevent a cyber attack.

As discussed previously, since 2015 the ability to prevent such an incident has increased significantly from 38 percent of respondents who said they have a high ability to 53 percent in this year’s research. Another indicator of improvement is the time to identify the incident, according to 51 percent of respondents. Similar to prevention, the ability to detect an attack has improved since 2015.

Expertise, governance practices and visibility are the reasons for improvement. To achieve a strong cyber resilience posture, the most important factors are hiring skilled personnel (61 percent of respondents), improved information governance practices (56 percent of respondents) and visibility into applications and data assets (56 percent of respondents). These are followed by such enabling technologies as analytics, automation and use of AI and machine learning.

Least important are C-level buy-in and support for the cybersecurity function and board-level reporting on the organizations’ cyber resilience. Currently, only 45 percent of respondents say their organizations issue a formal report on the state of cyber resilience to C-level executives and/or the board.

Having skilled staff is the number-one reason cyber resilience improves, and the loss of such expertise prevents organizations from achieving a high level of cyber resilience, according to 41 percent of respondents. Respondents also cite an adequate budget, the ability to overcome silo and turf issues, visibility into applications and data assets, and properly configured cloud services as essential to improving their organizations’ cyber resilience.

To read the full report, visit IBM’s website.

Tracking the Covid tracker apps — dangerous permissions and ‘legitimizing surveillance’

Bob Sullivan

One app requires permission to disable users’ screen locks. Another claims it doesn’t collect detailed location information, but accesses GPS data anyway. Still another breaks its own privacy policy by sharing personal information with outside companies. And nearly all of them request what Google defines as “dangerous permissions.”

Is this the latest cache of hacker apps sold in the computer underground? No. These stories arise from the 121 Covid-19 apps that governments around the world have released in an attempt to track and control the virus. Security researchers are worried the apps can be used to track and control populations — long after the pandemic has passed. And even if governments have the best intentions in mind, cybercriminals might be able to access the treasure trove of data collected by these apps. After all, they’ve been built hastily, under pressure as Covid-19 has raged around the globe.

Megan DeBlois

It makes sense to use technology to fight the virus. Contact tracing — identifying anyone a sick patient might have infected — is a staple technique to stem outbreaks. It’s easy to imagine a system that uses smartphones to ease this complicated task. But balancing public health with privacy concerns is tricky, if not impossible.


Volunteers who are worried about these dark possibilities recently launched Covid19AppTracker.org. Contributors keep track of the security analyses completed for each app and have made their database available for free download. Qatar’s Ehteraz app — which is mandatory, and has already been downloaded 1 million times — allows the developer to unlock users’ smartphones, according to the organization’s database. Amnesty International’s analysis discovered a vulnerability in Qatar’s app that would have allowed hackers to access highly sensitive information collected by the app.

“The speed at which this technology is being deployed …should terrify people,” said Megan DeBlois, Covid19AppTracker.org’s volunteer product manager.  “I would argue in a lot of cases (this is) legitimizing surveillance with the lens of a public good, but without a lot of transparency.”

Most of the apps in Covid19AppTracker’s database are made by governments outside the U.S. Contact-tracing apps have been released rapidly across the E.U. and in places like Saudi Arabia and India. In the U.S., states have been slow to push out tracker apps, partly out of privacy and security concerns.

DeBlois recently presented the group’s findings at the virtual DefCon hacker convention in a talk titled “Who Needs Spyware When You Have Covid-19 Apps?”

There were some obvious patterns. While EU apps were less invasive than apps produced by other governments, nearly all of them requested permissions that Google defines as “dangerous,” such as precise location information — in fact, 74% of the apps in the database ask for GPS data. Sixteen apps request microphone access, 44 ask for camera access, and seven try to access phone contacts.
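Because an Android app must declare every permission it requests in its manifest, checks like the ones behind these statistics can be approximated by scanning `AndroidManifest.xml`. A minimal sketch — the permission list here is an illustrative subset, not Google’s full “dangerous” list:

```python
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

# Illustrative subset of permissions Google classifies as "dangerous";
# the authoritative list is in the Android permissions documentation.
DANGEROUS = {
    "android.permission.ACCESS_FINE_LOCATION",
    "android.permission.RECORD_AUDIO",
    "android.permission.CAMERA",
    "android.permission.READ_CONTACTS",
}

def dangerous_permissions(manifest_xml: str) -> set:
    """Return the dangerous permissions an AndroidManifest.xml requests."""
    root = ET.fromstring(manifest_xml)
    requested = {
        elem.get(ANDROID_NS + "name") for elem in root.iter("uses-permission")
    }
    return requested & DANGEROUS

# Hypothetical manifest fragment for demonstration.
manifest = """<manifest xmlns:android="http://schemas.android.com/apk/res/android">
  <uses-permission android:name="android.permission.ACCESS_FINE_LOCATION"/>
  <uses-permission android:name="android.permission.INTERNET"/>
</manifest>"""
print(dangerous_permissions(manifest))
```

In practice the manifest inside an APK is binary-encoded and must first be decoded with a tool such as apktool; this sketch assumes the decoded XML.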

The group’s database includes purely informational apps, symptom trackers, and contact-tracing apps. It’s not going to be easy to build a contact-tracing app that respects people’s privacy, DeBlois cautioned.

“It’s really about the nature of contact tracing … The whole point is to track people, to associate linkages,” she said. “That makes it difficult to build and engineer something that works in the way everyone needs it to work.”

Contact tracing apps fall roughly into two categories — those that share all users’ location with a central, government-controlled database, and those that work by merely allowing phones to talk to each other through Bluetooth. In that model, data is only shared with a government agency after a confirmed infection. Google and Apple have recently tweaked their smartphone operating systems to encourage development of this kind of app.
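The Bluetooth model preserves privacy because phones broadcast short-lived, unlinkable identifiers rather than locations. A much-simplified sketch of the idea — the real Google/Apple scheme derives keys with HKDF and encrypts with AES, not the HMAC shorthand used here:

```python
import hashlib
import hmac
import secrets

def rolling_id(daily_key: bytes, interval: int) -> bytes:
    """Derive a short, unlinkable identifier for one broadcast interval.

    Simplified illustration only: the production Google/Apple exposure
    notification scheme uses HKDF and AES with daily key rotation.
    """
    mac = hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]  # phones broadcast this 16-byte value over Bluetooth

# Each phone keeps its daily key private and broadcasts only derived IDs.
daily_key = secrets.token_bytes(16)
overheard = {rolling_id(daily_key, i) for i in (3, 7)}  # IDs a nearby phone recorded

# After a confirmed infection, the patient publishes daily_key; other phones
# recompute that day's IDs locally and check for overlap with what they heard.
recomputed = {rolling_id(daily_key, i) for i in range(144)}  # e.g. 10-minute intervals
exposed = bool(overheard & recomputed)
```

The matching happens entirely on each user’s phone; no central database ever learns who was near whom, which is the property DeBlois calls the “minimalistic approach.”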

“I’m cautiously optimistic about this minimalistic approach — that model has a lot of potential,” DeBlois said.

View the presentation

Still, she has other concerns.

“I’m a little bit nervous about the way the technology decisions were made,” she said. “A lot of the technology has been dictated by companies. They aren’t part of our democratically-elected government.”

The proliferation of such apps around the world should concern U.S. citizens, too, even those who don’t plan to download a U.S. tracker app, she said. The Qatar app is mandatory even for visitors, for example. That could have implications for business travelers for years to come.

“There absolutely will be implications that cross national boundaries,” she said. “For folks who do international travel, this should be on their radar.”

In the U.S. and western democracies, where use of tracker apps is expected to be voluntary, the apps will be useless unless a large percentage of citizens download them. That’s going to require a lot of trust – a trust that seems lacking in the U.S. right now.  DeBlois cited revelations made by Edward Snowden as one reason: Snowden confirmed some of Americans’ worst fears about government abuse of surveillance technology, she said.

How could U.S. health agencies overcome this lack of trust?

“It starts with transparency,” she said. “Making clear who has access to the information, and for how long. All those questions need to be answered. And those answers need to be verified.”

Consumers very worried about privacy, but disagree on who’s to blame

Privacy and Security in a Digital World: A Study of Consumers in the United States was conducted to understand the concerns consumers have about their privacy as more of their lives become dependent upon digital technologies. Based on the findings, the report also provides recommendations for how to protect privacy when using sites that track, share and sell personal data. In this study, sponsored by ID Experts, we surveyed 652 consumers in the US. For the majority of these consumers, privacy of their personal information does matter.

Consumers are very concerned about their privacy when using Facebook, Google and other online tools. Consumers were asked to rate their privacy concerns on a scale of 1 = not concerned to 10 = very concerned when using online tools, devices and online services. Figure 1 presents the very concerned responses (7+ responses).

The survey found that 86 percent of respondents say they are very concerned when using Facebook and Google, 69 percent of respondents are very concerned about protecting privacy when using devices and 66 percent of respondents say they are very concerned when shopping online or using online services.

When asked if they believe that Big Tech companies like Google, Twitter and Facebook will protect their privacy rights through self-regulation, 40 percent of consumers say industry self-regulation will suffice. However, 60 percent of consumers say government oversight is required (34 percent) or a combination of government oversight and industry self-regulation (26 percent) is required.

Following are the most salient findings:

The increased use of social media and greater awareness of the potential threats to their digital privacy have made consumers more concerned about their privacy. In fact, social media websites are the least trusted (61 percent of consumers), followed by shopping sites (52 percent of consumers).

  • Consumers are most concerned about losing their civil liberties and having their identity stolen if personal information is lost, stolen or wrongfully acquired by outside parties (56 percent and 54 percent of respondents, respectively). Only 25 percent of consumers say they are concerned about marketing abuses if their personal information is lost or stolen.
  • Seventy-four percent of consumers say they rarely (24 percent) or never (50 percent) have control over their personal data. Despite this belief, 54 percent of consumers say they do not limit the data they provide when using online services. Virtually all consumers believe their email addresses and browser settings & histories are collected when using their devices, according to 96 percent and 90 percent of consumers, respectively.
  • Home is where the trust is. Forty-six percent of consumers, when asked the one location they trust most when shopping online, banking and other financial activities online, say it is their home. Only 10 percent of consumers say it is when using public WiFi.
  • Consumers believe search engines, social media and shopping sites are sharing and selling their personal data, according to 92 percent, 78 percent and 63 percent of consumers. To increase trust in online sites, consumers want to be explicitly required to opt-in before the site shares or sells their personal information, according to 70 percent of consumers.
  • Consumers reject advertisers’ use of their personal information to market to them. Seventy-three percent of consumers say advertisers should allow them to “opt-out” of receiving ads on any specific topic at any time, and 68 percent of consumers say they should not be able to serve ads based on their conversations and messaging. Sixty-four percent of consumers say they do not want to be profiled unless they grant permission.
  • Online ads and the “creepy” factor. Sixty-six percent of consumers say they have received online ads that are relevant but not based on their online search behavior or publicly available information frequently (41 percent of consumers) or rarely (25 percent of consumers). Sixty-four percent of consumers say they think it is “creepy” when that happens.
  • Forty-five percent of consumers are not aware that their devices have privacy controls they can use to set their level of information sharing. Of the 55 percent of consumers who are aware, 60 percent say they review and update settings on their computers and 56 percent say they review and update settings on their smartphones.
  • Fifty-four percent of consumers say online service providers should be held most accountable for protecting consumers’ privacy rights when going online. Forty-five percent of consumers say they themselves should be most accountable.

Download the full report at the ID Experts website.

Is smartphone contact tracing doomed to be a privacy killer? Or can tech really help?

An app that tells you if you were exposed to someone with Covid? Sounds great. But, as usual, tech-as-silver-bullet ideas come full of booby-traps. There’s been a lot of scattershot discussion around smartphone contact tracing during the past several months, with privacy advocates saying the harms far outweigh the benefits, but many governments and technology companies are plowing ahead anyway.

But if tech *could* make us safer during this crisis, shouldn’t we try? Under what conditions might it actually be feasible, and fair? Prof. Jolynn Dellinger (Duke and UNC law professor, @MindingPrivacy) has put it all together in a thoughtful analysis, creating a 5-part test that could be considered before implementing contact tracing. Will it *really* work? Will it do more harm than good? Is there enough trust in institutions to ensure it won’t be abused later? Her structure would be useful for the launch of almost any new technology, and it deserves a careful reading on its own. It also deserves more discussion, so I reached out to Prof. Dellinger and Prof. David Hoffman at Duke’s Sanford School of Public Policy and invited them to a brief email dialog with me. I hope you’ll find it illuminating.

Disclosure: I was recently a visiting scholar at Duke, invited by Prof. Hoffman.



FROM: Bob
TO: David
CC: Jolynn

David: Jolynn’s piece is such an excellent state-of-play analysis. Not to put words in her mouth, but I read it as a polite and smart “this’ll never work.” We can’t even get Covid test results in less than a week, why are we even talking about some kind of sci-fi solution like smartphones that warn each other (or, gulp, tell on each other)? Every dollar and moment of attention spent on contact tracing apps should be redirected to finding more testing reagents, if you ask me. Still, this discussion is inevitable, because the apps – working or not – are coming. So I really welcome her criteria for use.

One thing I’ve thought a lot about, which she mentions in passing: Alert fatigue. I’d *definitely* want a text message if someone I spent time with got Covid, were that possible. But if I got five of these in one day I’d turn it off, especially if they proved to be false alarms. Or if I got none in the first 10 days, I’d probably turn it off, or it would age off my smartphone. Fine-tuning the alert criteria will be a hell of a job.

Meanwhile, my confidence level that data like this would *never* be used to hunt for immigrants, or deadbeat dads, or terrorists, or journalists, is about zero. It’s hard to imagine a technology more ripe for unintended consequences than an app that makes such liberal use of location information.

That being said, I sure wish something like this *could* work. Let’s imagine an alternative universe where the trust, law, and technology were already in place when Covid hit, so tech was ready and willing to ride in and save the day. How do we create that world, if not for now, then at least in time for the next pandemic/terrorist attack/asteroid strike/etc.? We might have to reach back to the days after 9/11, as Jolynn hints, and start a 20-year effort at lawmaking and trust building. The best way to start a journey of 1,000 miles is with a single step. How would we get started?


FROM: David
TO: Bob
CC: Jolynn

David Hoffman

Thanks, Bob. With any of these uses of technology, the first question that should be asked is “what problem are we trying to solve?” Are we using the technology to trace infections? Or are we allowing people to increase their chances that they will be notified if they have had exposure to the virus? Or are we using the technology to have individuals track whether they are having symptoms? Or to enforce a quarantine? Or to have people volunteer to donate plasma? Or just to provide people with up-to-date information about the virus? Depending on the problem we are attempting to solve, we will want to design very different technology implementations. For many of these problems we will likely need to merge other data with whatever data is collected through the technology. Based on what we have seen done in other countries, these other data feeds can include information from manual contact tracers, credit card data, CCTV camera feeds and clinical health care data. Once we define what problem we are trying to solve and what data is necessary to solve it, then we can conduct a privacy assessment to determine the level of the risks.

Many of the smartphone apps that have been created have been described as “contact tracing apps,” but it is not clear to me that they will actually help much with contact tracing. To properly do contact tracing through manual efforts, with technology, or using a combination of both, we will need to have enough data about whether people have contracted COVID-19 (this presumes broad and quick testing) and a mechanism to accurately measure whether people have been in close contact with each other for long enough to warrant a recommendation that they quarantine themselves, get tested, or both. Unfortunately, solutions that rely just on Bluetooth data from smartphones are likely to result in a large number of both false negatives and false positives. However, a system that integrates Bluetooth data with information learned from manual contact tracers has a higher likelihood of success. Manual contact tracing, though, suffers from a lack of centralized guidance, is under-resourced, and in most areas has not made clear what privacy protections will be put in place for the collected data. The US urgently needs a national strategy on contact tracing, with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place.
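To see why Bluetooth-only sensing produces both kinds of errors, consider how these apps typically infer “close contact”: from received signal strength (RSSI) and duration. Signal strength is a noisy proxy for distance, since walls, pockets, and device orientation all distort it. The sketch below is a deliberately simplified illustration with hypothetical threshold values, not the logic of any real app:

```python
from dataclasses import dataclass

# Hypothetical thresholds -- real apps calibrate per device model.
RSSI_CLOSE_DBM = -65      # stronger than this is treated as "within ~2 meters"
MIN_CONTACT_MINUTES = 15  # a common duration cutoff for "close contact"

@dataclass
class Sighting:
    """One Bluetooth observation of another phone."""
    other_device: str
    rssi_dbm: int     # received signal strength (higher = roughly closer)
    minutes: int      # how long this signal level persisted

def flag_exposures(sightings):
    """Sum up 'close' time per device and flag likely exposures.

    RSSI is a noisy distance proxy: a phone in a bag two meters away can
    read weaker than a phone across a wall five meters away, producing
    both false negatives and false positives."""
    close_minutes = {}
    for s in sightings:
        if s.rssi_dbm >= RSSI_CLOSE_DBM:
            close_minutes[s.other_device] = close_minutes.get(s.other_device, 0) + s.minutes
    return {dev for dev, mins in close_minutes.items() if mins >= MIN_CONTACT_MINUTES}

log = [
    Sighting("phone-A", -60, 20),  # strong and long: flagged
    Sighting("phone-B", -80, 45),  # long but weak (bag? wall?): missed
    Sighting("phone-C", -55, 5),   # strong but brief: not flagged
]
print(flag_exposures(log))  # {'phone-A'}
```

The false-negative case (phone-B) and the near-miss case (phone-C) are exactly the kinds of errors that manual contact tracers can catch and correct, which is why a blended approach has a higher likelihood of success.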


FROM: Jolynn
TO: Bob
CC: David

Jolynn Dellinger

Bob, Thank you so much for reading the post and for your thoughtful comments and questions. The Covid crisis highlights the numerous ways data and emerging tech could be used to benefit society. Benefitting society while preventing harm to individuals is not an unobtainable goal, but it will take concerted effort. We have long recognized as a society the sensitivity of health information and we are getting there (slowly but surely) on location data. Acting on what we know by taking proactive (as opposed to merely reactive) steps to protect the privacy of personal information – through design, policy and law – is the place to start. A reactive step at this moment is passing a limited law dealing with the privacy of information collected for Covid-19 purposes — and this is absolutely better than nothing. A proactive step would be passing comprehensive privacy legislation that circumscribes collection and use of data more generally and contributes to the creation of an environment in which people can trust companies and governments not to repurpose, exploit or misuse their personal data. (Arguably, because we have waited so long to take obvious necessary legislative action, even a comprehensive privacy law could be broadly characterized as “reactive” at this point, but that is a topic for another post).

Regarding the original post, my personal view is that voluntary digital contact tracing apps are not likely to be worth the existing privacy and security risks at this time, given our failure to implement the other necessary elements of a comprehensive, holistic response to the health crisis and the likelihood that they will not be used by sufficient numbers of citizens to make the notifications helpful or reliable. You mentioned in your introductory comments “feasibility” and the relevance of the dollars spent on contact tracing. I did not cover this topic adequately in my original post but certainly think it is a crucial consideration. Budgets are limited and strained, and every response we choose to invest in necessarily represents another option we do not pursue. So the question of whether to pursue digital contact tracing apps should not be considered in a vacuum but rather should be analyzed in terms of bang for the buck, so to speak. Is an investment in such apps the best, most effective use of our limited funds? And what potentially more useful responses are we foregoing? This question further highlights one of the downsides of the state-by-state approach the US is currently taking. How much more economically efficient might it be to have regional approaches or, sigh, leadership at the federal level? I strongly agree with David’s comment that the US needs a national strategy with clear recommendations on what data to collect, what technology to use, and what cybersecurity and privacy protections to put in place. I would add that these guidelines, like the 5-question analysis proposed in the blog post, should be applied to any and all personal data collected for the purposes of managing the Covid crisis.


FROM: Bob
TO: Jolynn
CC: David

So is there one thing that readers might urge their leaders to do, or urge technology companies to do, during the next couple of months that might bring us closer to these goals?
It seems like a federal privacy law is probably off the table between now and Election Day, so that won’t come in time to help with Covid.
Is there something else that might? Could a state pass a law? Could a tech firm adopt a model privacy policy around contact tracing apps? What kind of steps might any of these interested parties take that would at least move us a bit in the right direction? Sadly, I’m quite sure we’ll be dealing with Covid long after November.


FROM: Jolynn
TO: Bob
CC: David

Jolynn Dellinger

State legislatures could pass laws or, in the alternative, Governors might issue executive orders to accomplish immediate goals. States can work to ensure that all local and state level health departments are on the same page and are employing similar privacy and security protections for data collected by manual contact tracers and any digital contact tracing apps or other technologies designed to manage Covid issues.

Tech firms and app developers should certainly have privacy policies in place, but those entities should also make explicit, affirmative guarantees that any data collected for purposes of responding to the Covid crisis (health, location or other personal data) will not be used for any other purpose or monetized, and will not be sold to or shared with any third parties, including law enforcement of any kind. Google and Apple could also bar apps from inclusion in the Google Play Store or App Store if they do not make such explicit commitments.

Want to participate in this dialog? Leave your comments below. We’ll keep the conversation going.

Digital transformation & cyber risk: what you need to know to stay safe

Larry Ponemon

CyberGRX and Ponemon Institute surveyed 581 IT security and 302 C-suite executives to determine what impact digital transformation is having on cybersecurity and how prepared organizations are to deal with that impact. All 883 respondents are involved in managing digital transformation and cybersecurity activities within their organizations. The results show that while digital transformation is widely accepted as critical, the rapid adoption of it is creating significant vulnerabilities for most organizations—and these are only exacerbated by misalignment between IT security professionals and the C-suite.

The full report can be downloaded from the CyberGRX website.

Our research think tank is dedicated to advancing privacy and data protection practices—and these report findings underscore a growing need for such mitigation tools, at a time when we see rapid digital transformation across industries. We chose to study both IT security professionals and C-suite executives to tap into the intersection of two groups making the biggest impact on organizations as they adopt new digital practices.

Here are the key themes that will be reviewed in this report.

Digital transformation is increasing cyber risk.

  • IT security has very little involvement in directing efforts to ensure a secure digital transformation process. Only 37 percent of respondents say the CIO is most involved and only 24 percent of respondents say the CISO is most involved. Both roles trail behind general managers, lines of business managers and data scientists.
  • Eighty-two percent of respondents believe their organizations experienced at least one data breach as a result of digital transformation. Forty-two percent of respondents believe they experienced between two and five cyber events, and 55 percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

  •  Fifty-eight percent of respondents say the primary change to their organizations is increased migration to the cloud, which relies upon third parties. This is followed by the increased use of IoT and increased outsourcing to third parties. Despite the increasing risk, 58 percent of respondents say their organizations do not have a third-party cyber risk management program.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk.

  • Seventy-one percent of IT security respondents say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit, compared to 53 percent of the C-level respondents. Sixty-three percent of C-level respondents vs. only 41 percent of IT security respondents do not want security measures to impede the free flow of information and an open business model.

Unless things change, the future doesn’t look any more secure

  • Currently only 29 percent of respondents say their organizations are very prepared to address top threats related to digital transformation in two years. Only 43 percent of respondents are very optimistic their organizations will be prepared to reduce the risk of these threats.
  • Organizational size and industry differences have an impact on the consequences of digital transformation. Most industries do not have a security budget for protecting data assets during the digital transformation process.

“If there’s one major takeaway from our research, it’s that digital transformation is not going anywhere. In fact, organizations should expect—and plan for—digital transformation to become more of an imperative over time,” says Dave Stapleton, Chief Information Security Officer, CyberGRX. “For this reason, organizations must consider the security implications of digital transformation and shift their strategy to build in resources that mitigate risk of cyberattacks. Based on these findings, we recommend involving organizations’ IT security teams in the digital transformation process, identifying the essential components for a successful process, educating colleagues on cyber risk and prevention, and creating a strategy that protects what matters most.”

Key findings overview:

The rush towards digital transformation has increased cyber risks.

IT security respondents who are in the trenches are far more cognizant than C-level respondents of the risk if not enough time and resources are allocated to the digital transformation process. Most respondents say their corporate leaders are not aware of how the inability to secure digital assets could significantly hurt their organization’s brand and reputation. Less than half of C-level respondents (49 percent) say senior management recognizes the potential harm to brand and reputation.

Conflicting priorities between IT security teams and the C-suite create vulnerabilities and risk. Only 16 percent of respondents say IT security and lines of business are fully aligned with respect to achieving security during the digital transformation process. As a result, there are gaps in perceptions about risk to the digital transformation process. Specifically, far more IT security respondents (64 percent) than C-level respondents (41 percent) say that the digital economy significantly increases risk to high value assets such as IP and trade secrets. Sixty-three percent of C-level respondents vs. only 41 percent of IT security respondents do not want security measures to impede the free flow of information and an open business model.

Organizations are not protecting what matters most. Analytics and private communications are the digital assets most difficult to secure, according to 51 percent and 44 percent of respondents, respectively. However, only 35 percent of respondents say analytics is appropriately secured and only 38 percent of respondents say private communications are secured. Surprisingly, only 25 percent of respondents say consumer data, which is considered highly sensitive and confidential, is appropriately secured. Yet the difficulty of securing this data is considered very low: only 10 percent of respondents say such data is difficult to secure.

A secure digital transformation process is affected by a lack of expertise and a lack of visibility. Fifty-three percent of respondents say a lack of expertise is the most significant barrier to achieving a secure digital transformation process, followed by insufficient visibility of people and business processes (51 percent of respondents).

Organizations have experienced multiple data breaches as a result of digital transformation. Eighty-two percent of respondents believe their organizations experienced at least one data breach during the digital transformation process. Forty-two percent of respondents say their organizations could have experienced between two and five data breaches, and 22 percent say their organizations could have experienced between six and ten data breaches. Fifty-five percent of respondents say with certainty that at least one of these breaches was caused by a third party.

Digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT.

Current tools or solutions to manage third-party risk are still not considered effective. Slightly more than half (51 percent) of organizations represented in this research have a strategy for achieving digital transformation and of these 73 percent of respondents say their strategy involves assessing third-party relationships and vulnerabilities. Forty-two percent of respondents say their organizations have a third-party risk management program and assessments are the most commonly used solution. However, when asked if they are effective, 53 percent say the tools and solutions used are only somewhat effective (28 percent) or not effective (25 percent).

A secure cloud environment is a significant challenge to achieving a secure digital transformation process. Sixty-three percent of respondents say their organizations have difficulty in ensuring there is a secure cloud environment and 54 percent of IT security say the ability to avoid security exploits is a challenge. Fifty-six percent of C-level executives say their organizations find it a challenge to ensure third parties have policies and practices that ensure the security of their information.

Challenges for securing the future of digital transformation

Budgets are and will continue to be inadequate to secure the digital transformation process. Only 35 percent of respondents say their organizations have a budget dedicated to securing the digital transformation process, and even those budgets fall short. Because of the risks created by digital transformation, respondents believe the percentage of the IT security budget allocated to digital transformation should nearly double, from an average of 21 percent today to 37 percent. In two years, respondents expect the average to reach only 37 percent, while they say ideally it should be 45 percent.

More progress needs to be made in the ability to mitigate cyber threats. The top three threats respondents are most concerned about are system downtime, cybersecurity attacks and data breaches caused by third parties. Currently, only 29 percent say they are very prepared to address these threats. In two years, only 43 percent are very optimistic their organizations will be very prepared to reduce the risk of these threats.

A secure digital transformation process is dependent upon the expertise of the IT security team, yet that team is not very influential. Today, only 35 percent of respondents say IT security is very influential, and in the next two years its influence increases only slightly: 43 percent of respondents say IT security will be very influential.

Digital transformation impacts industries differently.

Across industries digital transformation has significantly increased reliance on third parties, specifically cloud providers, IoT and shadow IT. Respondents in healthcare, industrial and retail say the most significant change caused by digital transformation is the increased migration to the cloud. The public sector and healthcare industries are less likely to say the increased use of IoT has changed their organizations. Retail and financial services respondents are most likely to say increased outsourcing to third parties as a result of digital transformation has had an impact.

Industrial manufacturing is most likely to have a strategy for achieving digital transformation. Healthcare is least likely to have a strategy. As part of their strategy, retailers are most likely to include assessing third-party relationships and vulnerabilities, including supply chain partners.

Perceptions of digital transformation risk vary among industries. Leaders in services and financial services are most likely to recognize that digital transformation creates IT security risk. Respondents in the industrial manufacturing sector are least likely to say their leaders recognize the risk.

Retail, public sector and services are the industries most concerned about the rush to achieve digital transformation. Sixty-eight percent of respondents in retail, and 65 percent of respondents in both services and the public sector, say the rush to achieve digital transformation increases the risk of a data breach and/or a cybersecurity exploit.

A successful digital transformation process requires IT security to secure digital assets without stifling innovation. Because digital transformation is considered essential, most industries say that IT security should support innovation with minimal impact on the goals of digital transformation. Eighty-three percent of respondents in financial services say such a balance is essential.

Most industries do not have a security budget for protecting data assets during the digital transformation process. Despite the need to have the necessary expertise and technologies to ensure a secure digital transformation process, industries are not allocating funds specifically to digital transformation. Healthcare organizations are most likely to have funds for protecting data assets during the digital transformation process.

Organizational size affects the digital transformation process

Following are the most salient differences according to organizational size. Our analysis looked at organizations with a headcount of less than 5,000 and greater than 10,000.

The increased migration to the cloud and the use of IoT are having the greatest impact of digital transformation on smaller organizations. Larger organizations are seeing the greatest impact due to increased outsourcing to third parties.

Larger organizations are more likely to have a strategy for digital transformation. Larger organizations (54 percent of respondents) are more likely than smaller organizations (43 percent of respondents) to have a strategy for achieving digital transformation. As part of that strategy, 80 percent of respondents in larger organizations vs. 69 percent of respondents in smaller organizations are assessing third-party relationships and vulnerabilities, including supply chain partners.

Larger organizations are far more likely to recognize the risk of digital transformation. Seventy-nine percent of respondents in larger organizations vs. 61 percent of respondents in smaller organizations believe the rush to achieve digital transformation increases the risk of a breach and/or cybersecurity exploit. Larger organizations are less likely to say that it is important to balance security with the need to enable the free flow of information. Seventy-two percent of respondents in larger organizations say digital transformation increases risk to high value assets such as intellectual property, trade secrets and so forth.

Smaller organizations are more likely to be vulnerable to a cyberattack or data breach following digital transformation. Seventy-one percent of respondents in smaller organizations and 64 percent of respondents in larger organizations believe the risk of digital transformation makes it more likely to have a data breach or cyberattack. Larger organizations are more likely to say the rush to produce and release apps, the increased use of shadow IT and increased migration to the cloud have made their organizations more vulnerable following digital transformation.

Characteristics of organizations with mature digital transformation programs

In this study, we analyzed the responses from those organizations that self-reported they have achieved a mature digital transformation process. Twenty-three percent, or 131 respondents, self-reported that their organizations’ core digital transformation activities are deployed, maintained and refined across the enterprise. We compare the findings from this group to those of the remaining 450 respondents (77 percent).

Mature organizations are more likely to have strategies to protect data assets and assess third-party relationships. Fifty-six percent of the most mature organizations have a strategy for achieving digital transformation. In contrast, 47 percent of the other respondents say they have such a strategy. Those in mature organizations say their strategies are more likely to protect data assets and assess third-party relationships and vulnerabilities, including supply chain partners.

Mature organizations are more likely to understand and anticipate the risks associated with digital transformation. Respondents in mature organizations are far more likely to make reducing third-party risk a priority than the other organizations (78 percent vs. 51 percent). Mature organizations are also more likely to recognize the digital economy increases the risk to high value assets such as intellectual property, trade secrets and so forth (78 percent vs. 60 percent). Mature organizations are also more likely to believe in the importance of balancing the security of high value assets with enabling the free flow of information and an open business model.

Digital transformation is considered essential to the company’s business. More mature organizations are likely to believe in the importance of IT security to supporting innovation with minimal impact on the goals of digital transformation (90 percent vs. 81 percent) and that digital transformation is essential to the company’s business (84 percent vs. 79 percent).

All organizations struggle with having an adequate budget for protecting data assets during the digital transformation process. Forty-three percent of respondents of mature organizations vs. 34 percent of other organizations say their budgets are adequate for protecting data assets during the digital transformation process.

For more detailed findings, please download the full report from the CyberGRX website at https://get.cybergrx.com/ponemon-report-digital-transformation-2020/

How to detect fake anything in a zero trust world

Bob Sullivan

Fake News is stoking violence and helping destroy our democracy. Fake pills make people sick and can even kill them. Fake foods, like fake olive oil, or mislabeled fish, rip consumers off and steal profits from honest companies. The world is becoming overrun by fake everything, says Avivah Litan, renowned fraud analyst at consultancy firm Gartner. Counterfeit products are a $3 trillion problem, she says…But today’s topic is even bigger than fraud. It’s about a threat to reality itself.

In a new paper, called How to Detect Fake Anything in a Zero Trust World, Litan argues that a mix of technology and human intelligence can beat back this problem of fake everything. But only if someone — consumers? government regulators? corporations? — is willing to pay the price. I spoke with her recently: You can listen to our conversation at the link below.

A few highlights from our talk:

Imagine being able to scan a barcode on a piece of salmon at the supermarket and see the fish’s journey from the river where it was caught, to the port where it was dropped off, to the plane that took it to your city, to the truck that took it to your store. That’s the promise of blockchain, which could help consumers decide they prefer fish caught from a specific place. They could also demand it be caught in a certain way, and report fraud or mislabeled products. Litan gets most fired up talking about fake olive oil. She thinks blockchain public audit trails could help stop that, too.
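The core mechanism behind such an audit trail can be sketched as a hash chain: each custody event records the hash of the previous event, so retroactively relabeling the fish breaks every later link. This is a minimal illustration under my own assumptions, not any real blockchain API; the event names and record fields are invented:

```python
import hashlib
import json

def add_event(chain, event):
    """Append a custody event, linking it to the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "prev_hash": prev_hash}
    # Hash the event plus the link to its predecessor.
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)

def verify(chain):
    """Recompute every hash; any tampered record invalidates the chain."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"event": record["event"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_event(chain, "caught: Copper River, Alaska")
add_event(chain, "landed: Port of Cordova")
add_event(chain, "shipped: flight to Seattle")
print(verify(chain))                    # True
chain[0]["event"] = "caught: farm pen"  # retroactive relabeling
print(verify(chain))                    # False
```

A real deployment adds the hard parts this sketch omits: distributing the ledger so no single party can rewrite it, and tying the physical fish to its digital record at each handoff.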

Using these tools to cast a wider net — pun intended — Litan thinks tech could help consumers/citizens regain the grasp of reality they are losing. Fake news and fake cures have been a problem for years, but the Covid-19 pandemic has brought the issue into sharp relief.

“The hope is there is no shortage of innovation in this space,” she tells me. “The problem is (companies) won’t do it unless (consumers) pay a premium.”

Litan is perhaps the media’s most-quoted expert on credit card fraud and identity theft, dating back to the early years of credit card database hacking and the rise of fraud-fighting software.  She sees some parallels between the race for banks and retailers to stop credit card hacking — which costs the companies billions — and their relative indifference to identity theft — in which consumers bear a lot of the cost.

But the rise of fake everything, and the collapse of trust worldwide, is a far bigger problem. I’ve started calling it the trust market crash.  It’s an enormous challenge in a world of commerce that’s built on trust.

Innovation will be critical, she said, because government regulators — well-intentioned as they may be — won’t be able to keep up.

“The world is moving way too fast for our political systems,” she said.  Current solutions fall far short. What is an Italian olive oil consumer to do, short of growing their own olives, Litan joked.  “Hopefully the technology will evolve where you have the solutions at your fingertips.”

Her paper offers the “Gartner Model for Truth Assessment,” with different blended technology and human solutions for the problem of fake everything.  But much more needs to be done.

“The best hope is a consumer revolution,” she said. “We’ve had enough of this fake news, (people) shoving all this fake stuff down our throats.”

Avivah has posted a short blog entry about her paper. The paper itself is behind Gartner’s paywall.

The 2020 Study on the State of Industrial Security

Larry Ponemon

Ponemon Institute is pleased to present the findings from The 2020 State of Industrial Security Study, sponsored by TÜV Rheinland. The purpose of the research is to understand cyber risks across a broad spectrum of industries and the steps organizations are taking to reduce cyber risk in the operational technology (OT) environment.

Ponemon Institute surveyed 2,258 cybersecurity practitioners in the following industries: automotive, oil and gas, energy and utilities, health and life science, industrial manufacturing, and logistics and transportation. All respondents are responsible for securing or overseeing cyber risks in the OT environment and are aware of how cybersecurity threats could affect their organization.

In the context of this research, operational technology (OT) is the hardware and software dedicated to detecting or causing changes in physical processes through direct monitoring and/or control of physical devices. Simply put, OT is the use of computers to monitor or alter the physical state of a system, such as the control system for a power station. The term has become established to highlight the technological and functional differences between traditional IT systems and the industrial control systems environment.

The OT environment is vulnerable to cyberattacks: 57 percent of respondents say their organizations’ security operations and/or business continuity management teams believe there will be one or more serious attacks within the OT environment. Almost half say it is difficult to mitigate cyber risks across the OT supply chain (49 percent) and that cyber threats present a greater risk in the OT environment than in the IT environment (48 percent).

The following findings reveal the cybersecurity vulnerabilities in the OT environment.  

  • OT and IT security risk management efforts are not aligned. Sixty-three percent of respondents say OT and IT security risk management efforts are not coordinated, making it difficult to achieve a strong security posture in the OT environment. The management of OT security is painful because of the lack of enabling technologies in OT networks, complexity, and insufficient resources. 
  • On average, organizations had four security compromises that resulted in the loss of confidential information or disruption to OT operations. Forty-seven percent of respondents say OT technology-related cybersecurity threats have increased in the past year. The top three cybersecurity threats are phishing and social engineering, ransomware and DNS-based denial of service attacks. One-third of respondents say such exploits have resulted in the loss of OT-related intellectual property. 
  • The majority of organizations have not achieved a high degree of cybersecurity effectiveness. Less than half of respondents say they are very effective in responding to and containing a security exploit or breach (48 percent), continually monitoring the infrastructure to prioritize threats and attacks (47 percent) and pinpointing sources of attacks and mobilizing the right set of technologies and resources to remediate the attack (47 percent of respondents). 
  • To minimize OT-related risks organizations need to replace outdated and aging connected control systems in facilities, according to 61 percent of respondents. More than half (52 percent of respondents) say vulnerable software is creating risks in the OT environment. 
  • Not enough expertise and budget are often cited as reasons for not having a strong security posture in the OT environment. Organizations represented in this research are spending annually an average of $64 million on cybersecurity operations and defense (OT and IT combined). An average of 26 percent of this budget or approximately $17 million is allocated to the security of OT assets and infrastructure and an average of 17 percent or approximately $10 million is allocated specifically to OT cybersecurity. Respondents say their OT budgets are inadequate to properly execute their cybersecurity strategy. 
  • Accountability for executing a successful cybersecurity strategy is fragmented. Respondents were asked who is most accountable for executing a successful cybersecurity strategy. Only 20 percent of respondents say it is the OT security leader, followed by the CIO/CTO (18 percent) and the IT security leader (17 percent). 
  • Organizations are lagging behind in adopting advanced security technologies. Only 38 percent of respondents say their organizations are using automation, machine learning and artificial intelligence to monitor OT assets. The majority of companies are not integrating security and privacy by design in the engineering of OT control systems.

To read the full report, visit TUV Rheinland’s website.

If we’re going to talk about Section 230, let’s get it right

Now we’ve started something

Bob Sullivan

With President Donald Trump threatening retribution against Twitter with an executive order, you’re going to hear a lot about Section 230 this week — and maybe for many weeks. The ensuing discussion could shake the Internet to its very roots.  That’s going to make legal scholars very happy, but it might seem like a dizzying discussion for most.  That’s by design. Interested parties are conflating all kinds of big ideas to muddy the waters here: the First Amendment, innovation, bias, abuse, millions of followers, billions of dollars.  I’m going to try to sort it out for you here.  Who am I to do that? Well, I’m old enough to remember when the Communications Decency Act and its Section 230 was passed into law.

But if you are going on this journey with me, here are the rules: Nothing is as simple nor as absolute as it sounds.  Free speech isn’t limitless.  “Speech” isn’t even what you think it is. Immunity isn’t limitless. The First Amendment doesn’t generally apply to private companies…most of the time.  But in a rare confluence of events, there are reasons for both conservatives and liberals to take a good long look at updating and fixing Section 230, which has been the source of much profit for corporations and much pain for Internet users since it became law in 1996.

(And if you really want to understand Section 230, I recommend reading this very readable 25-page academic paper titled The Internet As a Speech Machine and Other Myths Confounding Section 230 Speech Reform. Authors Danielle Keats Citron and Mary Ann Franks do a great job explaining the history of the law and the myths that hold America back from reasonable reform. Or, even better, consider The Twenty-Six Words That Created the Internet, a book by Jeff Kosseff, all about Section 230.)

Section 230 was written at the time of Prodigy and CompuServe, when online services were mainly text-based chat tools, and virtually no consumers used websites.  These services had a problem: Were they liable for everything users said? Could they be sued for defamation, or charged criminally, if users misbehaved? To use the kind of shorthand that journalists love but lawyers hate, should they be treated like publishers of the content — akin to a newspaper editor or book publisher — or mere distributors, akin to a newsstand owner?  Courts were split on the matter, and that terrified tech firms. Imagine the liability a company like Google, or Facebook, or America Online would face if it could be charged with every crime committed on its service.

The defensive shorthand I was taught at my startup, inaccurate as it might be, was this: When a tech company actively moderates user content, it becomes a publisher and increases liability. When a tech company just shoves the stuff automatically out into the world, it’s merely a newsstand, a distributor.  So: Don’t touch!

That free-for-all worked about as well as you might imagine (Porn! Stolen goods! Harassment!) so lawmakers tried to help by passing Section 230.  It sounds straightforward: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The idea was to shield online service providers who tried to do the right thing (stop harassment and other crimes) from liability.  The law was actually meant to encourage content moderation. It gave service providers a shield against responsibility for third-party content.  But what a winding road it’s been since then.

First, the good: Plenty of folks see Section 230 as the First Amendment of the Internet. Scholar Eric Goldman actually argues that it’s better than the First Amendment. It’s inarguable that online services have thrived since then, and plenty of them credit Section 230.

However, this simple you’re-not-responsible-for-third-party-content rule has been extended by courts and corporations far beyond its original intention. Recall, it was written right about the time Amazon was invented.  The Internet was nearly 100% text-based speech, digital conversation, at the time.  Today, it’s Zoom and car buying and television and it elects a U.S. president.

So that leads us to the moment at hand. The Internet is awash in disinformation, harassment, crime, racism…the dark side of humanity thrives there.  Plenty of people have been driven from its various platforms through doxxing, gender abuse, or simple exhaustion from nasty arguments. I argue all the time that it has made us dumber as a people, offering the Flat Earth Movement as proof. In short, the Internet sucks (More than a decade later, this is still a great read). As Citron and Franks say:

“Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sexual assault victims are discouraged from holding perpetrators accountable…. An overly capacious view of Section 230 has undermined equal opportunity in employment, politics, journalism, education, cultural influence, and free speech. The benefits Section 230’s immunity has enabled surely could have been secured at a lesser price.”

For better and worse, this is a good time to reconsider what Section 230 hath wrought.

For a quick moment, here are some obstacles to the discussion, forged by confusion. You, and I, and President Trump, can’t have our First Amendment right to free speech suppressed by Twitter or Facebook or Instagram. Generally, the First Amendment applies to governments, not private enterprises.  Facebook, as any true conservative or libertarian would tell you, is free to do what it wants with its company, and the president is free to not use it.  In fact, the government compelling a social media company to say certain things or not say other things — to argue it could not add a link for fact-checking — is a rather obscene violation of the company’s First Amendment rights.

Even on this fairly clear point, there is some room for discussion, however.  In Canada, courts have ruled that social media is so ubiquitous that it can be akin to a public square, according to Sinziana Gutui, a Vancouver privacy lawyer.  So might the U.S. someday feel that cutting off someone’s Twitter account is akin to cutting off their telephone line or electricity? Perhaps.  It sure seems less strained to suggest President Trump simply find another platform to use for his 280-character messages.

And even on this issue of speech, there is confusion. U.S. courts have broadly expanded the definition of speech far beyond talking, publishing pamphlets, or writing posts on an electronic bulletin board.  Commercial activity can be considered speech now.  And that expanded definition has helped websites argue for Section 230 immunity when their members are committing illegal acts — such as facilitating the sale of counterfeit goods, or guns to criminals known to be evading background checks.

Immunity often encourages bad behavior, a classic “moral hazard,” as Franks has written. Set aside fake autographs and illegally purchased domestic violence murder weapons for the moment — the Internet is drowning in antagonism, bots, and harassment that has made it inhospitable for women and men of good faith. It rewards extremism.  It is unhealthy for people and society. It’s not going to fix itself. Citron and Franks again:

“Market forces alone are unlikely to encourage responsible content moderation. Platforms make their money through online advertising generated when users like, click, and share. Allowing attention-grabbing abuse to remain online often accords with platforms’ rational self-interest. Platforms “produce nothing and sell nothing except advertisements and information about users, and conflict among those users may be good for business.” On Twitter, ads can be directed at users interested in the words “white supremacist” and “anti-gay.” If a company’s analytics suggest that people pay more attention to content that makes them sad or angry, then the company will highlight such content. Research shows that people are more attracted to negative and novel information. Thus, keeping up destructive content may make the most sense for a company’s bottom line.”

Facebook profits massively off all this social destruction. We learned this week that employees inside Facebook have come up with some very clever technological solutions to this problem, only to be kneecapped by Mark Zuckerberg, clearly drunk on conveniently-profitable take-no-responsibility libertarian ideals.

What’s the solution? For sure, that’s much harder.  Citron and Franks suggest adding a simple “reasonable” requirement on companies like Facebook, meaning they have to take reasonable steps to police users in order to maintain Section 230 immunity. Reasonable is a difficult standard, possibly leading to endless ’round-the-rosie’ debate, but it is a common standard in U.S. law. Facebook’s engineers came up with notions worth trying, detailed in this Wall Street Journal story, such as shifting extreme discussions into sub-groups.  The firm could also stop giving extra algorithm juice to obsessives who post 1,000 times a day.

As always, a mix of innovation and smart rules that balance interests is needed.

It won’t be easy, but we have to try. So, it’s good that President Trump has shined a light on Section 230. The discussion is long overdue, as is the will to act. Will the discussion be productive? Probably not if it happens on Twitter. Definitely not if it’s focused on an imaginary social media bias against Trump or Trump’s 80 million followers, who clearly have no trouble finding each other. Instead, let’s focus on making the world safe again for reasonable people.