Two-thirds of security workers consider quitting because of burnout

Larry Ponemon

Security Operations Centers (SOCs) are an increasingly important part of organizations’ efforts to keep ahead of the latest cybersecurity threats. However, for a variety of reasons revealed in this research, organizations are frustrated with their SOC’s lack of effectiveness in detecting attacks.

A SOC is defined as a team of expert individuals and the facility in which they work to prevent, detect, analyze and respond to cybersecurity incidents. Critical to the SOC’s success is support from the organization’s senior leaders, investment in technologies, and the ability to hire and retain a highly skilled and motivated team. The purpose of this research is to understand the barriers and challenges to having an effective SOC and what steps can be taken to improve its performance.

Sponsored by Devo Technology, Ponemon Institute surveyed 554 IT and IT security practitioners in organizations that have a SOC and are knowledgeable about cybersecurity practices in their organizations. Their primary tasks are implementing technologies, patching vulnerabilities, investigating threats and assessing risks.

While respondents consider the SOC essential or important, most rate their SOC’s effectiveness as low and almost half say it is not fully aligned with business needs. Problems such as a lack of visibility into the network and IT infrastructure, a lack of confidence in the ability to find threats and workplace stress on the SOC team are diminishing its effectiveness.

“The survey findings clearly highlight that a lack of visibility and having to perform repetitive tasks are major contributors to analyst burnout and overall SOC ineffectiveness,” said Julian Waits, General Manager of Cyber, Devo. “It is critical that businesses make the SOC a priority and evolve its effectiveness by empowering analysts to focus on high-impact threats and improving the speed and accuracy of triage, investigation, and response.”

The following findings reveal why organizations have SOC frustration:

  • The visibility problem: The top barrier to SOC success, according to 65 percent of respondents, is the lack of visibility into the IT security infrastructure and the top reason for SOC ineffectiveness, according to 69 percent, is lack of visibility into network traffic.
  • The threat hunting problem: Threat hunting teams have a difficult time identifying threats because they have too many IOCs to track, too much internal traffic to compare against IOCs, lack of internal resources and expertise and too many false positives. More than half of respondents (53 percent) rate their SOC’s ability to gather evidence, investigate and find the source of threats as ineffective. The primary reasons are limited visibility into the network traffic, lack of timely remediation, complexity and too many false positives.
  • The interoperability problem: SOCs do not have high interoperability with the organization’s security intelligence tools. Other challenges are the inability to have incident response services that can be deployed quickly and include attack mitigation and forensic investigation services.
  • The alignment problem: SOCs are not aligned or only partially aligned with business needs, which makes it difficult to gain senior leadership’s support and commitment to providing adequate funding for investments in technologies and staffing. Further, the SOC budget is inadequate to support the necessary staffing, resources, and investment in technologies. On average, less than one-third of the IT security budget is used to fund the SOC and only four percent of respondents say more than 50 percent of the cybersecurity budget will be allocated to the SOC.
  • The problem of SOC analyst pain: IT security personnel say working in the SOC is painful because of an increasing workload and being on call 24/7/365. The lack of visibility into the network and IT infrastructure and current threat hunting processes also contribute to the stress of working in the SOC. As a result, 65 percent say these pain factors have caused them to consider changing careers or leaving their jobs, and many respondents say their organizations are losing experienced security analysts to other careers or companies.
  • As a result of these problems, the mean time to resolution (MTTR) can be months. Only 22 percent of respondents say resolution can occur within hours or days. Forty-two percent of respondents say the average time to resolve is months or years.

Read the rest of this report at Devo Technology.

Has tech killed attention? Why listening with your whole body helps, with Annie Murphy Paul

Bob Sullivan

One of my favorite subjects is the problem of shortened attention spans and the fallacy of multitasking in the digital age.  Tech competes for our eyes and ears perhaps thousands of times each day.  The average worker only gets a few moments to focus on something without being interrupted.  Even lovers look at smartphones during intimate conversations.

This is not a world I want to live in, and I bet you don’t, either.  With rare exceptions, multitasking isn’t multitasking at all — rather, it’s rapid task switching. Plenty of studies show (including my own research conducted with Carnegie Mellon University) that people who are doing two things at once simply underperform at both tasks.

Into this complex subject steps Annie Murphy Paul, one of the great science writers of our time. We were lucky to have Annie on our latest episode of “So, Bob…” She’s done extensive research into the science of being smart, and if you listen to her, I believe you will actually feel smarter. You will definitely feel that she is both a great speaker and a great listener.  In case you can’t listen at this moment, I’ve included a couple of highlights below — but when you can, please listen to the podcast. As long as I’m not interrupting something.

On listening with your whole self

“One thing that we get away from in the use of technology is the body,” Annie told me. “We become this disembodied head that you know, is just looking at a screen. And so I find that when I talk to someone that I’m close to or, even when I interview someone I try to be in my own body and aware of the feelings and the sensations that are coming up in me as I talk to that other person and I try to assume a state of being both calm and alert and being open to whatever I’m feeling from the other person. And that’s the basis of, of empathy, when you are using your own body as an instrument to understand the other person.”

On the myth of multitasking

“Looking at several streams of information or entertainment while students are studying seems almost universal. My own children’s elementary school classes do it and I know that the students, the college students that I’ve taught do it and they all think they can do it well, and that’s the rub because we don’t have a very good sense of our own proficiency at paying attention and we may not be aware, but it is the case that when we’re trying to pay attention to many things at once, we work more slowly, we, we make more errors and we don’t perform at the same level that we would if we were paying attention to just one thing. So I think in terms of what teachers and parents and others who are concerned about kids should be thinking about, it’s instilling in them the habit of monotasking, of just doing one thing at a time.”

On taking ‘tech breaks’ – giving kids set times to check their phones, then put them away

The idea is to have an expanding length of time between tech breaks. So it might be 15 minutes at the start and then half an hour and then 45 minutes. And, the idea behind it is first of all, to break the habit of checking every 30 seconds or every minute and sort of lengthen that amount of time that kids are able to go without checking or even thinking of checking.

On why books are better

The fact that paper books have no notifications and no dings and beeps or anything actually makes them a superior form of equipment. And I think that that was something humans got right a long time ago.

The State of Web Application Firewalls

Larry Ponemon

Web application firewalls (WAF) are essential to securing web-based applications and, as shown in this research sponsored by Cequence Security, are a necessary or critical piece of an organization’s security arsenal and infrastructure. Unlike traditional firewalls, WAFs analyze traffic and make decisions based on a set of predefined business rules. Traditional firewalls base their decision to allow or block traffic on simple parameters such as IP address or port number. WAFs mostly base their decision on an in-depth analysis of the HTTP data.
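
To make that distinction concrete, here is a minimal sketch, in Python, of the two decision models: a traditional firewall rule keyed on IP address and port, versus a WAF-style rule that inspects the HTTP payload itself. The addresses, port list and injection pattern are invented for illustration; they are not anyone’s production rules.

```python
import re
from dataclasses import dataclass

@dataclass
class HttpRequest:
    src_ip: str
    dst_port: int
    path: str
    body: str

# Traditional firewall: decides on network parameters alone.
BLOCKED_IPS = {"203.0.113.7"}      # illustrative address (RFC 5737 test range)
ALLOWED_PORTS = {80, 443}

def network_firewall_allows(req: HttpRequest) -> bool:
    return req.src_ip not in BLOCKED_IPS and req.dst_port in ALLOWED_PORTS

# WAF: inspects the HTTP payload against predefined rules,
# here a single crude SQL-injection signature.
SQLI_PATTERN = re.compile(r"('|--|\bUNION\b|\bOR\s+1=1\b)", re.IGNORECASE)

def waf_allows(req: HttpRequest) -> bool:
    return not (SQLI_PATTERN.search(req.path) or SQLI_PATTERN.search(req.body))

req = HttpRequest("198.51.100.2", 443, "/login", "user=admin' OR 1=1--")
print(network_firewall_allows(req))  # True: nothing suspicious at the network layer
print(waf_allows(req))               # False: the payload matches an injection rule
```

The same request sails through the network-layer check and is caught only by payload inspection, which is exactly the gap WAFs exist to fill.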

Ponemon Institute surveyed 595 IT and IT security practitioners who are responsible for the deployment of a WAF in their organizations. Fifty-three percent of respondents are either responsible for application security (30 percent) or are application owners (23 percent).

The research clearly reveals WAF dissatisfaction in three areas. First, organizations are frustrated that so many attacks are bypassing their WAFs and compromising business-critical applications. In addition, they’re experiencing the pain of continuous, time-consuming WAF configuration and administration tasks. Lastly, they’re dealing with significant annual costs associated with WAF ownership and staffing.

Attacks on the application layer are bypassing organizations’ WAFs. Sixty-five percent of respondents say attacks on the application tier are bypassing the WAF frequently or sometimes.

As a result, most organizations represented in this survey do not think their WAFs are effective in securing their web-based applications and are not satisfied with them.

When asked to rate satisfaction with their organization’s WAF on a scale of 1 = not satisfied to 10 = very satisfied, only 40 percent are very satisfied (7+ responses), which is consistent with the finding that only 43 percent of respondents say their WAF is very effective (7+ responses on the 10-point scale).

Part 2. Key findings

 In this section, we provide a deeper analysis of the research findings. The complete audited findings are presented in the Appendix of this report. We have organized the report according to the following themes:

  • The difficulty in protecting Web, mobile and API apps
  • The challenge of WAF deployment and management
  • Features that improve the WAF’s effectiveness

The difficulty in protecting Web, mobile and API apps

 Organizations prioritize the protection of Web and mobile applications. Organizations represented in this research protect an average of 158 Web, mobile and API apps. The primary focus of application security is on Web (67 percent of respondents) and mobile (58 percent of respondents) applications. Thirty-seven percent of respondents say their organizations are protecting API services.

Organizations are more effective at protecting mobile applications. When asked to rate their organization’s effectiveness in protecting mobile applications and API services, 54 percent of respondents say they are very effective in protecting mobile apps versus only 38 percent of respondents who say their effectiveness in protecting API services is very high.

Mobile client applications are most likely to interact with organizational applications. Some 55 percent of respondents say mobile apps interact with their organizations’ applications followed by partners using APIs (36 percent of respondents).

Attacks are bypassing the WAF. In the past 12 months, 65 percent of respondents say attacks on their organizations’ application tiers have bypassed the WAF frequently (23 percent) or sometimes (42 percent).

The challenge of WAF deployment and management

Security is the primary reason to invest in a WAF. Organizations are spending an average of $419,100 on WAF products and/or services and an additional average of $200,500 for staff to manage WAF-related security issues. Organizations typically have 2.5 full-time employees to manage the WAF. On average, the staff spends 45 hours per week responding to alerts and 16 hours per week creating and/or updating rulesets.

The top three reasons to invest in a WAF are the protection of the IT infrastructure (60 percent of respondents), prevention of attacks (56 percent of respondents) and the protection of data (54 percent of respondents).

Most WAFs are used only for attack detection. Only 22 percent of WAFs deployed in the organizations represented in this study both detect and block attacks.

Currently, most WAFs are either an on-premises hardware appliance or managed appliance. About one third of respondents say their WAF is an on-premises hardware appliance and 21 percent of respondents say this is the ideal deployment. Twenty percent of respondents say an on-premises virtual appliance is ideal and 18 percent of respondents say a cloud-based WAF is ideal.

Read the rest of this study at the Cequence website.

Is work killing you? Should we blame our tech, ourselves, or our culture? A So, Bob podcast

Bob Sullivan

“Working too hard can give you a heart attack-ack-ack-ack-ack-ack. You oughta know by now.”

Summer is well under way, and if you haven’t planned your vacation yet, you aren’t alone.  Americans are terrible at taking vacations, terrible at relaxing — terrible at shutting down and rebooting. I think I know why, and I bet you do, too.

Always-on gadgets mean always-on employees, and this is driving many of us mad.  Five years ago, I began a series of stories called The Restless Project to examine all the ways Americans are struggling with constant pressure from tech, and from a broken economy.

People are working themselves sick, even dying at the office.  I thought I might write a book about overwork, but then good friend Annie Murphy Paul (more from her soon!) introduced me to then-Washington Post reporter Brigid Schulte, and I learned she had already written that book. It’s called Overwhelmed: Work, Love, and Play When No One Has The Time.

Instead of a book, I’ve now made a podcast about this subject, with Alia Tavakolian and Spoke Media. Click play below or listen on iTunes, on Stitcher, or wherever you get your podcasts.

Maybe this is nothing new. When Billy Joel sang about working too hard in 1977, he wasn’t singing about smartphones.  OTOH, tech and all its trappings make keeping up with life harder and harder with each passing email. New gadgets and new communications tools (Snapchat! Messenger! Instagram DMs!) continuously add to our pile of things to check on.

Brigid is one of the first guests in our So, Bob series, and we talked about the intersection of technology and overwork (Spoiler: She doesn’t blame tech nearly as much as I do!).  She is fascinating. Here’s a taste of our discussion.

(Brigid now works for New America and is director of the Better Life Lab.)


BRIGID: There’s a fascinating phenomenon that, that behavioral scientists have found, they call it tunneling. … you kind of have this tunnel vision and then what you’re only able to do is focus on just the few things right in front of you. You’re not able to stop and ask yourself bigger questions. You’re not able to see the bigger picture. You can’t get out of the tunnel and ask yourself that question, do I even want to be in this tunnel?

BOB: …So for you now, it’s almost like a sensation. You’re like, oh my God, I’m going in the tunnel.

BRIGID: Yeah, I can feel it closing in. Yeah. You know, and I, it was somebody else once said because we have this crazy, achievement culture and it’s all about productivity and all of these tips and tricks and life hacks and tech. It’s all supposed to, you know, they, on the one hand we say it’s to make our life easier, but let’s face it in this kind of busy-ness as a badge of honor culture, it’s about cramming more crap into your day and then somehow feeling awesome about just how insanely busy you were and somehow you will manage to end the day standing up.

{snip}

BRIGID: I would talk to these researchers, this one woman who studies busy-ness and the fast pace of life in North Dakota of all places. And she’s come to the conclusion that busy-ness we’ve made it such a badge of honor that it’s a choice, but she also calls it a non choice choice because you feel like you can’t make any other choice if you want to fit in or if you want to have status. And so, um, I do try to pull out of that like what a sick way to get status. You know, by like making ourselves, you know, ill and unhealthy and not making time for things that you enjoy, that there’s something to be, you know, to be proud about that you have work life conflict or never go on vacation or don’t sleep well. That’s crazy. I do feel like, uh, jobs have become incredibly complicated. I do feel like technology as a part of that. Um, and I think that we haven’t figured out how to manage that well as human beings. And, and so those are things that can be challenging that uh, figuring out how much is enough when you are a knowledge worker and there isn’t a whistle that goes off at the end of the day, you don’t have any visual markers. Like I’ve, you know, created my pile of widgets and I can check the box. It’s very difficult to figure out when you’re done and when is it good enough. Um, so that’s really a challenge of modern work. And I don’t think we have good answers and I’m here to say I’m trying to figure it out myself.

DDoS attacks are relentless, and 5G will only make things worse

Larry Ponemon

The State of DDoS Attacks against Communication Service Providers, sponsored by A10 Networks, specifically studies the threats to Internet Service Providers (ISPs) and Mobile and/or Cloud Service Providers (CSPs). Ponemon Institute surveyed 325 IT and IT security practitioners in the United States who work in communication service provider companies and are familiar with their defenses against DDoS. (Click here to access the full report at A10 Networks)

According to the research, communication service providers (CSPs) are increasingly vulnerable to DDoS attacks. In fact, 85 percent of respondents say DDoS attacks against their organizations are either increasing or continuing at the same relentless pace and 71 percent of respondents say they are not or only somewhat capable of launching measures to moderate the impact of DDoS attacks. The increase in IoT devices due to the advent of 5G will also increase the risk to CSPs.

Respondents were asked to estimate the number of DDoS attacks their organizations experienced in the past year from a range of 1 to more than 10. On average, CSPs experience 4 DDoS attacks per year. Based on the findings, the most common DDoS attacks target the network protocol, flood the network with traffic to starve out the legitimate requests and render the service unavailable. As a result, these companies will face such serious consequences as diminished end user and IT staff productivity, revenue losses and customer turnover.

 The most serious barriers to mitigating DDoS attacks are the lack of actionable threat intelligence, the lack of in-house expertise and technologies. As a result of these challenges, confidence in the ability to detect and prevent DDoS attacks is low. Only 34 percent of respondents say their organizations are very effective or effective in preventing the impact of the attack and only 39 percent of respondents say they are effective in detecting these attacks.

Following are the most salient findings from the research.

The most dangerous DDoS attackers are motivated by money. The DDoS attacker who uses extortion for financial gain represents the greatest cybersecurity risk to companies, according to 48 percent of respondents. These criminals make money offering their services to attack designated targets or by demanding ransom for not launching DDoS attacks. Forty percent of respondents fear the attacker who executes a DDoS attack to distract the company from another attack. Only 25 percent of respondents say a thrill seeker and 21 percent of respondents say an angry attacker pose the greatest cybersecurity risk.

Attacks targeting the network layer or volumetric floods are the most common attacks experienced. The most common types of DDoS attacks are network protocol level attacks (60 percent of respondents) and volumetric floods (56 percent of respondents). In a volumetric flood, the attacker can simply flood the network with traffic to starve out the legitimate requests to the DNS or web server.
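
To see why volumetric floods are conceptually simple yet operationally hard to stop, here is a toy, purely illustrative Python sketch of per-source rate detection over a sliding window. The thresholds are invented; this is not any vendor’s detection logic.

```python
import time
from collections import defaultdict, deque

# Toy volumetric-flood detector: flag any source whose request rate over a
# sliding window exceeds a fixed threshold. The numbers are illustrative.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 1000

recent = defaultdict(deque)  # source IP -> timestamps of its recent requests

def is_flooding(src_ip: str, now: float | None = None) -> bool:
    """Record one request from src_ip and report whether it exceeds the limit."""
    now = time.time() if now is None else now
    q = recent[src_ip]
    q.append(now)
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()  # discard requests that fell outside the window
    return len(q) > MAX_REQUESTS_PER_WINDOW
```

A real flood defeats this kind of per-host accounting by spreading requests across thousands of spoofed sources, which is one reason respondents rank network-wide traffic intelligence as the most effective mitigation.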

DDoS attacks pose the greatest threat at the network layer. Respondents were asked to allocate a total of 100 points to seven layers in the IT security stack. The layer most at risk for a DDoS attack is the network layer followed by the application layer. The findings suggest how organizations should allocate resources to prevent and detect DDoS attacks.

DDoS attacks can have severe financial consequences because they cause a loss of productivity, customer turnover and damage to property, plant and equipment. DDoS attacks affect the bottom line. Respondents consider the most severe consequence to be diminished productivity for both end users and IT staff.

Threat intelligence currently used to mitigate the threat of a DDoS attack is stale, inaccurate, incomplete and does not integrate well with various security measures. Seventy percent of respondents believe their DDoS-related threat intelligence is often too stale to be actionable and 62 percent of respondents say it is often inaccurate and/or incomplete. Other issues include the difficulty in integrating DDoS threat intelligence with various security measures and the high false positive rate, say 60 percent and 58 percent of respondents respectively.

To improve prevention and detection of DDoS attacks, organizations need actionable threat intelligence. Sixty-three percent of respondents say the biggest barrier to a stronger cybersecurity posture with respect to DDoS attacks is a lack of actionable intelligence. To address this problem, 68 percent of respondents say the most effective technology in mitigating DDoS threats is one that provides intelligence about networks and traffic.

Scalability, integration and reduction of false positives are the most important features to prevent DDoS attacks. As part of their strategy to address DDoS security risks, companies want the ability to scale during times of peak demand, integrate DDoS protection with cyber intelligence solutions, integrate analytics and automation to achieve greater visibility and precision in the intelligence gathering process and reduce the number of false positives in the generation of alerts.

Most organizations plan to offer DDoS scrubbing services. Sixty-six percent of respondents either have a DDoS scrubbing service (41 percent) or plan to offer one in the future (25 percent). Benefits of offering these services are revenue opportunities, enhanced customer loyalty and fewer support tickets from subscribers.

To read the rest of this study, visit A10 Networks.

Milk still expires, but now — mercifully– your passwords won’t

Bob Sullivan

Who hasn’t been interrupted during some important task by a strictly imposed network requirement to “update” a password?  And who hasn’t solved this modern annoyance with some ridiculous, unsafe naming convention like “CorpPassword1…CorpPassword2…CorpPassword3” and so on? People already have 150 or so passwords they must remember. Forced expiration made this already untenable situation even worse — 150 *new* passwords every month or so?

Those days are, thankfully, coming to a close. Last year, NIST revised its password guidelines, urging companies to abandon forced expirations. And recently, Microsoft announced it would remove the requirement from Windows 10 standards.

This will finally start a movement to drop forced password updates.

In its announcement, Microsoft was both logical and forceful in its argument.

“Periodic password expiration is an ancient and obsolete mitigation of very low value,” it said. “When humans are forced to change their passwords, too often they’ll make a small and predictable alteration to their existing passwords, and/or forget their new passwords.”

Either a password is compromised, in which case it should be changed now (why wait 30 or 60 days?), or it’s not compromised, in which case why create the extra hassle?

More from MS:

If it’s a given that a password is likely to be stolen, how many days is an acceptable length of time to continue to allow the thief to use that stolen password? The Windows default is 42 days. Doesn’t that seem like a ridiculously long time? Well, it is, and yet our current baseline says 60 days – and used to say 90 days – because forcing frequent expiration introduces its own problems. And if it’s not a given that passwords will be stolen, you acquire those problems for no benefit. Further, if your users are the kind who are willing to answer surveys in the parking lot that exchange a candy bar for their passwords, no password expiration policy will help you.

Gartner cybersecurity analyst Avivah Litan called the move a “most welcome step.”

“Finally a big tech company (that manages much of our daily authentication) is using independent reasoned thinking rather than going along with the crowd mentality when the crowd’s less secure password management practices are – however counterintuitive – less secure,” she wrote on her blog. 

What should companies be doing about passwords instead? Litan hopes this step signals the beginning of the end of traditional passwords.  Meanwhile, Microsoft hints at what better security looks like:

“What should the recommended expiration period be? If an organization has successfully implemented banned-password lists, multi-factor authentication, detection of password-guessing attacks, and detection of anomalous logon attempts, do they need any periodic password expiration? And if they haven’t implemented modern mitigations, how much protection will they really gain from password expiration?”
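
Microsoft doesn’t prescribe an implementation, but two of the mitigations it names (a banned-password list and a check against trivial variations of the old password) are easy to sketch. The Python below is a hypothetical illustration that catches exactly the “CorpPassword1, CorpPassword2” trick described above; the banned list is a tiny stand-in for a real curated one.

```python
import re

# Stand-in for a curated banned-password list; real lists are much larger.
BANNED = {"password", "letmein", "qwerty", "corppassword"}

def _core(pw: str) -> str:
    """Lowercase the password and strip any trailing digits."""
    return re.sub(r"\d+$", "", pw).lower()

def acceptable(new_pw: str, old_pw: str) -> bool:
    if _core(new_pw) in BANNED:
        return False  # the base word is on the banned list
    if _core(new_pw) == _core(old_pw):
        return False  # same base word: user just bumped the trailing number
    return True

print(acceptable("CorpPassword2", "CorpPassword1"))                 # False
print(acceptable("correct-horse-battery-staple", "CorpPassword1"))  # True
```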

Coincidentally, this week’s “So, Bob” podcast deals with password managers.  Listen on iTunes, on Stitcher or click play below if a play button appears for you.

Third-party IoT risk: companies don’t know what they don’t know

Larry Ponemon

Cyberattacks, data breaches and overall business disruption caused by unsecured IoT devices, both in the workplace and in the hands of third parties, are increasing because companies don’t know the depth and breadth of the risk exposures they face when leveraging IoT devices and other emerging technologies.

This is the third annual study on third party IoT risks, sponsored by Shared Assessments and conducted by Ponemon Institute to better understand how organizations are managing the risks created by known and unknown IoT devices.

This study includes responses from 605 individuals who participate in corporate governance and/or risk oversight activities and who are familiar with or responsible for managing third party risks associated with the use of IoT devices in their organizations. Seventy percent of respondents say their position requires them to manage risk oversight activities. All organizations represented in this research have a third party risk management program and an enterprise risk management program.

In this study, we define a data breach as a confirmed incident in which sensitive, confidential or otherwise protected data has been accessed and/or disclosed in an unauthorized fashion. Data breaches may involve protected health information (PHI), personally identifiable information (PII), trade secrets or intellectual property. A cyberattack is an attempt by hackers using malware, ransomware and other techniques to access, damage or destroy a network or system. A successful cyberattack may result in brand damage, business disruption, critical system outages, a data breach, significant financial losses and potential regulatory sanctions.

The following research findings reveal how much organizations do not know about the risks caused by IoT devices and applications used in the workplace and by third parties, and how slowly their defenses are maturing:

  • The number of cyberattacks, data breaches and service disruptions that have actually occurred
  • If their security safeguards and practices are adequate to mitigate IoT risk
  • Who is assigned accountability for IoT and how many IoT devices are in the workplace
  • IoT risk assessment and control validation techniques are evolving, but very slowly
  • How third party IoT risk management practices and policies can be used to mitigate the risk
  • Few companies conduct training and awareness programs to minimize risks created by users in the workplace and in their third parties
  • Few companies have sufficient in-house expertise to fully understand IoT risks in the workplace and in their third parties

IoT-related security incidents

In the context of this research, IoT is defined as the physical objects or “things” embedded with electronics, software, sensors and network connectivity, which enables these objects to collect, monitor and exchange data. Examples of IoT devices in the workplace include network-connected printers and building automation solutions.

IoT-related security incidents increase

As shown in Figure 1, there has been a dramatic increase in IoT-related data breaches and cyberattacks since 2017. Respondents who report their organization experienced a data breach specifically because of unsecured IoT devices or applications increased from 15 percent to 26 percent in just three years. Cyberattacks increased from 16 percent to 24 percent of respondents. These percentages may be low because, as shown in the research, organizations are not confident that they are aware of all the unsecured IoT devices and applications in their workplaces and in third parties.

Most salient trends

 It’s “not if, but when” organizations will have a security exploit caused by unsecured IoT devices or applications. Eighty-seven percent of respondents believe a cyberattack, such as a distributed denial of service (DDoS), is very likely to occur in the next two years, an increase from 82 percent of respondents in last year’s study. Similarly, 84 percent of respondents say it is very likely their company will have a data breach caused by an IoT device or application.

Third party IoT risk is increasing because of ransomware, the number of third parties and the inability to know if safeguards are sufficient. Fifty-nine percent of respondents say the IoT ecosystem is vulnerable to a ransomware attack. Other reasons for the increase in IoT risks are the inability to determine whether third party safeguards and IoT security policies are sufficient to prevent a data breach (55 percent of respondents) and the difficulty in managing the complexities of IoT platforms because of the number of third parties.

There is a significant gap between the monitoring of IoT devices in the workplace and the IoT of third parties. While just about half of respondents (51 percent) say their organizations are monitoring the devices used in their organizations, less than a third are monitoring their third parties’ use of IoT.

A gap also exists between awareness of IoT risks and the maturity of risk management programs. While 68 percent of respondents say third party risks are increasing because of the rise in IoT, many companies’ risk management practices are not mature. Specifically, only 45 percent of respondents say their risk management process is aligned with its business goals and only 34 percent of respondents say there is an approved risk appetite framework incorporating clearly expressed risk tolerance levels. Moreover, sufficient budget and staffing is not being allocated to manage third party IoT risks.

To read the full study, visit the Shared Assessments website.

The Santa Fe Group, an authority in risk management, is the managing agent of the Shared Assessments Program.

Is the Internet good or bad? So, Bob… podcast, episode 1

Bob Sullivan

I started covering technology in the late 90s, sitting in a cubicle on the Microsoft campus, but working for a separate company named MSNBC.com.  At the time, most publications didn’t have technology sections, or even full-time reporters.  Those who did write about tech were business reporters, worried mainly about revenue and stock price, or gadget reporters, worried mainly about what new, cool thing was coming on the market (wearable computers!).  I was immediately attracted to something different — broken technology. I started writing about computer viruses when nobody really cared about them; then the Melissa Virus and the LoveBug took the entire world offline for a day, and everyone cared. I went to hacker conferences before it was cool. I covered online dating scams, eBay fraud, credit card database thefts, child online safety, and the birth of surveillance capitalism.

At the same time, I would go to press conferences hosted by companies like Apple where (alleged) journalists would applaud each new product release.

It all made me wonder continuously: Is all this tech such a good idea? Is anyone stopping to think about any of this?

Eventually, plenty of other people became worried, too.  This story in the Canadian magazine Macleans from 2006 (titled “The Internet sucks”) captures the growing unease people had with the power of giant tech firms.  Read it; it’s cute what a side note Facebook was back then.

Since then, the pace of change has only accelerated, while our introspection about it has not kept up. Social mores haven’t kept up. Law hasn’t kept up.  The closest thing the U.S. has to a federal privacy law does not even mention cell phones or the Internet — because it is the Privacy Act of 1974.

Fortunately, plenty of people care about this now. Do a Google News search for privacy and you’ll find thousands of stories.  Facebook, for better and worse, has placed these issues top-of-mind for most people. As we discussed at the end of the Breach series on Equifax, privacy may be on life support, but it’s not dead.

And I am thrilled and so grateful that a person named Alia Tavakolian is at the top of the list of people who care. An Iranian-American from Dallas, Alia brings an entirely different perspective on these issues to the podcast. She has an amazing ability to ask the right question to get to the heart of the matter. And she emits empathy and understanding in such a way that people can’t wait to talk to her.  I’m incredibly lucky that she is my partner on this project — and with her come the incredibly talented and passionate people at Spoke Media.  Soon enough, you’ll become familiar with the Spoke Media Method and why the podcasts they make really are a cut above what you are used to hearing.

Please don’t interpret my skepticism of all technology as a distaste for it. Quite the contrary: Computers have been in my house since I was a small child (once upon a time, a remarkable thing to say!).  My father taught computers to high school kids in Newark, N.J. for decades. I played my first “video game” on a teletype.  Wrote my first program on a TRS-80.  Used a radio signal hack to add sound effects to a baseball game on a Commodore Pet.  I love this stuff.  I love that tech saved my father’s life after he had a heart attack. I love that I can communicate with old friends in real time at any time.

But there’s lots to worry about. And we don’t talk enough about it.  Mainly, I hate the kind of tricks that tech allows large companies to play on workers and consumers.  Your cable company makes billions of dollars each year, one hidden $9 fee at a time. Uber will make a few people billionaires while turning drivers into minimum-wage employees via sleight of hand, and along the way take down some mass transit systems, too.  Facebook threatens democracy and the very notion of truth, all because it didn’t want to pay people to play hall monitor. Smartphones are great for finding your lost 12-year-old on a class trip!  But they are also altering his mind so he’ll never be able to pay attention to other people the way you did.  Tech is often portrayed as magic, able to make “scalable” businesses that provide investors with unicorn-like 1,000x returns. Often, the only magic is the way it fools people.  Tech sometimes provides amazing, ground-breaking solutions to life’s problems.  Just as often, it’s merely a trick to make early investors rich, consequences be damned.

This is what we’ll talk about on So, Bob.  But we won’t just whine about the downfall of small retailers or the curse of short attention spans. We’re going to arm you with real ideas and real solutions so your gadgets don’t rule you — you rule your gadgets. Alia asks amazing questions, and I have a few answers. But mainly, I’ve been at this long enough that I know hundreds of really smart people who are generous with their time, and they’ll have much better answers. As our first guest, Canadian privacy lawyer Sinziana Gutui, suggested to me, I am an expert of experts.  At least, that’s what I hope to be for you.

So, readers — what questions do you have? Send them along to SoBob@SpokeMedia.io.  Follow us on Twitter or Instagram at @SoBobPod.  Give us 25 minutes — hopefully, every week.

Click play below, if a play button appears, or click on this Stitcher link or this iTunes link.

The impact of automation on cyber resilience

Larry Ponemon

The Ponemon Institute and IBM Resilient are pleased to release the findings of the fourth annual study on the importance of cyber resilience to ensure a strong security posture. For the first time, we feature the importance of automation to cyber resilience. In the context of this research, automation refers to enabling security technologies that augment or replace human intervention in the identification and containment of cyber exploits or breaches. Such technologies depend upon artificial intelligence, machine learning, analytics and orchestration.

Other topics covered in this report are:

  • The impact of the skills gap on the ability to be more cyber resilient
  • How complexity can be the enemy of cyber resilience
  • Lessons learned from organizations that have achieved a high level of cyber resilience
  • The importance of including the privacy function in cyber resilience strategies.

Cyber resilience and automation go hand in hand. When asked to rate the value of automation and cyber resilience to their security posture on a scale of 1 = low value to 10 = high value, 62 percent rate the value of cyber resilience as very high and an even higher percentage of respondents (76 percent) find automation very valuable. Moreover, according to the research, 60 percent of respondents say their organizations’ leaders recognize that investments in automation, machine learning, artificial intelligence and orchestration strengthen their cyber resilience.

How automation supports and improves cyber resilience

In this section, we compare the findings of the 23 percent of respondents who self-reported their organizations use automation extensively (high automation) vs. 77 percent of respondents who use automation moderately, insignificantly or not at all (overall sample). Following are six benefits when automation is used extensively in the organization.

  1. High automation organizations are better able to prevent security incidents and disruption to IT and business processes. Measures used to determine improvements in cyber resilience are cyberattacks prevented and a reduction in the time to identify and contain the incident.
  2. High automation organizations rate their cyber resilience much higher than the overall sample and also rate their ability to prevent, detect, respond to and contain a cyberattack as much higher.
  3. Automation increases the importance of having skilled cybersecurity professionals such as security analysts, forensic analysts, developers and SecDevOps. Respondents in high automation organizations are more likely to recognize the importance of having cybersecurity professionals in their cybersecurity incident response plan (CSIRP) (86 percent) and are not as likely to have difficulty in hiring these professionals.
  4. High automation organizations are maximizing the benefits of threat intelligence sharing and advanced technologies. In every case, respondents in organizations that are extensive users of automation are more likely to believe threat intelligence and sharing, DevOps and secure SDLC, analytics and artificial intelligence are most effective in achieving cyber resilience.
  5. Automation can reduce complexity in the IT infrastructure. High automation organizations are more likely to say their organizations have the right number of security solutions and technologies. This can be accomplished by aligning in-house expertise to tools so that investments are leveraged properly. Respondents in the overall sample are more likely to have too many security solutions and technologies.
  6. High automation organizations recognize the value of the privacy function in achieving cyber resilience. Most respondents in this research recognize that the privacy role is becoming increasingly important, especially due to the EU’s GDPR and the California Consumer Privacy Act. Moreover, high automation organizations are more likely than the overall sample to recognize the importance of aligning the privacy and cybersecurity roles in their organizations (71 percent vs. 62 percent).

Lessons learned from high performing organizations

 As part of this research, we identified certain organizations represented in this study that self-reported as having achieved a high level of cyber resilience and are better able to mitigate risks, vulnerabilities and attacks.

Of the 3,655 organizations represented in this study, 960 respondents (26 percent of the total sample) self-reported 9+ on a scale of 1 = low resilience to 10 = high resilience. Respondents from these organizations, referred to as high performers, are much more confident in the strength of their security posture compared to those who self-reported they have not achieved a high state of cyber resilience, referred to as average performers. Following are seven benefits from achieving a highly effective cyber resilience security posture.

  1. High performers are significantly more confident in their ability to prevent, detect, contain and recover from a cyberattack. Seventy-one percent of respondents in high performing organizations are very confident in their ability to prevent a cyberattack, whereas slightly more than half (53 percent) of respondents from the other organizations believe they have a high ability to prevent a cyberattack.
  2. High performers are far more likely to have a CSIRP that is applied consistently across the entire enterprise, which makes this group far more likely to prevent, detect, contain and respond to a cyberattack. Only 5 percent of high performers do not have a CSIRP. In contrast, 24 percent of organizations in the overall sample do not have a CSIRP.
  3. Communication with senior leaders about the state of cyber resilience occurs more frequently in high performing organizations. More than half of respondents (51 percent) vs. 40 percent in the overall sample communicate the effectiveness of cyber resilience in the prevention, detection, containment and response to cyberattacks to the C-suite and board of directors.
  4. Senior management in high performing organizations is more likely to understand the correlation between cyber resilience and reputation in the marketplace, perhaps because of frequent communication with the C-suite. As a result, high performing organizations are more likely to have adequate funding and staffing to achieve cyber resilience.
  5. Senior management’s awareness of the relationship between cyber resilience and reputation seems to result in greater support for investment in automation, machine learning, AI and orchestration to achieve a higher level of cyber resilience. In fact, 82 percent of respondents in high performing organizations use automation significantly or moderately, versus 71 percent in the overall sample.
  6. High performers are more likely to value automation in achieving a high level of cyber resilience. When asked to rate the value of automation, 90 percent of respondents in high performing organizations say automation is highly valuable to achieving cyber resilience, compared with 75 percent of respondents in the overall sample.
  7. High performers are more likely to have streamlined their IT infrastructure and reduced complexity. More than half of respondents (53 percent) vs. only 30 percent of respondents in the overall sample say their organizations have the right number of security solutions and technologies to be cyber resilient. The average number of separate security solutions and technologies in high performing organizations is 39 vs. 45 in the overall sample.

To read the entire report, visit IBM’s website at https://www.ibm.com/account/reg/us-en/signup?formid=urx-37792

The impact of unsecured digital identities (an expired certificate was partly to blame for Equifax)

Larry Ponemon

The Impact of Unsecured Digital Identities, sponsored by Keyfactor, was conducted to understand the challenges and costs facing organizations in the protection and management (or mismanagement) of cryptographic keys and digital identities. Ponemon Institute surveyed 596 IT and IT security practitioners in the United States who are familiar with their companies’ strategy for the protection of digital identities.

As shown in Figure 1, 74 percent of respondents say digital certificates have caused and still cause unanticipated downtime or outages. Seventy-three percent of respondents are also aware that failing to secure keys and certificates undermines the trust their organization relies upon to operate. And, 71 percent of respondents believe their organizations do not know how many keys and certificates they have.
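
Given that nearly three-quarters of respondents have been burned by certificate-driven downtime, even a basic expiry monitor helps. Below is a minimal sketch using only Python’s standard library; the hostname is a placeholder, and a real deployment would read from a certificate inventory and alert rather than print.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(hostname: str, port: int = 443) -> float:
    """Connect to a TLS endpoint and return days until its certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()
    # 'notAfter' looks like 'Jun  1 12:00:00 2025 GMT'
    expires = datetime.strptime(cert["notAfter"], "%b %d %H:%M:%S %Y %Z")
    delta = expires.replace(tzinfo=timezone.utc) - datetime.now(timezone.utc)
    return delta.total_seconds() / 86400

# Placeholder host; a real monitor would walk an inventory of endpoints.
for host in ["www.example.com"]:
    remaining = days_until_expiry(host)
    status = "WARNING" if remaining < 30 else "OK"
    print(f"{status}: {host} certificate expires in {remaining:.0f} days")
```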

According to the findings, the growth in the use of digital certificates is causing the following operational issues and security threats:

  • Operational costs are increasing with the need to add additional layers of encryption for critical data, which requires securing keys and managing digital certificates to comply with data protection regulations.
  • Failed audits and lack of compliance are the costliest and most serious threats to an organization’s ability to minimize the risk of unsecured digital identities and avoid costly fines.
  • The risk of unsecured digital identities is undermining trust with customers and business partners.
  • Unanticipated downtime or outages caused by digital certificates are having significant financial consequences in terms of productivity loss, including the diminishment of the IT security team’s ability to be productive.
  • Most organizations do not have adequate IT security staff to maintain and secure keys and certificates, especially in the deployment of PKI. Further, most organizations do not know how many keys and certificates IT security needs to manage.
  • Pricing models can prevent organizations from investing in solutions that cover every identity across the enterprise.
  • Organizations have difficulty securing keys and certificates through all stages of the lifecycle, from generation, request, renewal and rotation to revocation.

The total cost for failed certificate management practices

The research reveals the seriousness and cost of the following five cybersecurity risks created by ineffective key or certification management problems. For the following five scenarios, respondents were asked to estimate operational and compliance costs, the cost of security exploits and the likelihood they will occur over the next two years:

  • The cost of unplanned outages due to certificate expiration is estimated to average $11.1 million, and there is a 30 percent likelihood organizations will experience these incidents over the next two years.
  • The cost of failed audits or compliance due to undocumented or unenforced key management policies or insufficient key management practices is estimated to average $14.4 million, and there is a 42 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of server certificate and key misuse is estimated to average $13.4 million, and there is a 39 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of code signing certificate and key misuse is estimated to average $15 million, and there is a 29 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of Certificate Authority (CA) compromise or rogue CA for man-in-the-middle (MITM) and phishing attacks is estimated to average $13.2 million, and there is a 38 percent likelihood that organizations will experience these incidents over the next two years.

Based on respondents’ estimates, the average total cost to a single company if all five scenarios occurred would be $67.2 million over a two-year period. The costliest scenarios would be code signing certificate and key misuse and failed audits or compliance due to undocumented or unenforced key management policies or insufficient key management practices (an average of $15 million and $14.4 million, respectively). The research also reveals how likely these scenarios are to occur and how many times organizations represented in the study have experienced these attacks over a period of 24 months.
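
For readers who want to recombine the published figures, here is a quick sketch; the names and numbers come straight from the bullets above, and the small gap between the computed sum ($67.1 million) and the report’s $67.2 million presumably reflects rounding in the published averages.

```python
# Scenario averages (in $ millions) and two-year likelihoods, as quoted above.
scenarios = {
    "Unplanned outages (certificate expiration)": (11.1, 0.30),
    "Failed audits or compliance":                (14.4, 0.42),
    "Server certificate and key misuse":          (13.4, 0.39),
    "Code signing certificate and key misuse":    (15.0, 0.29),
    "CA compromise or rogue CA (MITM/phishing)":  (13.2, 0.38),
}

total = sum(cost for cost, _ in scenarios.values())
print(f"Combined cost if all five occurred: ${total:.1f}M")  # $67.1M

# Rank the scenarios by average cost, costliest first.
for name, (cost, p) in sorted(scenarios.items(), key=lambda kv: -kv[1][0]):
    print(f"  {name}: ${cost:.1f}M average, {p:.0%} two-year likelihood")
```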

CLICK HERE TO DOWNLOAD THE COMPLETE STUDY