DDoS attacks are relentless, and 5G will only make things worse

Larry Ponemon

The State of DDoS Attacks against Communication Service Providers, sponsored by A10 Networks, specifically studies the threats to Internet Service Providers (ISPs), mobile carriers and cloud service providers (CSPs). Ponemon Institute surveyed 325 IT and IT security practitioners in the United States who work in communication service provider companies and are familiar with their defenses against DDoS attacks. (Click here to access the full report at A10 Networks.)

According to the research, communication service providers (CSPs) are increasingly vulnerable to DDoS attacks. In fact, 85 percent of respondents say DDoS attacks against their organizations are either increasing or continuing at the same relentless pace, and 71 percent of respondents say they are not capable, or only somewhat capable, of launching measures to moderate the impact of DDoS attacks. The increase in IoT devices due to the advent of 5G will also increase the risk to CSPs.

Respondents were asked to estimate the number of DDoS attacks their organizations experienced in the past year, from a range of 1 to more than 10. On average, CSPs experience four DDoS attacks per year. Based on the findings, the most common DDoS attacks target network protocols or flood the network with traffic, starving out legitimate requests and rendering the service unavailable. As a result, these companies face such serious consequences as diminished end-user and IT staff productivity, revenue losses and customer turnover.

The most serious barriers to mitigating DDoS attacks are a lack of actionable threat intelligence and a lack of in-house expertise and technologies. As a result of these challenges, confidence in the ability to detect and prevent DDoS attacks is low: only 34 percent of respondents say their organizations are very effective or effective in preventing the impact of an attack, and only 39 percent say they are effective in detecting these attacks.

Following are the most salient findings from the research.

The most dangerous DDoS attackers are motivated by money. The DDoS attacker who uses extortion for financial gain represents the greatest cybersecurity risk to companies, according to 48 percent of respondents. These criminals make money offering their services to attack designated targets or demanding ransom for not launching DDoS attacks. Forty percent of respondents fear the attacker who executes a DDoS attack to distract the company from another attack. Only 25 percent of respondents say a thrill seeker and 21 percent of respondents say an angry attacker pose the greatest cybersecurity risk.

Attacks targeting the network layer or volumetric floods are the most common attacks experienced. The most common types of DDoS attacks are network protocol level attacks (60 percent of respondents) and volumetric floods (56 percent of respondents). In a volumetric flood, the attacker can simply flood the network with traffic to starve out the legitimate requests to the DNS or web server.
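To make the mechanism concrete, below is a minimal sketch of per-source rate monitoring, the simplest building block for spotting the kind of flood described above. The class name, threshold and window values are illustrative assumptions, not figures from the report.

```python
from collections import defaultdict, deque
import time

class RateMonitor:
    """Per-source sliding-window request counter (illustrative sketch)."""

    def __init__(self, max_requests=100, window_seconds=1.0):
        self.max_requests = max_requests
        self.window = window_seconds
        self.events = defaultdict(deque)  # source -> recent request timestamps

    def record(self, source, now=None):
        """Record one request; return True if this source exceeds the rate limit."""
        now = time.monotonic() if now is None else now
        q = self.events[source]
        q.append(now)
        # Evict timestamps that have aged out of the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

Note that a real mitigation sits in the data path and must itself absorb the flood, which is exactly why simple per-server counters fail against large distributed attacks and why providers turn to dedicated scrubbing capacity.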

DDoS attacks pose the greatest threat at the network layer. Respondents were asked to allocate a total of 100 points to seven layers in the IT security stack. The layer most at risk for a DDoS attack is the network layer followed by the application layer. The findings suggest how organizations should allocate resources to prevent and detect DDoS attacks.

DDoS attacks can have severe financial consequences because they cause a loss of productivity, customer turnover and damage to property, plant and equipment. DDoS attacks affect the bottom line. Respondents consider the most severe consequences to be diminished productivity for both end users and IT staff.

Threat intelligence currently used to mitigate the threat of a DDoS attack is stale, inaccurate, incomplete and does not integrate well with various security measures. Seventy percent of respondents believe their DDoS-related threat intelligence is often too stale to be actionable and 62 percent of respondents say it is often inaccurate and/or incomplete. Other issues include the difficulty in integrating DDoS threat intelligence with various security measures and the high false positive rate, say 60 percent and 58 percent of respondents respectively.

To improve prevention and detection of DDoS attacks, organizations need actionable threat intelligence. Sixty-three percent of respondents say the biggest barrier to a stronger cybersecurity posture with respect to DDoS attacks is a lack of actionable intelligence. To address this problem, 68 percent of respondents say the most effective technology in mitigating DDoS threats is one that provides intelligence about networks and traffic.

Scalability, integration and reduction of false positives are the most important features to prevent DDoS attacks. As part of their strategy to address DDoS security risks, companies want the ability to scale during times of peak demand, integrate DDoS protection with cyber intelligence solutions, integrate analytics and automation to achieve greater visibility and precision in the intelligence gathering process and reduce the number of false positives in the generation of alerts.

Most organizations plan to offer DDoS scrubbing services. Sixty-six percent of respondents either have a DDoS scrubbing service (41 percent) or plan to offer one in the future (25 percent). Benefits of offering these services include revenue opportunities, enhanced customer loyalty and fewer support tickets from subscribers.

To read the rest of this study, visit A10 Networks.

Milk still expires, but now — mercifully — your passwords won’t

Bob Sullivan

Who hasn’t been interrupted during some important task by a strictly imposed network requirement to “update” a password?  And who hasn’t solved this modern annoyance by some ridiculous, unsafe naming convention like “CorpPassword1…CorpPassword2…CorpPassword3” and so on? People already have 150 or so passwords they must remember. Forced expiration made this already untenable situation even worse — 150 *new* passwords every month or so?

Those days are, thankfully, coming to a close. Last year, NIST revised its password guidelines, urging companies to abandon forced expirations. And recently, Microsoft announced it would remove the requirement from Windows 10 standards.

This will finally start a movement to drop forced password updates.

In its announcement, Microsoft was both logical and forceful in its argument.

“Periodic password expiration is an ancient and obsolete mitigation of very low value,” it said. “When humans are forced to change their passwords, too often they’ll make a small and predictable alteration to their existing passwords, and/or forget their new passwords.”

Either a password is compromised, in which case it should be changed now — why wait 30 or 60 days? — or it’s not compromised, in which case why create the extra hassle?

More from MS:

If it’s a given that a password is likely to be stolen, how many days is an acceptable length of time to continue to allow the thief to use that stolen password? The Windows default is 42 days. Doesn’t that seem like a ridiculously long time? Well, it is, and yet our current baseline says 60 days – and used to say 90 days – because forcing frequent expiration introduces its own problems. And if it’s not a given that passwords will be stolen, you acquire those problems for no benefit. Further, if your users are the kind who are willing to answer surveys in the parking lot that exchange a candy bar for their passwords, no password expiration policy will help you.

Gartner cybersecurity analyst Avivah Litan called the move a “most welcome step.”

“Finally a big tech company (that manages much of our daily authentication) is using independent reasoned thinking rather than going along with the crowd mentality when the crowd’s less secure password management practices are – however counterintuitive – less secure,” she wrote on her blog. 

What should companies be doing about passwords instead? Litan hopes this step signals the beginning of the end of traditional passwords.  Meanwhile, Microsoft hints at what better security looks like:

“What should the recommended expiration period be? If an organization has successfully implemented banned-password lists, multi-factor authentication, detection of password-guessing attacks, and detection of anomalous logon attempts, do they need any periodic password expiration? And if they haven’t implemented modern mitigations, how much protection will they really gain from password expiration?”
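One of the modern mitigations Microsoft names above, the banned-password list, is straightforward to sketch. The word list and normalization rules below are illustrative assumptions; real deployments use far larger lists and fuzzier matching.

```python
# Illustrative banned-word list; production systems use much larger ones.
BANNED = {"password", "corppassword", "qwerty", "letmein"}

# Undo common character substitutions: 0->o, 1->l, 3->e, 4->a, 5->s, 7->t, $->s, @->a
LEET = str.maketrans("013457$@", "oleastsa")

def is_banned(candidate: str) -> bool:
    """True if the candidate matches a banned word after normalization:
    lowercase, strip trailing digits, undo common substitutions."""
    normalized = candidate.lower().rstrip("0123456789")
    normalized = normalized.translate(LEET)
    return normalized in BANNED
```

Under these rules, “CorpPassword3” and “P@ssw0rd1” both normalize to banned words and would be rejected, which is precisely the “small and predictable alteration” pattern forced expiration encourages.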

Coincidentally, this week’s “So, Bob” podcast deals with password managers.  Listen on iTunes, on Stitcher or click play below if a play button appears for you.


Third-party IoT risk: companies don’t know what they don’t know

Larry Ponemon

Cyberattacks, data breaches and overall business disruption caused by unsecured IoT devices, whether in the workplace or used by third parties, are increasing because companies don’t know the depth and breadth of the risk exposures they face when leveraging IoT devices and other emerging technologies.

This is the third annual study on third-party IoT risks, sponsored by Shared Assessments and conducted by Ponemon Institute to better understand how organizations are managing the risks created by known and unknown IoT devices.

Responses from 605 individuals who participate in corporate governance and/or risk oversight activities and who are familiar with or responsible for managing third-party risks associated with the use of IoT devices in their organizations are included in this study. Seventy percent of respondents say their position requires them to manage risk oversight activities. All organizations represented in this research have a third-party risk management program and an enterprise risk management program.

In this study, we define a data breach as a confirmed incident in which sensitive, confidential or otherwise protected data has been accessed and/or disclosed in an unauthorized fashion. Data breaches may involve protected health information (PHI), personally identifiable information (PII), trade secrets or intellectual property. A cyberattack is an attempt by hackers using malware, ransomware and other techniques to access, damage or destroy a network or system. A successful cyberattack may result in brand damage, business disruption, critical system outages, a data breach, significant financial losses and potential regulatory sanctions.

The following research findings reveal what organizations do not know about the risks caused by IoT devices and applications used in the workplace and by third parties, and how slowly their risk management practices are maturing:

  • The number of cyberattacks, data breaches and service disruptions that have actually occurred
  • If their security safeguards and practices are adequate to mitigate IoT risk
  • Who is assigned accountability for IoT and how many IoT devices are in the workplace
  • IoT risk assessment and control validation techniques are evolving, but very slowly
  • How third party IoT risk management practices and policies can be used to mitigate the risk
  • Few companies conduct training and awareness programs to minimize risks created by users in the workplace and in their third parties
  • Few companies have sufficient in-house expertise to fully understand IoT risks in the workplace and in their third parties

Defining IoT

In the context of this research, IoT is defined as the physical objects or “things” embedded with electronics, software, sensors and network connectivity, which enables these objects to collect, monitor and exchange data. Examples of IoT devices in the workplace include network-connected printers and building automation solutions.

IoT-related security incidents increase

As shown in Figure 1, there has been a dramatic increase in IoT-related data breaches and cyberattacks since 2017. Respondents who report their organization experienced a data breach specifically because of unsecured IoT devices or applications increased from 15 percent to 26 percent in just three years; respondents reporting cyberattacks increased from 16 percent to 24 percent. These percentages may be low because, as shown in the research, organizations are not confident that they are aware of all the unsecured IoT devices and applications in their workplaces and in third parties.


Most salient trends

 It’s “not if, but when” organizations will have a security exploit caused by unsecured IoT devices or applications. Eighty-seven percent of respondents believe a cyberattack, such as a distributed denial of service (DDoS), is very likely to occur in the next two years, an increase from 82 percent of respondents in last year’s study. Similarly, 84 percent of respondents say it is very likely their company will have a data breach caused by an IoT device or application.

Third party IoT risk is increasing because of ransomware, the number of third parties and the inability to know if safeguards are sufficient. Fifty-nine percent of respondents say the IoT ecosystem is vulnerable to a ransomware attack. Other reasons for the increase in IoT risk are the inability to determine whether third-party safeguards and IoT security policies are sufficient to prevent a data breach (55 percent of respondents) and the difficulty of managing the complexities of IoT platforms because of the number of third parties.

There is a significant gap between the monitoring of IoT devices in the workplace and the IoT of third parties. While just about half of respondents (51 percent) say their organizations are monitoring the devices used in their organizations, less than a third are monitoring their third parties’ use of IoT.

A gap also exists between awareness of IoT risks and the maturity of risk management programs. While 68 percent of respondents say third party risks are increasing because of the rise in IoT, many companies’ risk management practices are not mature. Specifically, only 45 percent of respondents say their risk management process is aligned with its business goals and only 34 percent of respondents say there is an approved risk appetite framework incorporating clearly expressed risk tolerance levels. Moreover, sufficient budget and staffing are not being allocated to manage third-party IoT risks.

To read the full study, visit the Shared Assessments website.

The Santa Fe Group, an authority in risk management, is the managing agent of the Shared Assessments Program.

Is the Internet good or bad? So, Bob… podcast, episode 1

Bob Sullivan

I started covering technology in the late 90s, sitting in a cubicle on the Microsoft campus, but working for a separate company.  At the time, most publications didn’t have technology sections, or even full-time reporters.  Those who did write about tech were business reporters, worried mainly about revenue and stock price, or gadget reporters, worried mainly about what new, cool thing was coming on the market (wearable computers!).  I was immediately attracted to something different — broken technology. I started writing about computer viruses when nobody really cared about them; then the Melissa Virus and the LoveBug took the entire world offline for a day, and everyone cared. I went to hacker conferences before it was cool. I covered online dating scams, eBay fraud, credit card database thefts, child online safety, and the birth of surveillance capitalism.

At the same time, I would go to press conferences hosted by companies like Apple where (alleged) journalists would applaud each new product release.

It all made me wonder continuously: Is all this tech such a good idea? Is anyone stopping to think about any of this?

Eventually, plenty of other people became worried, too.  This story in the Canadian magazine Maclean’s from 2006 (titled “The Internet sucks”) captures the growing unease people had with the power of giant tech firms.  Read it; it’s cute what a side note Facebook was back then.

Since then, the pace of change has only accelerated, while our introspection about it has not kept up. Social mores haven’t kept up. Law hasn’t kept up.  The closest thing the U.S. has to a federal privacy law does not even mention cell phones or the Internet — because it is the Privacy Act of 1974.

Fortunately, plenty of people care about this now. Do a Google News search for privacy and you’ll find thousands of stories.  Facebook, for better and worse, has placed these issues top-of-mind for most people. As we discussed at the end of the Breach series on Equifax, privacy may be on life support, but it’s not dead.

And I am thrilled and so grateful that a person named Alia Tavakolian is at the top of the list of people who care. An Iranian-American from Dallas, Alia brings an entirely different perspective on these issues to the podcast. She has an amazing ability to ask the right question to get to the heart of the matter. And she radiates empathy and understanding in such a way that people can’t wait to talk to her.  I’m incredibly lucky that she is my partner on this project — and with her come the incredibly talented and passionate people at Spoke Media.  Soon enough, you’ll become familiar with the Spoke Media Method and why the podcasts they make really are a cut above what you are used to hearing.

Please don’t interpret my skepticism of all technology as a distaste for it. Quite the contrary: Computers have been in my house since I was a small child (once upon a time, a remarkable thing to say!).  My father taught computers to high school kids in Newark, N.J. for decades. I played my first “video game” on a teletype.  Wrote my first program on a TRS-80.  Used a radio signal hack to add sound effects to a baseball game on a Commodore Pet.  I love this stuff.  I love that tech saved my father’s life after he had a heart attack. I love that I can communicate with old friends in real time at any time.

But there’s lots to worry about. And we don’t talk enough about it.  Mainly, I hate the kind of tricks that tech allows large companies to play on workers and consumers.  Your cable company makes billions of dollars each year, one hidden $9 fee at a time. Uber will make a few people billionaires while turning drivers into minimum-wage employees via sleight of hand, and along the way take down some mass transit systems, too.  Facebook threatens democracy and the very notion of truth, all because it didn’t want to pay people to play hall monitor. Smartphones are great for finding your lost 12-year-old on a class trip!  But they are also altering his mind so he’ll never be able to pay attention to other people the way you did.  Tech is often portrayed as magic, able to make “scalable” businesses that provide investors with unicorn-like 1,000x returns. Often, the only magic is the way it fools people.  Tech sometimes provides amazing, ground-breaking solutions to life’s problems.  Just as often, it’s merely a trick to make early investors rich, consequences be damned.

This is what we’ll talk about on So, Bob.  But we won’t just whine about the downfall of small retailers or the curse of short attention spans. We’re going to arm you with real ideas and real solutions so your gadgets don’t rule you — you rule your gadgets. Alia asks amazing questions, and I have a few answers. But mainly, I’ve been at this long enough that I know hundreds of really smart people who are generous with their time, and they’ll have much better answers. As our first guest, Canadian privacy lawyer Sinziana Gutui, suggested to me, I am an expert of experts.  At least, that’s what I hope to be for you.

So, readers — what questions do you have? Send them along.  Follow us on Twitter or Instagram at @SoBobPod.  Give us 25 minutes — hopefully, every week.

Click play below, if a play button appears, or click on this Stitcher link or this iTunes link.

The impact of automation on cyber resilience

Larry Ponemon

The Ponemon Institute and IBM Resilient are pleased to release the findings of the fourth annual study on the importance of cyber resilience to ensure a strong security posture. For the first time, we feature the importance of automation to cyber resilience. In the context of this research, automation refers to enabling security technologies that augment or replace human intervention in the identification and containment of cyber exploits or breaches. Such technologies depend upon artificial intelligence, machine learning, analytics and orchestration.

Other topics covered in this report are:

  • The impact of the skills gap on the ability to be more cyber resilient
  • How complexity can be the enemy of cyber resilience
  • Lessons learned from organizations that have achieved a high level of cyber resilience
  • The importance of including the privacy function in cyber resilience strategies.

Cyber resilience and automation go hand in hand. When asked to rate the value of automation and cyber resilience to their security posture on a scale of 1 = low value to 10 = high value, 62 percent of respondents rate the value of cyber resilience as very high, and an even higher percentage (76 percent) find automation very valuable. Moreover, according to the research, 60 percent of respondents say their organizations’ leaders recognize that investments in automation, machine learning, artificial intelligence and orchestration strengthen their cyber resilience.

How automation supports and improves cyber resilience

In this section, we compare the findings of the 23 percent of respondents who self-reported their organizations use automation extensively (high automation) vs. 77 percent of respondents who use automation moderately, insignificantly or not at all (overall sample). Following are six benefits when automation is used extensively in the organization.

  1. High automation organizations are better able to prevent security incidents and disruption to IT and business processes. Measures used to determine improvements in cyber resilience are cyberattacks prevented and a reduction in the time to identify and contain the incident.
  2. High automation organizations rate their cyber resilience much higher than the overall sample and also rate their ability to prevent, detect, respond to and contain a cyberattack as much higher.
  3. Automation increases the importance of having skilled cybersecurity professionals such as security analysts, forensic analysts, developers and SecDevOps. Eighty-six percent of respondents in high automation organizations are more likely to recognize the importance of having cybersecurity professionals in their cybersecurity incident response plan (CSIRP) and are not as likely to have difficulty in hiring these professionals.
  4. High automation organizations are maximizing the benefits of threat intelligence sharing and advanced technologies. In every case, respondents in organizations that are extensive users of automation are more likely to believe threat intelligence and sharing, DevOps and secure SDLC, analytics and artificial intelligence are most effective in achieving cyber resilience.
  5. Automation can reduce complexity in the IT infrastructure. High automation organizations are more likely to say their organizations have the right number of security solutions and technologies. This can be accomplished by aligning in-house expertise to tools so that investments are leveraged properly. Respondents in the overall sample are more likely to have too many security solutions and technologies.
  6. High automation organizations recognize the value of the privacy function in achieving cyber resilience. Most respondents in this research recognize that the privacy role is becoming increasingly important, especially due to the EU’s GDPR and the California Consumer Privacy Act. Moreover, high automation organizations are more likely than the overall sample to recognize the importance of aligning the privacy and cybersecurity roles in their organizations (71 percent vs. 62 percent).

Lessons learned from high performing organizations

 As part of this research, we identified certain organizations represented in this study that self-reported as having achieved a high level of cyber resilience and are better able to mitigate risks, vulnerabilities and attacks.

Of the 3,655 organizations represented in this study, 960 respondents (26 percent of the total sample) self-reported 9+ on a scale of 1 = low resilience to 10 = high resilience. Respondents from these organizations, referred to as high performers, are much more confident in the strength of their security posture than those who self-reported they have not achieved a high state of cyber resilience, referred to as average performers. Following are seven benefits of achieving a highly effective cyber resilience security posture.

  1. High performers are significantly more confident in their ability to prevent, detect, contain and recover from a cyberattack. Seventy-one percent of respondents in high performing organizations are very confident in their ability to prevent a cyberattack, whereas slightly more than half (53 percent) of respondents from the other organizations believe they have a high ability to prevent a cyberattack.
  2. High performers are far more likely to have a CSIRP that is applied consistently across the entire enterprise, which makes this group far more likely to prevent, detect, contain and respond to a cyberattack. Only 5 percent of high performers do not have a CSIRP. In contrast, 24 percent of organizations in the overall sample do not have a CSIRP.
  3. Communication with senior leaders about the state of cyber resilience occurs more frequently in high performing organizations. More than half of respondents (51 percent) vs. 40 percent in the overall sample communicate the effectiveness of cyber resilience in the prevention, detection, containment and response to cyberattacks to the C-suite and board of directors.
  4. Senior management in high performing organizations is more likely to understand the correlation between cyber resilience and the company’s reputation in the marketplace, perhaps because of frequent communication with the C-suite. As a result, high performing organizations are more likely to have adequate funding and staffing to achieve cyber resilience.
  5. Senior management’s awareness of the relationship between cyber resilience and reputation seems to result in greater support for investment in automation, machine learning, AI and orchestration to achieve a higher level of cyber resilience. In fact, 82 percent of respondents in high performing organizations use automation significantly or moderately. In the overall sample of organizations, 71 percent of respondents say their organizations use automation significantly or moderately.
  6. High performers are more likely to value automation in achieving a high level of cyber resilience. When asked to rate the value of automation, 90 percent of respondents in high performing organizations say automation is highly valuable to achieving cyber resilience, vs. 75 percent of respondents in the overall sample.
  7. High performers are more likely to have streamlined their IT infrastructure and reduced complexity. More than half of respondents (53 percent) vs. only 30 percent of respondents in the overall sample say their organizations have the right number of security solutions and technologies to be cyber resilient. The average number of separate security solutions and technologies in high performing organizations is 39 vs. 45 in the overall sample.

To read the entire report, visit IBM’s website.

The impact of unsecured digital identities (an expired certificate was partly to blame for Equifax)

Larry Ponemon

The Impact of Unsecured Digital Identities, sponsored by Keyfactor, was conducted to understand the challenges and costs facing organizations in the protection and management (or mismanagement) of cryptographic keys and digital identities. Ponemon Institute surveyed 596 IT and IT security practitioners in the United States who are familiar with their companies’ strategy for the protection of digital identities.

As shown in Figure 1, 74 percent of respondents say digital certificates have caused and still cause unanticipated downtime or outages. Seventy-three percent of respondents are also aware that failing to secure keys and certificates undermines the trust their organization relies upon to operate. And, 71 percent of respondents believe their organizations do not know how many keys and certificates they have.


According to the findings, the growth in the use of digital certificates is causing the following operational issues and security threats:

  • Operational costs are increasing with the need to add additional layers of encryption for critical data, which requires securing keys and managing digital certificates to comply with data protection regulations.
  • Failed audits and lack of compliance are the costliest and most serious threats to an organization’s ability to minimize the risk of unsecured digital identities and avoid costly fines.
  • The risk of unsecured digital identities is undermining trust with customers and business partners.
  • Unanticipated downtime or outages caused by digital certificates are having significant financial consequences in terms of productivity loss, including the diminishment of the IT security team’s ability to be productive.
  • Most organizations do not have adequate IT security staff to maintain and secure keys and certificates, especially in the deployment of PKI. Further, most organizations do not know how many keys and certificates IT security needs to manage.
  • Pricing models can prevent organizations from investing in solutions that cover every identity across the enterprise.
  • Organizations have difficulty securing keys and certificates through all lifecycle stages, from generation and request to renewal, rotation and revocation.
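As a concrete illustration of lifecycle monitoring, the sketch below flags certificates approaching their renewal window, using only Python’s standard library to parse the notAfter timestamp that `ssl.getpeercert()` returns. The threshold is an illustrative assumption, not a recommendation from the report.

```python
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Days until a certificate's notAfter timestamp, in the format
    returned by ssl.getpeercert(), e.g. 'Jun  1 12:00:00 2030 GMT'."""
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after),
                                     tz=timezone.utc)
    return (expires - now).days

def needs_renewal(not_after: str, now: datetime, threshold_days: int = 30) -> bool:
    """Flag certificates inside the renewal warning window."""
    return days_until_expiry(not_after, now) <= threshold_days
```

A check this simple only works if the organization already has an inventory of its certificates, which, per the findings above, most respondents say their organizations lack.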


The total cost of failed certificate management practices

The research reveals the seriousness and cost of five cybersecurity risks created by ineffective key or certificate management. For each of the following five scenarios, respondents were asked to estimate operational and compliance costs, the cost of security exploits and the likelihood the scenario will occur over the next two years:

  • The cost of unplanned outages due to certificate expiration is estimated to average $11.1 million, and there is a 30 percent likelihood organizations will experience these incidents over the next two years.
  • The cost of failed audits or compliance due to undocumented or unenforced key management policies or insufficient key management practices is estimated to average $14.4 million, and there is a 42 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of server certificate and key misuse is estimated to average $13.4 million, and there is a 39 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of code signing certificate and key misuse is estimated to average $15 million, and there is a 29 percent likelihood that organizations will experience these incidents over the next two years.
  • The cost of Certificate Authority (CA) compromise or rogue CA for man-in-the-middle (MITM) and phishing attacks is estimated to average $13.2 million, and there is a 38 percent likelihood that organizations will experience these incidents over the next two years.

Based on respondents’ estimates, the average total cost to a single company if all five scenarios occurred would be $67.2 million over a two-year period. The costliest scenarios would be code signing certificate and key misuse and failed audits or compliance due to undocumented or unenforced key management policies or insufficient key management practices (an average of $15 million and $14.4 million, respectively). The research also reveals how likely these scenarios are to occur and how many times organizations represented in the study have experienced these attacks over a period of 24 months.
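Read as a simple risk model, the figures above also support a back-of-the-envelope calculation. The sketch below uses the report’s costs and likelihoods, but the expected-value framing is our illustration, not the report’s methodology; the sum of the rounded per-scenario averages matches the report’s $67.2 million total to within rounding.

```python
# Per-scenario average cost (in $ millions) and two-year likelihood,
# as reported by respondents.
scenarios = {
    "unplanned outage (certificate expiration)":     (11.1, 0.30),
    "failed audit or compliance":                    (14.4, 0.42),
    "server certificate and key misuse":             (13.4, 0.39),
    "code signing certificate and key misuse":       (15.0, 0.29),
    "CA compromise or rogue CA (MITM and phishing)": (13.2, 0.38),
}

# Worst case: every scenario occurs within the two-year window.
total_if_all_occur = sum(cost for cost, _ in scenarios.values())

# Probability-weighted expected loss over the same horizon.
expected_loss = sum(cost * p for cost, p in scenarios.values())  # roughly $24M
```

Weighting by likelihood shrinks the headline figure considerably, which is one reason risk teams present both the worst-case total and the expected loss when making the budget case for certificate management.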








Equifax: ‘This is … the big one we’ve all been waiting for’ — Breach podcast season 2

Bob Sullivan

“This is, potentially, the mother lode. The big one we’ve all been waiting for.” — Ron Lieber, The New York Times “Your Money” columnist.

So begins our second season of Breach, which just dropped this month. We begin with an episode titled “Why, Equifax?” — which means all the things you think it means.

How could a company with so much precious information lose it all in what ultimately turned out to be a cascade of errors? A patch that was never applied, software that didn’t work because a certificate wasn’t updated for 19 months, an IT team that relied on the “honor system” to implement security measures. Then, there’s the biggest irony of all: Hackers broke in through the very system — the dispute resolution portal — that was designed to help American consumers fix errors in their credit reports.

But let’s back up, to the biggest question most people had when Equifax was hacked: Who the heck is Equifax and why does it have all my most intimate personal information?

If those sound like old questions, they aren’t. We have answers — along with ideas about bigger questions, like “What now?” and “Has our privacy been murdered once and for all?”

Alia Tavakolian and I have spent six months researching what many believe is the most important hack ever, along with a team of researchers and producers at Spoke Media, led by Janielle Kastner.  I’m very proud of the result and I think you’ll like it.

Breach is a sponsored podcast paid for by Carbonite, but you’ll be glad to know Carbonite didn’t meddle in what we say or report on in the podcast.

This season, we are releasing six episodes, one week at a time, each one about 30 minutes.  We’ll explain the history of the credit bureau industry, the run-up to the breach, the bungling of the breach response, and the individuals who are fighting back in the most creative ways possible (wait until you hear what happens in small claims court).  We are also running a great experiment with consumer lawyer Joel Winston where we try to get every credit report on a single consumer (Think you have three? You might have dozens, or even hundreds.)

Then we’ll explain why, I believe, privacy isn’t dead. But it is on life support, and we have no time to waste.

You can listen to episode one by clicking play below, if that embedded link works for you. If not, click:

here for the Stitcher page

here for our iTunes page

Securing the modern vehicle: a study of automotive industry cybersecurity practices

Larry Ponemon

Today’s vehicle is a connected, mobile computer, which has introduced an issue the automotive industry has little experience dealing with: cybersecurity risk. Automotive manufacturers have become as much software companies as transportation companies, facing all the challenges inherent to software security.

Synopsys and SAE International partnered to commission this independent survey of the current cybersecurity practices in the automotive industry to fill a gap that has existed far too long—the lack of data needed to understand the automotive industry’s cybersecurity posture and its capability to address software security risks inherent in connected, software-enabled vehicles. Ponemon Institute was selected to conduct the study. Researchers surveyed 593 professionals responsible for contributing to or assessing the security of automotive components.

Software Security Is Not Keeping Pace with Technology in the Auto Industry

When automotive safety is a function of software, the issue of software security becomes paramount—particularly when it comes to new areas such as connected vehicles and autonomous vehicles. Yet, as this report demonstrates, both automobile OEMs and their suppliers are struggling to secure the technologies used in their products. Eighty-four percent of the respondents to our survey have concerns that cybersecurity practices are not keeping pace with the ever-evolving security landscape.

Automotive companies are still building up needed cybersecurity skills and resources. The security professionals surveyed for our report indicated that the typical automotive organization has only nine full-time employees in its product cybersecurity management program. Thirty percent of respondents said their organizations do not have an established product cybersecurity program or team. Sixty-three percent of respondents stated that they test less than half of hardware, software, and other technologies for vulnerabilities. Pressure to meet product deadlines, accidental coding errors, lack of education on secure coding practices, and vulnerability testing occurring too late in production are some of the most common factors that render software vulnerabilities. Our report illustrates the need for more focus on cybersecurity; secure coding training; automated tools to find defects and security vulnerabilities in source code; and software composition analysis tools to identify third-party components that may have been introduced by suppliers.

Software in the Automotive Supply Chain Presents a Major Risk

While most automotive manufacturers still produce some original equipment, their true strength is in research and development, designing and marketing vehicles, managing the parts supply chain, and assembling the final product. OEMs rely on hundreds of independent vendors to supply hardware and software components to deliver the latest in vehicle technology and design. Seventy-three percent of respondents surveyed in our report say they are very concerned about the cybersecurity posture of automotive technologies supplied by third parties. However, only 44 percent of respondents say their organizations impose cybersecurity requirements for products provided by upstream suppliers.

Connected Vehicles Offer Unique Security Issues

Automakers and their suppliers also need to consider what the connected vehicle means for consumer privacy and security. As more connected vehicles hit the roads, software vulnerabilities are becoming accessible to malicious hackers using cellular networks, Wi-Fi, and physical connections to exploit them. Failing to address these risks could be a costly mistake, given the impact they may have on consumer confidence, personal privacy, and brand reputation. Respondents to our survey viewed the technologies with the greatest risk to be RF technologies (such as Wi-Fi and Bluetooth), telematics, and self-driving (autonomous) vehicles. This suggests non-critical systems and connectivity are low-hanging fruit for attacks and should be the main focus of cybersecurity efforts.

As will be clear in the following paragraphs, survey respondents in a myriad of sectors of the industry show a significant awareness of the cybersecurity problem and have a strong desire to make improvements. Of concern is the 69 percent of respondents who do not feel empowered to raise their concerns up their corporate ladder, but efforts such as this report may help to bring the needed visibility of the problem to the executive and boardroom level. Just as lean manufacturing and ISO 9000 practices both brought greater quality to the automotive industry, a rigorous approach to cybersecurity is vital to achieve the full range of benefits of new automotive technologies while preserving quality, safety, and rapid time to market.

Sixty-two percent of those surveyed say a malicious or proof-of-concept attack against automotive technologies is likely or very likely in the next 12 months, but 69 percent reveal that they do not feel empowered to raise their concerns up their chain of command. More than half (52 percent) of respondents are aware of potential harm to drivers of vehicles because of insecure automotive technologies, whether developed by third parties or by their organizations. However, only 31 percent say they feel empowered to raise security concerns within their organizations.

Thirty percent of respondents overall say their organizations do not have an established product cybersecurity program or team. Only 10 percent say their organizations have a centralized product cybersecurity team that guides and supports multiple product development teams.

When these data are broken down by OEM or supplier, 41 percent of respondents in suppliers do not have an established product cybersecurity program or team of any kind. In contrast, only 18 percent of OEMs do not have a product security program or team.

A significant percentage of suppliers are overlooking a well-established best practice: to employ a team of experts to conduct security testing throughout the product development process, from the design phase through decommissioning.

The majority of the industry respondents believe they do not have appropriate levels of resources to combat the cybersecurity threats in the automotive space. On average, companies have only nine full-time employees in their product cybersecurity management programs. Sixty-two percent of respondents say their organizations do not have the necessary cybersecurity skills. More than half (51 percent) say they do not have enough budget and human capital to address cybersecurity risks.

Vehicles are now essentially a mobile IT enterprise that includes control systems, rich data, infotainment, and wireless mesh communications through multiple protocols. That connectivity can extend to the driver’s personal electronic devices, to other vehicles and infrastructure, and through the Internet to OEM and aftermarket applications, making them targets for cyberattacks. Unauthorized remote access to the vehicle network and the potential for attackers to pivot to safety-critical systems puts at risk not just drivers’ personal information but their physical safety as well.

Automotive engineers, product developers, and IT professionals highlighted several major security concern areas as well as security controls they use to mitigate risks.

Technologies viewed as causing the greatest risk are RF technologies, telematics, and self-driving vehicles. Of the technological advances making their way into vehicles, these three are seen to pose the greatest cybersecurity risks. Organizations should be allocating a larger portion of their resources to reducing the risk in these technologies.

Respondents say that pressure to meet product deadlines (71 percent), lack of understanding/training on secure coding practices (60 percent), and accidental coding errors (55 percent) are the most common factors that lead to vulnerabilities in their technologies. Engaging in secure coding training for key staff will target two of the main causes of software vulnerabilities in vehicles.

Download the rest of this report from the Synopsys website (PDF).

Target used location-based data to change prices on consumer app

Click to read KARE-TV’s investigation.

Bob Sullivan

Ever see a price for an item online, then look again and see a different price, and think you were going crazy? Probably not. You were probably encountering some form of dynamic pricing, which retailers have quietly dabbled in for many years.  Quietly, because every time consumers find out about it, there’s an uproar and they have to back off – as Target did this week, when a Minnesota TV station exposed the store for charging very different prices on its app and in its physical stores.  A shopper who claimed to have paid $99 for a razor in-store, then spotted the same thing online for $69, had tipped them off.

The station reproduced this pattern, with some striking results:

“For instance, Target’s app price for a particular Samsung 55-inch Smart TV was $499.99, but when we pulled into the parking lot of the Minnetonka store that price suddenly increased to $599.99 on the app,” the station said. (Give ’em a click, read the whole report).

KARE shopped for more items, and found an even more intriguing pattern: Basically, the closer shoppers were to the store, the more the item cost.   If you are near the store, you don’t need a price enticement, the logic goes.   It also means Target is following you around, virtually, and knows where you are.  And it’s looking over your shoulder to decide what price you deserve on an item.  Spooky.
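To see how such a pattern could work mechanically, here is a hypothetical sketch of an app choosing between two prices based on the shopper’s distance from a store. This is not Target’s actual implementation; the store coordinates, radius, and prices are invented for illustration.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical store location (roughly Minnetonka, MN -- illustrative only).
STORE = (44.9211, -93.4687)

def quote_price(shopper_lat, shopper_lon, list_price, discount_price,
                radius_km=1.0):
    """Return the discount price only when the shopper is far from the store.

    A shopper already in (or near) the parking lot needs no enticement,
    so they see the full list price -- the pattern KARE-TV observed.
    """
    dist = haversine_km(shopper_lat, shopper_lon, *STORE)
    return list_price if dist <= radius_km else discount_price

# Far from the store (downtown Minneapolis): the app shows the lower price.
print(quote_price(44.9778, -93.2650, 599.99, 499.99))  # 499.99
# In the parking lot: the price "suddenly increases" to the list price.
print(quote_price(44.9211, -93.4687, 599.99, 499.99))  # 599.99
```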

Target has changed its policies, according to KARE, in response to the story.

The firm sent me a full statement, included at the bottom of this story. It reads, in part, “We’ve made a number of changes within our app to make it easier to understand pricing and our price match policy.”  In essence, the firm has added language to its app that makes clear a price is valid in a store, or online — see the screenshot below, provided by Target.

I saw something vaguely similar recently when I priced rental cars for a trip to Seattle. When I was logged in using my “discount code” and membership, I got higher prices than when I shopped as an anonymous user.

There’s nothing illegal about dynamic pricing, probably,  even though it might seem unsavory, or downright deceptive. It’s definitely a Gotcha.  Why?  Because the rules of this game are not transparent to you.  And it takes advantage of people who might be too busy or distracted to play the “open another browser on another computer just to check” game when they are buying things.

But I’m here to tell you: This is the only way to buy things in the 21st Century. Shopping around used to mean driving around and getting different prices from different stores. Today, it means clicking around to make sure you aren’t being followed when you buy things.  Every. Single. Time.  Never make a hotel reservation without shopping both at an aggregator like Expedia and direct from the hotel. If you have time, call the hotel, too, and ask about the online price. When you are in a store, always pull out your smartphone and do a quick price comparison — not just at THAT retailer, but at Amazon, and at other shops.  And now you know, it’s best to price the item *before* you get to the store, just in case you are being followed.

Christopher Elliot, travel deal expert at — a site you should be reading — makes the point that software can help keep you from being followed by companies and dynamic pricing.

“You definitely have to log in and out and search for prices,” Elliot says. “Also, consider using your browser’s incognito mode. Companies are trying to track you and may change prices based on who you are, or who they think you are.”

You don’t always have to buy where the price is lowest; in fact, I’m against chasing every last dollar as a shopper.  It’s ok to pay a little more if you want to support local businesses, and often, people waste money and gas trying to save every last penny. That’s not the point here. You just want to make sure you aren’t getting ripped off. It’s a pain, I know.  Sorry. That’s Gotchaland.  And until some regulator forbids the practice, you have to live with it.


Image provided by Target. Note the phrases near the price indicating where it’s valid — in a store, or online.

“We appreciate the feedback we recently received on our approach to pricing within the Target app.

“The app is designed to help guests plan, shop and save whether they are shopping in store or on the go. We are constantly making updates and enhancements to offer the best experience for guests shopping at Target.

“We’ve made a number of changes within our app to make it easier to understand pricing and our price match policy. Each product will now include a tag that indicates if the price is valid in store or at In addition, every page that features a product and price will also directly link to our price match policy.

“We’re committed to providing value to our guests and that includes being priced competitively online and in our stores, and as a result, pricing and promotions may vary. Target’s price match policy allows guests to match the price of any item they see at Target or from a competitor, assuring they can always get the lowest price on any item.”

Guests will receive the latest version of the app in the next few days.

Secure file sharing & content collaboration for users, IT & security

Larry Ponemon

The ability to securely and easily share files and content in the workplace is essential to employees’ productivity, compliance with the EU’s General Data Protection Regulation (GDPR) and digital transformation. However, a lack of visibility into how users are accessing sensitive data and the file applications they are using is putting organizations at risk for a data breach. In fact, 63 percent of participants in this research believe it is likely that their companies had a data breach in the past two years because of insecure file sharing and content collaboration.

According to the findings, an average of 44 percent of employees in organizations use file sharing and collaboration solutions to store, edit or share content in the normal course of business. As a result of this extensive use, most respondents (72 percent) say that it is very important to ensure that the sensitive information in these solutions is secure.

Despite their awareness of the risks, only 39 percent of respondents rate their ability to keep sensitive contents secure in the file sharing and collaboration environment as very high. Only 34 percent of respondents rate the tools used to support the safe use of sensitive information assets in the file sharing and collaboration environment as very effective.

Sponsored by Axway Syncplicity, the purpose of this research is to understand file sharing and content collaboration practices in organizations and what practices should be taken to secure the data without impeding the flow of information. Ponemon Institute surveyed 1,371 IT and IT security practitioners in North America, United Kingdom, Germany and France. All respondents are familiar with content collaboration solutions and tools. Further, their job function involves the management, production and protection of content stored in files.

This section presents an analysis of the key findings. More details can be found on Axway’s website. Following are key themes in this research.

Data breaches in the file sharing and content collaboration environment are likely. Sixty-three percent of respondents say it was likely that their organizations experienced the loss or theft of sensitive information in the file sharing and collaboration environment in the past two years.

The best ways to avoid a data breach are to have skilled personnel with data security responsibilities (73 percent of respondents), more effective data loss protection technologies in place (65 percent of respondents), more budget (56 percent of respondents) and fewer silos and/or turf issues among IT, IT security and lines of business (49 percent of respondents).

Data breaches are likely because of risky user behavior. About 70 percent of respondents say they have received files and content not intended for them. Other risky events include: accidentally sharing files or contents with individuals not authorized to receive them, not deleting confidential contents or files as required by policies and accidentally sharing files or content with unauthorized individuals outside the organization, according to 67 percent, 62 percent and 59 percent of respondents, respectively.

A lack of visibility into users’ access puts sensitive information at risk. Only 31 percent of respondents are confident in having visibility into users’ access and file sharing applications. Some 65 percent of respondents say not knowing where sensitive data is constitutes a significant security risk. Only 27 percent of respondents say their organization has clear visibility into what file sharing applications are being used by employees at work. A consequence of not having visibility is the inability for IT leadership to know if lines of business are using file sharing applications without informing them (i.e. shadow IT).

Customer PII and confidential contents and files are the types of sensitive information at risk. The most sensitive types of data shared with colleagues and third parties is customer PII and confidential documents and files. Hence, these need to be most protected in the file sharing and collaboration environment.

The plethora of unstructured data makes managing the threats to sensitive information difficult. As defined in the research, unstructured data is information that either does not have a pre-defined data model or is not organized in a pre-defined manner. Unstructured information is typically text-heavy, but may contain data such as dates, numbers, and facts as well. An average of 53 percent of organizations’ sensitive data is unstructured and organizations have an average of almost 3 petabytes of unstructured data.

Most unstructured data is stored in email and file sharing solutions. Respondents estimate an average of 20.5 percent is stored in shared network drives and 20 percent is stored in other file sync and share solutions. Almost half (49 percent of respondents) are concerned about storing unstructured data in the cloud. Only about a fifth of unstructured data is stored in any one cloud-based service, such as Dropbox or Box (20 percent) or Office 365 (17 percent).

On average, almost half of an organization’s sensitive data is stored on-premises. According to Figure 7, an average of almost half (49 percent) of organizations’ sensitive information is stored on-premises and approximately 30 percent is located in the public cloud. An average of 22 percent of sensitive information is stored in the hybrid cloud. Hybrid cloud is a cloud computing environment that uses a mix of on-premises, private cloud and third-party, public cloud services with orchestration among the platforms.

Companies are challenged to keep sensitive content secure in the file sharing and collaboration environment. As mentioned earlier in the report, respondents are aware of the threats to their sensitive information, but admit their governance practices and technologies should be more effective. According to respondents, on average, about one-third of the data in the file sharing and collaboration environment is considered sensitive.

To classify the level of security that is needed, respondents say it is mostly determined by data usage, location of users and sensitivity of data type (62 percent, 61 percent and 60 percent, respectively). Twenty-four percent of respondents say their companies do not determine content and file-level confidentiality.

To read the rest of this report: Click here to visit Axway’s site.