
How data breaches affect reputation and share value

Larry Ponemon

How Data Breaches Affect Reputation & Share Value: A Study of U.S. Marketers, IT Practitioners and Consumers, conducted by Ponemon Institute and sponsored by Centrify, examines from the perspective of IT practitioners and marketers how a company’s reputation and share value can be affected by a data breach.  As part of this research, we surveyed consumers to learn their expectations about steps companies should take to safeguard their personal information and prevent data loss.

This study is unique because it presents the views of three diverse groups who have in common the ability to influence share value and reputation. Ponemon Institute surveyed 448 individuals in IT operations and information security (hereafter referred to as IT practitioners) and 334 senior level marketers and corporate communication professionals (hereafter referred to as CMOs).

Forty-three percent of IT practitioner respondents and 31 percent of CMOs in this study say their organization had a data breach involving the loss or theft of more than 1,000 records containing sensitive or confidential customer or business information in the past two years.  We also surveyed 549 consumers. Sixty-two percent of these respondents say in the past two years they have been notified by a company or government agency that their personal information was lost or stolen as a result of one or more data breaches.

The results of this study show how data loss affects shareholder value and customer loyalty.  To protect brand and reputation, it is critical the C-suite and boards of directors address consumers’ expectations about how their personal information is used and secured.  On a positive note, the study reveals the majority of both IT practitioners and CMOs believe their companies’ senior management understands the importance of brand management.

The effect of data breaches on stock price and customer losses

For the economic analysis of the stock price, we selected 113 publicly traded benchmarked companies that experienced a data breach involving the loss of customer or consumer data. We created a portfolio composed of the stock prices of these companies. We tracked the index value for 30 days prior to the announcement of the data breach and 90 days following the data breach.
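The index construction described above can be sketched roughly in code: normalize each breached company's stock price to 100 on the announcement day, then average the normalized series into an equal-weighted portfolio index over the event window. This is a minimal illustrative sketch, not the study's actual methodology; the function names and any example data are hypothetical.

```python
# Sketch of an equal-weighted event-study index: track each company's price
# over a window from 30 days before to 90 days after its breach announcement,
# normalized so the announcement-day price equals 100.

def event_window_index(prices, before=30, after=90):
    """prices: dict mapping day offset (relative to the announcement) -> price.
    Returns a dict of day offset -> index value (announcement day = 100)."""
    base = prices[0]  # price on the announcement day
    return {
        day: 100.0 * prices[day] / base
        for day in range(-before, after + 1)
        if day in prices
    }

def portfolio_index(companies, before=30, after=90):
    """Equal-weighted average of each company's normalized index, aligned on
    days relative to each company's own breach announcement."""
    indexes = [event_window_index(c, before, after) for c in companies]
    days = sorted(set().union(*(ix.keys() for ix in indexes)))
    return {
        day: sum(ix[day] for ix in indexes if day in ix)
        / sum(1 for ix in indexes if day in ix)
        for day in days
    }
```

Aligning every company on its own announcement day (rather than calendar dates) is what lets a single index summarize breaches that occurred at different times.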

The key takeaway from the analysis is that companies that achieve a strong security posture through investments in people, process and technologies are less likely to see a decline in their stock prices, especially over the long term. Because of their strong security posture, these companies are better able to quickly respond to the data breach. Following are conclusions from this analysis.

  • Following the data breach, companies’ share price declined soon after the incident was disclosed.
  • Companies that self-reported their security posture as superior and quickly responded to the breach event recovered their stock value after an average of 7 days.
  • In contrast, companies that had a poor security posture at the time of the data breach and did not respond quickly to the incident experienced a stock price decline that on average lasted more than 90 days.
  • The difference in the loss of share price between companies with a low security posture and a high security posture averaged 4 percent.
  • Organizations with a poor security posture were more likely to lose customers. In contrast, a strong security posture supports customer loyalty and trust.
  • The 113 companies in our sample that experienced a low customer loss rate (less than 2 percent) had an average revenue loss of $2.67 million. Organizations that lost more than 5 percent of their customers experienced an average revenue loss of $3.94 million.

Other key takeaways

The loss of stock price is not the top concern of CMOs and IT practitioners. Reputation loss due to a data breach is the biggest concern to both IT practitioners and CMOs. Only 20 percent of CMOs and 5 percent of IT practitioners say they would be concerned about a decline in their companies’ stock price. In fact, in organizations that had a data breach, only 5 percent of CMOs and 6 percent of IT professionals say a negative consequence of the breach was a decline in their companies’ stock price.

Thirty-one percent of consumers surveyed say they discontinued their relationship with the company that had a data breach. Of those consumers affected by one or more breaches, 65 percent say they lost trust in the breached organization and more than 31 percent say they discontinued their relationship.

IT practitioners and CMOs both believe a data breach is a top threat to their companies’ reputation and brand value. On a positive note, the majority of IT practitioners (55 percent) and CMOs (58 percent) do believe their companies’ senior-level executives take brand protection seriously.

More CMOs have confidence than IT practitioners in the resilience of their organizations to recover from a data breach involving high value assets. Only 44 percent of IT practitioners believe their organizations are highly resilient to the consequences of a data breach involving high value assets. However, 63 percent of CMOs are confident their company would be resilient to a data breach that results in the loss or theft of high value assets.

More CMOs believe the biggest cost of a security incident is the loss of brand value. Seventy-one percent of CMOs in this study believe the biggest cost of a security incident is the loss of reputation and brand value. In contrast, less than half of IT practitioners (49 percent) see brand diminishment as the biggest cost of a security incident.  

Following a data breach, the IT function comes under greater scrutiny. IT practitioners in organizations that had a data breach consider the most negative consequences of a breach to be greater scrutiny of the capabilities of the IT function (56 percent), significant financial harm (44 percent) and a loss of productivity (40 percent).

IT practitioners do not believe that brand protection is their responsibility. Sixty-six percent of IT respondents do not believe protecting their company’s brand is their responsibility. However, 50 percent of these respondents do believe a material cybersecurity incident or data breach would diminish the brand value of their company.

CMOs allocate more money in their budgets to brand protection than IT does. Thirty-seven percent of CMOs surveyed say a portion of their marketing and communications budget is allocated to brand preservation, and 65 percent of these respondents say their department collaborates with other functions in maintaining the brand. In contrast, only 21 percent of IT practitioners say they allocate a portion of the IT security budget to brand preservation and only 19 percent collaborate with other functions on brand protection. This response is understandable because so many IT practitioners do not believe brand protection is the IT function’s responsibility.

Consumers’ expectation for the security of personal information they share with companies is much higher than CMOs’ and IT practitioners’ expectations. Eighty percent of consumers believe organizations have an obligation to take reasonable steps to secure their personal information. However, only 49 percent of CMOs and 48 percent of IT practitioners agree. The research reveals differences in perceptions between IT practitioners and CMOs on issues regarding reputation and brand management practices. More serious, however, are the gaps between consumers’ expectations and the perceptions of IT practitioners and CMOs about how their personal information should be safeguarded.

CMOs and IT practitioners are less likely to believe their organizations have a responsibility to control access to consumers’ information. While 71 percent of consumers surveyed believe organizations have an obligation to control access to their information, 47 percent of CMOs and 46 percent of IT security practitioners believe this is an obligation.

Consumer trust in certain industries may be misplaced. Eighty percent of consumers say they trust healthcare providers to preserve their privacy and to protect personal information. In contrast, only 26 percent of consumers trust credit card companies. Yet, healthcare organizations account for 34 percent of all data breaches while banking, credit and financial organizations account for only 4.8 percent. Banking, credit and financial industries also spend two-to-three times more on cybersecurity than healthcare organizations.

IT practitioners and CMOs share the same concern about the loss of reputation as the biggest impact after a breach, but after that, the concerns are specific to their function. For CMOs, the impact to reputation is followed by a concern over loss of customers and decline in revenue (76 percent, 55 percent and 46 percent of respondents, respectively). For IT, the two biggest concerns are the loss of their jobs (56 percent of IT respondents) and the loss of productivity during the time to recover (45 percent).

In Congress, Facebook, Twitter take more blame for Russian election meddling, but there’s more coming

Bob Sullivan

We’ve come a long way since Mark Zuckerberg famously said that it was “crazy” to think fake news on Facebook influenced the 2016 election.  How far? Not long ago, Facebook said it had identified only a few thousand suspicious accounts on its service that might have been linked to Russia.  Today, during Congressional testimony, the firm said 126 million people may have seen Russian propaganda on the service.

During a mostly civil hearing before a Senate intelligence committee on Tuesday, Facebook, Twitter and Google used the strongest language yet admitting their services were abused during the election, and vowed to work against further attacks by foreign governments.  The obstacles they face are enormous, however, ranging from the ease of obscuring the origins of such attacks to the problem of “false positives” — tighter controls on content will inevitably infringe on free speech.

Not long ago, Internet firms were content to hide behind their legal designations as agnostic platforms, as opposed to publishers that could be held responsible for content they spread.  The time for that has passed.

“All three companies here…no longer think whatever goes across your platform is not your concern, right?” said Sen. Sheldon Whitehouse (D-R.I.).

Facebook’s general counsel Colin Stretch called the Russian disinformation campaign “reprehensible.” Twitter acting general counsel Sean Edgett said the firm was acting “to ensure that experience of 2016 never happens again.”

Sen. Chris Coons (D-Del.) was unimpressed by the firms’ efforts so far, however.

“Why has it taken Facebook 11 months (to offer this information) when former President Obama cautioned your CEO 9 days after the election?” he asked.

During the hearing, Stretch explained how Russian paid ads were used to drive users toward Facebook pages, which were then used to spread propaganda through the service’s traditional network effects — they were shared and re-shared by users. That’s how a few thousand paid ads could ultimately reach potentially millions of users.

At one point, Coons held up one example — a Facebook page called Heart of Texas that ultimately collected about 225,000 followers.  Ads for the page were purchased in rubles. One Heart of Texas ad said Hillary Clinton was despised by an overwhelming number of veterans, and urged secession if she won the election.

“That ad has no place on Facebook. It makes me angry. It makes everyone on Facebook angry,” Stretch said.

But Sen. Al Franken (D-Minn.) challenged Stretch about why the firm didn’t spot the Russian influence problem sooner.

“These are American political ads (purchased) with Russian money…how could you not connect the dots?” he said. “People are buying ads on your platform with Rubles. You put billions of data points together all the time….You can’t put together rubles with political ads and go, ‘Hmmm. Those two data points spell out something bad.’ ”

“Senator, that’s a signal we should have been alert to and in hindsight, it’s one we missed,” Stretch said.

Twitter was targeted for similar criticism by Sen. Richard Blumenthal (D-Conn.). He held up an ad saying citizens could vote from home, allegedly shown to likely Hillary Clinton voters.  Twitter said the ads were ultimately removed as illegal voter suppression.

“But they kept reappearing,” Blumenthal complained.

Most of the fake Russian ads and posts — something Facebook calls “coordinated inauthentic activity” — were issue-based, the firms said. They didn’t necessarily support a candidate, but instead sought to cause fights among users.  In Internet lingo, it was a sophisticated troll campaign.

“Russia does not have loyalty to a political party. Their goal is to divide us,” Sen. Chuck Grassley (R-Iowa) said.

Much of the hearing focused on the potential for abuse that comes with social media targeting technology, which allows advertisers to be very selective in who sees the ads they purchase.  The tools are tailor-made for micro-targeting propaganda. Blumenthal questioned whether a Russian group could have made micro-targeting decisions without help from political consultants in the U.S., hinting the Russians had help from U.S. agents.

The most chilling part of the hearing occurred after Facebook, Google, and Twitter left, however. Clint Watts, an analyst with the Foreign Policy Research Institute, explained that no single firm could “fully comprehend” the influence that Russians had in 2016 — because Russian propagandists used a holistic plan of attack. A single post on the 4Chan message board would be discussed on Russian-backed Twitter accounts, then spread far and wide on Facebook, then land in news stories on Google, and so on. He called Russia’s 2016 disinformation campaign “the most successful in history,” and said it would certainly be copied.

“The Kremlin playbook will be adopted by others,” he said. Other foreign governments, dark political candidates, and even corporations would copy Russian techniques unless Congress managed to get control of the issue now, he warned.

Cybercrime costs up 23 percent in just two years; firms investing in wrong technologies

Larry Ponemon

Over the last two years, the accelerating cost of cyber crime means that it is now 23 percent more than last year and is costing organizations, on average, US$11.7 million. Whether managing incidents themselves or spending to recover from the disruption to the business and customers, organizations are investing on an unprecedented scale—but current spending priorities show that much of this is misdirected toward security capabilities that fail to deliver the greatest efficiency and effectiveness.

A better understanding of the cost of cyber crime could help executives bridge the gap between their own defenses and the escalating creativity—and numbers— of threat actors. Alongside the increased cost of cyber crime—which runs into an average of more than US$17 million for organizations in industries like Financial Services and Utilities and Energy—attackers are getting smarter. Criminals are evolving new business models, such as ransomware-as-a-service, which mean that attackers are finding it easier to scale cyber crime globally.

With cyber attacks on the rise, the number of successful breaches per company each year has risen more than 27 percent, from an average of 102 to 130. Ransomware attacks alone have doubled in frequency, from 13 percent to 27 percent, with incidents like WannaCry and Petya affecting thousands of targets and disrupting public services and large corporations across the world.

One of the most significant data breaches in recent years has been the successful theft of 143 million customer records from Equifax—a consumer credit reporting agency—a cyber crime with devastating consequences due to the type of personally identifiable information stolen and its knock-on effect on the credit markets. Information theft of this type remains the most expensive consequence of a cyber crime. Among the organizations we studied, information loss represents the largest cost component, with a rise from 35 percent in 2015 to 43 percent in 2017. It is this threat landscape that demands organizations reexamine their investment priorities to keep pace with more sophisticated and highly motivated attacks.

To better understand the effectiveness of investment decisions, we analyzed nine security technologies across two dimensions: the percentage spending level on each and its value in terms of cost savings to the business. The findings illustrate that many organizations may be spending too much on the wrong technologies. Five of the nine security technologies had a negative value gap, where the percentage spending level is higher than the relative value to the business. Of the remaining four technologies, three had a significant positive value gap and one was in balance. So, while maintaining the status quo on advanced identity and access governance, the opportunity exists to evaluate potential over-spend in areas with a negative value gap and rebalance these funds by investing in the breakthrough innovations that deliver positive value.
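The value-gap comparison described above can be sketched as a simple calculation: each technology's share of total cost savings minus its share of total spend, where a negative result means spend share exceeds value share. This is an illustrative reconstruction, not the study's exact formula; the technology names and dollar figures in any example are placeholders, not the report's data.

```python
# Illustrative "value gap" calculation: for each security technology, compare
# its share of total spend against its share of total cost savings.
# A negative gap flags a candidate for reallocating investment.

def value_gaps(spend, savings):
    """spend, savings: dicts mapping technology name -> dollar amount.
    Returns technology -> (value share minus spend share), in percentage
    points, rounded to one decimal place."""
    total_spend = sum(spend.values())
    total_savings = sum(savings.values())
    return {
        tech: round(100.0 * (savings[tech] / total_savings
                             - spend[tech] / total_spend), 1)
        for tech in spend
    }
```

Under this framing, a technology like advanced perimeter controls with a high spend share but mid-ranked savings would show a negative gap, while under-deployed analytics with strong savings would show a positive one.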

Following on from the first Cost of Cyber Crime report launched in the United States eight years ago, this study, undertaken by the Ponemon Institute and jointly developed by Accenture, analyzed 2,182 interviews across 254 companies in seven countries: Australia, France, Germany, Italy, Japan, the United Kingdom and the United States. We aimed to quantify the economic impact of cyber attacks and observe cost trends over time to offer practical guidance on how organizations can stay ahead of growing cyber threats.

HIGHLIGHTS FROM THE FINDINGS INCLUDE:

Security intelligence systems (67 percent) and advanced identity and access governance (63 percent) are the two most widely deployed enabling security technologies across the enterprise. They also deliver the highest positive value gap, with organizational cost savings of US$2.8 million and US$2.4 million respectively. As the threat landscape constantly evolves, these investments should be monitored closely so that spend stays at an appropriate level and maintains effective outcomes. Aside from systems and governance, other investments show a lack of balance. Of the nine security technologies evaluated, the highest percentage spend was on advanced perimeter controls. Yet the cost savings associated with technologies in this area ranked only fifth overall, with a negative value gap of minus 4. Clearly, an opportunity exists here to assess spending levels and potentially reallocate investments to higher-value security technologies.

Spending on governance, risk and compliance (GRC) technologies is not a fast-track to increased security. Enterprise-wide deployment of GRC technology and automated policy management showed the lowest effectiveness in reducing cyber crime costs (9 percent and 7 percent respectively) out of nine enabling security technologies. So, while compliance technology is important, organizations must spend to a level that is appropriate to achieve the required capability and effectiveness, enabling them to free up funds for breakthrough innovations.

Innovations are generating the highest returns on investment, yet investment in them is low. For example, the two enabling security technology areas identified as “Extensive use of cyber analytics and User Behavior Analytics (UBA)” and “Automation, orchestration and machine learning” were the lowest ranked technologies for enterprise-wide deployment (32 percent and 28 percent respectively), and yet they provide the third and fourth highest cost savings among security technologies. By rebalancing investments from less rewarding technologies into these breakthrough innovation areas, organizations could improve the effectiveness of their security programs.

RECOMMENDATIONS
The foundation of a strong and effective security program is to identify and “harden” the higher-value assets. These are the “crown jewels” of a business— the assets most critical to operations, subject to the most stringent regulatory penalties, and the source of important trade secrets and market differentiation. Hardening these assets makes it as difficult and costly as possible for adversaries to achieve their goals, and limits the damage they can cause if they do obtain access.

By taking the following three steps, organizations can further improve the effectiveness of their cybersecurity efforts to fend off and reduce the impact of cyber crime:

  • Invest in the “brilliant basics” such as security intelligence and advanced access management, while recognizing the need to innovate to stay ahead of the hackers.
  • Do not rely on compliance alone to enhance your security profile; undertake extreme pressure testing to identify vulnerabilities more rigorously than even the most highly motivated attacker.
  • Balance spend on new technologies, specifically analytics and artificial intelligence, to enhance program effectiveness and scale value.

Organizations need to recognize that spending alone does not always equate to value. Beyond prevention and remediation, if security fails, companies face unexpected costs from not being able to run their businesses efficiently enough to compete in the digital economy. Knowing which assets must be protected, and what the consequences will be for the business if protection fails, requires an intelligent security strategy that builds resilience from the inside out and an industry-specific strategy that protects the entire value chain. As this research shows, making wise security investments can help to make a difference.

To learn more about the study, visit Accenture.com

 

Q: Why would anyone at Equifax have access to 143 million SSNs? A: Greed

Click for Beyond Trust Five Deadly Sins white paper.

Bob Sullivan

There are lots of juicy details about the Equifax hack in a story published today by Bloomberg. It makes the strongest case yet that the massive heist of American SSNs was probably pulled off by a nation-state. That’s likely true about the huge theft of federal employee data back in 2015, also, so it’s not a surprise.

One thing has been gnawing at me from the beginning about Equifax, however, and it should be gnawing at you, too: Why would anyone, anywhere, have access to 143 million Social Security numbers?

What business use would there ever be at a place like Equifax to access a database like that, or to access various data files and put them together?

The answer is: There isn’t one.

Equifax was never going to put money into each of our Social Security “accounts.”  It should never have even contemplated something like a mass mailing to every American that required our SSNs.  CEO Richard Smith was never going home at night and reading a “book” of American personal identification just to understand his business from a holistic point of view.

Nope. I can’t think of a reason. Well, except laziness and arrogance.

Bloomberg’s story provides food for thought on this count. It cites a LinkedIn post by Steve VanWieren, an executive who left Equifax in January 2012.

“It bothered me how much access just about any employee had to the personally identifiable attributes. I would see printed credit files sitting near shredders, and I would hear people speaking about specific cases, speaking aloud consumer’s personally identifiable information,” the post reads.  VanWieren was describing incidents at least five years old, as he left the firm in 2012. Still, they clearly paint the same picture I am painting.

Too many privileges!

One basic premise of modern security is limiting employees to only those resources they need to do their jobs.  And when those jobs are over, the access must be cut off. For example, desktop support doesn’t need access to human resource files, unless there’s a specific problem — and when there is, access to salary data, etc., should be as limited and temporary as possible.  Access permitted on a need-to-know basis, and no more.
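The need-to-know principle described above can be sketched as a small access-policy object: standing access is granted per role and per resource, and anything beyond that is a temporary grant that expires automatically. This is a hypothetical illustration of the concept, not any real product's API; the class, roles, and resource names are invented for the example.

```python
# Minimal sketch of least-privilege access: standing permissions are scoped
# to a role, and exceptional access (e.g. desktop support fixing an HR
# system issue) is a time-boxed grant that lapses on its own.
import time

class AccessPolicy:
    def __init__(self):
        self._standing = {}   # role -> set of resources always allowed
        self._temporary = {}  # (user, resource) -> expiry timestamp

    def allow_role(self, role, resource):
        """Standing, need-to-know access for everyone holding this role."""
        self._standing.setdefault(role, set()).add(resource)

    def grant_temporary(self, user, resource, seconds):
        """Exceptional access that expires automatically; no revocation
        step to forget."""
        self._temporary[(user, resource)] = time.time() + seconds

    def is_allowed(self, user, role, resource, now=None):
        now = time.time() if now is None else now
        if resource in self._standing.get(role, set()):
            return True
        expiry = self._temporary.get((user, resource))
        return expiry is not None and now < expiry
```

The design choice worth noting is that temporary grants carry their expiry with them, so forgetting to revoke access, the failure mode described above, simply cannot happen.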

Managing privileges is annoying, but it works.  Morey Haber, vice president of technology at security firm Beyond Trust, recently told me that fully 94 percent of vulnerabilities require administrative rights on targeted machines.  So, no admin rights, no problem.

Back to Equifax.  Who ever created an architecture that would allow anyone to peek at, let alone remove, 143 million SSNs? What account had the rights to do that? Why?

BeyondTrust recently tied up a bunch of security principles in a tidy narrative it called “Five Deadly Sins that Increase the Risks of a Data Breach.”  It includes Envy, Pride, Ignorance, and Apathy.  But I suspect the real blame for the Equifax hack is the first sin:

Greed.

Greedy people, in the security sense, need access to as much data and resources as they can get. And when they get it, they don’t want to give it up.  In the tech world, privileges are like the old workplace concept of “turf.”  Heaven help someone trying to get a worker to give up tech turf.

I asked Haber about the role of greed in the Equifax case. He speculated that one could imagine a marketing use for pulling together that massive Equifax database, but even then, that data should be obfuscated immediately.

“Obviously, (someone) had to have full access to all that data,” he said. “There was no reason to.”

And now, a hacker — perhaps even a nation-state — has access to all that data. Forever.

VanWieren’s comments pretty much make the case here.  Clearly, a wide selection of employees had access to far more than “need-to-know” data. It was standard operating procedure.

Your workplace is probably like this, too.  Greed is common, but despite what you may have heard in the movies, it’s not good. Why is that? In part, Haber said, it’s because employees react very emotionally to having their network privileges restricted, and even worse to having them revoked.

“(It can be) like taking away someone’s guns,” he said.  Tech workers are used to having admin rights and “Doing what I want to do.”

The time for accommodating such greed is over, he warned.

“We live in a different set of times now,” he said. “We have to rethink how to be safe.”

Declining confidence in IT, but still reasons for optimism

Larry Ponemon

Public sector organizations are feeling the pains of digital transformation. Faced with modernization, data center upgrades and continuous cloud-first initiatives, this transformation of the IT environment is making it a challenge to deliver services, comply with service level agreements (SLAs), meet citizens’ expectations and achieve organizational missions.

The evidence is clear from a research study conducted by the Ponemon Institute and sponsored by Splunk of 736 decision makers in federal and Department of Defense IT operations.

Challenges & Trends in Federal & Department of Defense: the United States reveals that digital transformation is well underway with budgets shifting from traditional on premise investments to more cloud and agile development paradigms.

This shift in the IT environment, while being embraced, has led to an overall loss of confidence in federal and DoD operations and is evidenced in respondents’ lack of confidence in their organizations’ ability to accomplish the following:

  • Have the people with the right skills to “get the job done”
  • Ensure performance and availability of systems to meet SLAs consistently
  • Manage data center upgrades
  • Perform IT operations efficiently
  • Migrate workloads and applications to the cloud

Research findings explain the reasons for the loss of confidence. These include skills gap among existing resources, according to 71 percent of respondents. Respondents also cite silos of IT systems and technologies and an inability to integrate them (71 percent of respondents) and complexity and diversity of IT systems and technology (67 percent of respondents).

Even with monitoring and data analytics in place, these tools are disconnected from each other and most respondents believe they are ineffective at helping quickly pinpoint issues and determine root cause (78 percent of respondents) because they do not offer end-to-end visibility.

Respondents also say that a lack of collaboration across teams and not enough data fidelity and
context are challenges to timely issue resolutions. Such challenges also affect organizations’ ability to quickly and efficiently respond to system outages and interruptions.

On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage or interruption.

Despite the loss of confidence, respondents do see a silver lining in the transformation of their IT operations. According to respondents, the move to DevOps (development and operations) is making it easier to deliver quality services on time and within budget. To support the transformation, organizations are shifting spending from on premise to cloud computing, DevOps and new technologies.

Respondents also recognize that machine learning capabilities (27 percent), better network visibility across the entire organization (26 percent) and better enforcement of current policies and regulations (26 percent) can improve their organizations’ IT operations. Respondents are also increasingly aware of the types of data available and how such data can be used across
operational silos to reduce risks to their organizations.

Following are key findings from this research:

Confidence in current IT operations is lower than it was 12 months ago. The primary reasons for this change are not having the staff with the right skills “to get the job done,” the inability to ensure performance and availability of systems to meet SLAs and the inability to manage data center upgrades.

The confidence gap seems to stem from a skills gap, silos and complexity. Respondents believe that the greatest difficulties in carrying out their duties arise from a skills gap among existing resources (71 percent of respondents), silos of IT systems and technologies and a lack of ability to integrate them (71 percent of respondents) and complexity and diversity of IT systems and technology (67 percent of respondents).

Machine learning capabilities, visibility and enforcement of policies are seen as critical to improving IT operations. Out of a list of five options of the most effective way to strengthen IT operations, 27 percent of respondents believe that machine learning capabilities would be most effective. Better network visibility across the entire organization and better enforcement of current policies and regulations would strengthen IT operations (both 26 percent of respondents).

Spending on cloud operations and DevOps will grow significantly while on-premise spending dwindles. Almost one-half of respondents (49 percent) say spending on cloud operations will grow over the next year, and 48 percent say the same about DevOps, while only 31 percent say that on-premise spending will grow.

Alerts still remain too numerous and erroneous, and current event monitoring tools are not solving the problem. More than half of respondents say they still receive too many alerts (52 percent) and that those alerts generate too many false positives (55 percent). Seventy-eight percent of respondents are unsure or do not think that their current crop of analytics and monitoring tools are helping them pinpoint problems and determine root causes because they lack end-to-end visibility.

The challenges and risks described in this research result in inefficient response to system outages and interruptions. According to 65 percent of respondents, their organizations lack a consistent and formal IT outage response process. On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage and interruption.

Will IT and security converge? Nearly three-quarters of respondents (73 percent) do not believe or are unsure whether their security and IT operations will converge in the future.

Is it possible to use the same data sets across the organization to solve problems? Sixty-four percent of respondents are unsure or don’t think the data sets they are using can solve multiple challenges such as IT troubleshooting, service monitoring, security and business/mission analytics. Similarly, 66 percent of respondents are doubtful the same data can be used throughout the organization.

For the full study, click here: https://www.splunk.com/en_us/resources/public-sector/ponemon-research.html 

‘Nothing … in the way except motivation’ — Report claims hackers have penetrated deep into energy sector networks

Bob Sullivan

It started off as a fake invitation to a New Year’s Eve party, emailed to energy sector employees. It ended with hackers taking screen shots of power grid control computer screens. Well, we can only hope it ended there.

Symantec Corporation released an alarming report this week claiming that a group of power grid hackers it calls “Dragonfly 2.0” has made its most successful raid into critical infrastructure computers in the U.S. and around the world.

“The energy sector in Europe and North America is being targeted by a new wave of cyber attacks that could provide attackers with the means to severely disrupt affected operations,” Symantec wrote in its report.

In a chilling statement to Wired, Symantec’s Eric Chien said the incident means the intruders are, at the moment, capable of causing disruptions and power outages as they wish. They are just waiting for the right moment.

“There’s a difference between being a step away from conducting sabotage and actually being in a position to conduct sabotage … being able to flip the switch on power generation,” Chien said. “We’re now talking about on-the-ground technical evidence this could happen in the US, and there’s nothing left standing in the way except the motivation of some actor out in the world.”

Security researchers have been watching Dragonfly for years, claiming the group has been probing energy sector machines since at least 2011. Symantec says it went dark until a reemergence in late December 2015, when the New Year’s Eve party invite went out. There was “a distinct increase in activity in 2017,” Symantec said.

“The Dragonfly 2.0 campaigns show how the attackers may be entering into a new phase, with recent campaigns potentially providing them with access to operational systems, access that could be used for more disruptive purposes in future,” according to the report.

Symantec doesn’t say where Dragonfly is from — and its report shows the hackers might be intentionally trying to confuse investigators.  But late last year, the Department of Homeland Security claimed Dragonfly’s origins were Russian, and that it was one of several groups working to “compromise and exploit networks and endpoints associated with the U.S. election, as well as a range of U.S. Government, political, and private sector entities.”

Symantec says the most concerning evidence found during its analysis were the screen captures.

“In one particular instance the attackers used a clear format for naming the screen capture files, [machine description and location].[organization name]. The string “cntrl” (control) is used in many of the machine descriptions, possibly indicating that these machines have access to operational systems,” it said.

Symantec links the initial hacker campaign to this more recent spate of attacks because there are similarities in the malware used. The Dragonfly campaigns that began in 2011 “now appear to have been a more exploratory phase,” Symantec said.

“What (the group) plans to do with all this intelligence has yet to become clear, but its capabilities do extend to materially disrupting targeted organizations should it choose to do so,” the firm claims.

Omer Schneider, CEO and co-founder of security firm CyberX, said this type of attack is inevitable.

“Why is everyone so surprised?” Schneider said. “As early as 2014, the ICS-CERT warned that adversaries had penetrated our control networks to perform cyber-espionage. Over time the adversaries have gotten even more sophisticated and now they’ve stolen credentials that give them direct access to control systems in our energy sector. If I were a foreign power, this would be a great way to threaten the US while I invade other countries or engage in other aggressive actions against US allies.”

Cost of a data breach, 2017 — $225 per record lost, an all-time high

Larry Ponemon

IBM Security and Ponemon Institute are pleased to present the 2017 Cost of Data Breach Study: United States, our 12th annual benchmark study on the cost of data breach incidents for companies located in the United States. The average cost for each lost or stolen record containing sensitive and confidential information increased from $221 to $225. The average total cost experienced by organizations over the past year increased from $7.01 million to $7.35 million. To date, 572 U.S. organizations have participated in the benchmarking process since the inception of this research.

Ponemon Institute conducted its first Cost of Data Breach Study in the United States 12 years ago. Since then, we have expanded the study to include the following countries and regions:

  • The United Kingdom
  • Germany
  • Australia
  • France
  • Brazil
  • Japan
  • Italy
  • India
  • Canada
  • South Africa
  • The Middle East (including the United Arab Emirates and Saudi Arabia)
  • ASEAN region (including Singapore, Indonesia, the Philippines and Malaysia)

The 2017 study examines the costs incurred by 63 U.S. companies in 16 industry sectors after those companies experienced the loss or theft of protected personal data and the notification of breach victims as required by various laws. It is important to note that costs presented in this research are not hypothetical but are from actual data-loss incidents. They are based upon cost estimates provided by individuals we interviewed over a 10-month period in the companies that are represented in this research.

The number of breached records per incident this year ranged from 5,563 to 99,500 records. The average number of breached records was 28,512. We did not recruit organizations that have data breaches involving more than 100,000 compromised records. These incidents are not indicative of data breaches most organizations incur. Thus, including them in the study would have artificially skewed the results.

Why the cost of data breach fluctuates across countries

What explains the significant increases in the cost of data breach this year for organizations in the Middle East, the United States and Japan? In contrast, how did organizations in Germany, France, Australia, and the United Kingdom succeed in reducing the costs to respond to and remediate the data breach? Understanding how the cost of data breach is calculated will explain the differences among the countries in this research.

For the 2017 Cost of Data Breach Study: Global Overview, we recruited 419 organizations in 11 countries and two regions to participate in this year’s study. More than 1,900 individuals who are knowledgeable about the data breach incident in these 419 organizations were interviewed. The first data points we collected from these organizations were: (1) how many customer records were lost in the breach (i.e. the size of the breach) and (2) what percentage of their customer base did they lose following the data breach (i.e. customer churn). This information explains why the costs increase or decrease from the past year.

In the course of our interviews, we also asked questions to determine what the organization spent on activities for the discovery of and the immediate response to the data breach, such as forensics and investigations, and those conducted in the aftermath of discovery, such as the notification of victims and legal fees. A list of these activities is shown in Part 3 of this report. Other issues covered that may have an influence on the cost are the root causes of the data breach (i.e. malicious or criminal attack, insider negligence or system glitch) and the time to detect and contain the incident.

It is important to note that only events directly relevant to the data breach experience of the 419 organizations represented in this research and discussed above are used to calculate the cost. For example, new regulations, such as the General Data Protection Regulation (GDPR), ransomware and cyber attacks, such as Shamoon, may encourage organizations to increase investments in their governance practices and security-enabling technologies but do not directly affect the cost of a data breach as presented in this research.

The following are the most salient findings and implications for organizations:

The cost of data breach sets a record high. According to this year’s benchmark findings, data breaches cost companies an average of $225 per compromised record – of which $146 pertains to indirect costs, including abnormal turnover or churn of customers, and $79 represents the direct costs incurred to resolve the data breach, such as investments in technologies or legal fees.

The total average organizational cost of data breach reaches a new high. This year, we record the highest average total cost of data breach at $7.35 million. Prior to this year’s research, the most costly breach occurred in 2011 when companies spent an average of $7.24 million. In 2013, companies experienced the lowest total data breach cost at $5.40 million.

Measures reveal why the cost of data breach increases. The average total cost of data breach increased 4.7 percent, the average per capita cost increased by 1.8 percent and abnormal churn of existing customers increased 5 percent. In the context of this paper, abnormal churn is defined as a greater-than-expected loss of customers in the normal course of business. In contrast, the average size of a data breach (number of records lost or stolen) decreased 1.9 percent.

Certain industries have higher data breach costs. Heavily regulated industries such as health care ($380 per capita) and financial services ($336 per capita) had per capita data breach costs well above the overall mean of $225. In contrast, public sector organizations ($110 per capita) had a per capita cost of data breach below the overall mean.

Malicious or criminal attacks continue to be the primary cause of data breach. Fifty-two percent of incidents involved a malicious or criminal attack, 24 percent of incidents were caused by negligent employees, and another 24 percent were caused by system glitches, including both IT and business process failures.

Malicious attacks are the costliest. Organizations that had a data breach due to malicious or criminal attacks had a per capita data breach cost of $244, which is significantly above the mean. In contrast, system glitches or human error as the root cause had per capita costs below the mean ($209 and $200 per capita, respectively).

Four new factors are in this year’s cost analysis. The following factors that influence data breach costs were added to this year’s study: (1) compliance failures, (2) the extensive use of mobile platforms, (3) CPO appointment and (4) the use of security analytics. The use of security analytics reduced the per capita cost of data breach by $7.70 and the appointment of a CPO reduced the cost by $4.30. However, extensive use of mobile platforms at the time of the breach increased the cost by $6.50 and compliance failures increased the per capita cost by $19.30.
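As a back-of-the-envelope illustration, the study’s per-record figures can be combined like this. This is only a sketch: it assumes the factor impacts are additive, which the study does not state, and the function name is ours, not the report’s.

```python
# Illustrative only: applies the study's reported per-capita cost
# adjustments to the $225 U.S. average per-record cost.
BASE_COST_PER_RECORD = 225.0

# Dollar impact per record for each of the four new factors (from the study)
FACTOR_IMPACT = {
    "security_analytics": -7.7,
    "cpo_appointment": -4.3,
    "extensive_mobile_platforms": +6.5,
    "compliance_failures": +19.3,
}

def adjusted_cost(factors_present):
    """Estimate per-record cost given which factors applied at breach time."""
    return BASE_COST_PER_RECORD + sum(FACTOR_IMPACT[f] for f in factors_present)

# A breach at a company using security analytics but with compliance failures:
print(round(adjusted_cost(["security_analytics", "compliance_failures"]), 2))  # 236.6
```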

The more records lost, the higher the cost of data breach. This year, companies with data breaches involving fewer than 10,000 records had an average total cost of data breach of $4.5 million, and companies with the loss or theft of more than 50,000 records had a cost of data breach of $10.3 million.

The more churn, the higher the cost of data breach. Companies that experienced less than 1 percent churn, or loss of existing customers, had an average total cost of data breach of $5.3 million, and those that experienced churn greater than 4 percent had an average total cost of data breach of $10.1 million.

Certain industries are more vulnerable to churn. Financial, life science, health, technology and service organizations experienced a relatively high abnormal churn rate, while public sector and entertainment organizations experienced a relatively low abnormal churn rate.

Detection and escalation costs are at a record high. These costs include forensic and investigative activities, assessment and audit services, crisis team management, and communications to executive management and board of directors. Average detection and escalation costs increased dramatically from $0.73 million to $1.07 million, suggesting that companies are investing more heavily in these activities.

Notification costs increase slightly. Such costs typically include IT activities associated with the creation of contact databases, determination of all regulatory requirements, engagement of outside experts, postal expenditures, secondary mail contacts or email bounce-backs and inbound communication set-up. This year’s average notification costs increased slightly from $0.59 million in 2016 to $0.69 million in this year’s study.

Post data breach costs decrease. Such costs typically include help desk activities, inbound communications, special investigative activities, remediation activities, legal expenditures, product discounts, identity protection services and regulatory interventions. These costs decreased from $1.72 million in 2016 to $1.56 million in this year’s study.

Lost business costs increase. Such costs include the abnormal turnover of customers, customer acquisition activities, reputation losses and diminished goodwill. The current year’s cost increased from $3.32 million in 2016 to $4.03 million. The highest lost business cost over the past 12 years was $4.59 million in 2009.

Companies continue to spend more on indirect per capita costs than direct per capita costs. Indirect costs include the time employees spend on data breach notification efforts or investigations of the incident. Direct costs refer to what companies spend to minimize the consequences of a data breach and assist victims. These costs include engaging forensic experts to help investigate the data breach, hiring a law firm and offering victims identity protection services. This year, the indirect costs were $146 and direct costs were $79.

The time to identify and contain data breaches impacts costs. In this year’s study, it took companies an average of 206 days to detect that an incident occurred and an average of 55 days to contain the incident. If the mean time to identify (MTTI) was less than 100 days, the average cost was $5.99 million. However, if the mean time to identify was greater than 100 days, the cost rose significantly to $8.70 million. If the mean time to contain (MTTC) the breach was less than 30 days, the average cost was $5.87 million. If it took 30 days or longer, the cost rose significantly to $8.83 million.
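The thresholds above can be expressed as a simple lookup. This is illustrative only: the 100-day and 30-day cutoffs and dollar figures are the study’s reported averages, and the function names are ours.

```python
# Average total breach cost ($M) by detection and containment speed,
# using the thresholds and averages reported in the 2017 study.
def avg_cost_by_mtti(days_to_identify):
    """Average total cost ($M) by mean time to identify (MTTI)."""
    return 5.99 if days_to_identify < 100 else 8.70

def avg_cost_by_mttc(days_to_contain):
    """Average total cost ($M) by mean time to contain (MTTC)."""
    return 5.87 if days_to_contain < 30 else 8.83

# This year's averages (206 days to identify, 55 to contain) both fall
# on the expensive side of each threshold:
print(avg_cost_by_mtti(206), avg_cost_by_mttc(55))  # 8.7 8.83
```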

To read the full report, click here. 

Disney, Viacom child privacy lawsuits try novel legal theory

Bob Sullivan

A California mom is suing Disney and some of its software partners for allegedly collecting personal information about her kids through mobile phone game apps. I was on the TODAY show this week talking about it.

Within days, the same mom also sued Viacom.

There’s a novel legal argument in these cases that I’m going to watch with great interest; an “intrusion upon seclusion” claim that I hadn’t seen before.  If the mom — and potentially others, if class-action status is granted — succeed at winning such a claim and collecting damages, it could open doors to a new kind of privacy lawsuit.

The Disney allegations, which the firm denies, are what you’d expect.  The suit claims Disney software places unique identifiers on mobile phones which can track app users — both in and out of game play — so Disney’s partners can serve targeted advertising.  You can expect the usual debate about what constitutes personal information.  Corporations that want to target ads usually claim they anonymize such data. Privacy advocates say that’s bunk. With just a few data points, people can be pretty precisely identified.

Federal law — the Children’s Online Privacy Protection Act, or COPPA — has strict rules about what can be collected from kids under 13.  The Federal Trade Commission has weighed in on the issue, making clear that unique identifiers fall under COPPA, meaning they generally shouldn’t be used or collected when kids are involved.

The lawsuit claims Disney and its partners violated COPPA, but that doesn’t really get her far. COPPA does not provide a “private right of action.”  Consumers can’t sue “under COPPA” and get anything; they can merely ask a federal agency (the FTC) to fine the violator.

So lawyers in the case have seized upon the “intrusion upon seclusion” tort.  From what I can tell, this legal strategy is generally used when someone’s physical space is violated — as in sneaking into a home or hotel room.  It has been used in previous digital privacy cases, however, said Douglas I. Cuthbertson, a lawyer at the firm pressing the case. He cited invasion of privacy cases involving Vizio (Smart TVs) and Nickelodeon (Tracking videos watched; click for more). Both recently survived dismissal motions. It remains to be seen how much the cases are worth to plaintiffs, however.

According to Harvard’s publication of the American Law Institute’s guide to torts, here’s what “Intrusion Upon Seclusion” requires:

“The invasion may be by physical intrusion into a place in which the plaintiff has secluded himself, as when the defendant forces his way into the plaintiff’s room in a hotel or insists over the plaintiff’s objection in entering his home. It may also be by the use of the defendant’s senses, with or without mechanical aids, to oversee or overhear the plaintiff’s private affairs, as by looking into his upstairs windows with binoculars or tapping his telephone wires. It may be by some other form of investigation or examination into his private concerns, as by opening his private and personal mail, searching his safe or his wallet, examining his private bank account, or compelling him by a forged court order to permit an inspection of his personal documents.”

The four-pronged test to succeed in such a case, according to the Digital Media Law Project, involves:

  • First, that the defendant, without authorization, must have intentionally invaded the private affairs of the plaintiff;
  • Second, the invasion must be offensive to a reasonable person;
  • Third, the matter that the defendant intruded upon must involve a private matter; and
  • Finally, the intrusion must have caused mental anguish or suffering to the plaintiff.

In the Disney lawsuit, plaintiff’s lawyers use the alleged COPPA violation to establish that the data collection is offensive, and to pass several of those tests.

Eduard Goodman, global privacy officer at security firm Cyberscout, says he’s seen the intrusion upon seclusion legal strategy deployed in data breach lawsuits before.  But that fourth prong of the test is the trickiest to meet. (Note: I am sometimes paid to write freelance stories for Cyberscout)

“The problem, as with most all privacy torts in the U.S., what is the harm and damage here,” he said. Damages and financial compensation for torts like causing injury in a car accident are well established. What’s the harm in collecting someone’s personal data?  That’s yet to be determined.

 

Almost four times more budget is being spent on property-related risks vs. cyber risk

Larry Ponemon

This unique cyber study found a serious disconnect in risk management. What’s interesting is that the majority of companies cover plant, property and equipment losses, insuring an average of 59 percent and self-insuring 28 percent. Cyber is almost the opposite, as companies are insuring an average of 15 percent and self-insuring 59 percent.

The purpose of this research is to compare the relative insurance protection of certain tangible versus intangible assets. How do cyber asset values and potential losses compare to tangible asset values and potential losses from an organization’s other perils, such as fires and weather?

The probability of any particular building burning down is significantly lower than one percent (1%). However, most organizations spend much more on fire-insurance premiums than on cyber insurance despite stating in their publicly disclosed documents that a majority of the organization’s value is attributed to intangible assets. One recent concrete example is the sale of Yahoo!: Verizon recently reduced the purchase price by $350 million because of the severity of cyber incidents in 2013 and 2014.

Acceleration in the scope, scale and economic impact of technology multiplied by the concomitant data revolution, which places unprecedented amounts of information in the hands of consumers and businesses alike, and the proliferation of technology-enabled business models, force organizations to examine the benefits and consequences of emerging technologies.

This financial-statement quantification study demonstrates that organizations recognize the growing value of technology and data assets relative to historical tangible assets, yet a disconnect remains regarding cost-benefit analysis resource allocation. Particularly, a disproportionate amount is spent on tangible asset insurance protection compared to cyber asset protection based on the respective relative financial statement impact and potential expected losses.

Quantitative models are being developed that evaluate the return on investment of various cyber risk management IT security and process solutions, which can incorporate cost-benefit analysis for different levels of insurance. As such, organizations are driven toward a holistic capital expenditure discussion spanning functional teams rather than being segmented in traditional silos. The goal of these models is to identify and protect critical assets by aligning macro-level risk tolerance more consistently.

How do organizations qualify and quantify the corresponding impact of financial statement exposure? Our goal is to compare the financial statement impact of tangible property and network risk exposures. A better understanding of the relative financial statement impact will assist organizations in allocating resources and determining the appropriate amount of risk transfer (insurance) resources to allocate to the mitigation of the financial statement impact of network risk exposures.

Network risk exposures can broadly include breach of privacy and security of personally identifiable information, stealing an organization’s intellectual property, confiscating online bank accounts, creating and distributing viruses on computers, posting confidential business information on the Internet, robotic malfunctions and disrupting a country’s critical national infrastructure.

We surveyed 709 individuals in North America involved in their company’s cyber risk management as well as enterprise risk management activities. Most respondents are either in finance, treasury and accounting (34 percent of respondents) or risk management (27 percent of respondents). Other respondents are in corporate compliance/audit (13 percent of respondents) and general management (12 percent of respondents).

All respondents are familiar with the cyber risks facing their company. In the context of this research, cyber risk means any risk of financial loss, disruption or damage to the reputation of an organization from some sort of failure of its information technology systems.

Despite the greater average potential loss to information assets ($1,020 million) compared to Property, Plant & Equipment (PP&E) ($843 million), the latter has much higher insurance coverage (62 percent vs. 16 percent).
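The disconnect can be made concrete with a rough calculation. This is an illustrative simplification that applies the survey’s average coverage percentages directly to the average potential losses; the variable names are ours.

```python
# Rough comparison of insured exposure implied by the survey's averages.
ppe_potential_loss_m = 843    # $M, Property, Plant & Equipment
info_potential_loss_m = 1020  # $M, information assets

ppe_coverage = 0.62           # share of PP&E exposure insured
info_coverage = 0.16          # share of information-asset exposure insured

# Despite the larger potential loss, far less of the cyber exposure is insured:
print(f"PP&E insured exposure: ${ppe_potential_loss_m * ppe_coverage:,.0f}M")
print(f"Info insured exposure: ${info_potential_loss_m * info_coverage:,.0f}M")
```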

Following are some of the key takeaways from this research:

  • Information assets are underinsured against theft or destruction based on the value, probable maximum loss (PML) and likelihood of an incident.
  • Disclosure of a material loss of PP&E and disclosure of information assets differ. Forty-five percent of respondents say their company would disclose the loss of PP&E in its financial statements as a footnote disclosure. However, 34 percent of respondents say a material loss to information assets does not require disclosure.
  • Despite the risk, companies are reluctant to purchase cyber insurance coverage. Sixty-four percent of respondents believe their company’s exposure to cyber risk will increase over the next 24 months. However, only 30 percent of respondents say their company has cyber insurance coverage.
  • Fifty-six percent of companies represented in this study experienced a material or significantly disruptive security exploit or data breach one or more times during the past two years, with an average economic impact of $4.4 million.
  • Eighty-nine percent of respondents believe cyber liability is one of the top 10 business risks for their company.

To read the full report, click here. 

 

 

What’s really scary about Petya ‘ransomware’ attack? It might not be ransomware

Bob Sullivan

The recent “ransomware” computer virus outbreak is over, but the speculation is just beginning. And it begins with those quotes around the term ransomware.


In late June, organizations in 64 countries around the globe, according to Microsoft, found themselves beating back a virus that’s been variously named Petya, GoldenEye, or even “NotPetya.”  Infected computers suffered devastating attacks that disabled the machines and made files useless — encrypted, with instructions for paying a ransom, in typical ransomware fashion.

But there was something very atypical about this attack.  The program itself was very sophisticated — far more sophisticated than WannaCry, last month’s most famous virus attack. Petya stole login credentials. It spread itself in multiple ways, meaning many folks who thought they were patched against Petya were not safe from it.  Microsoft’s analysis of the malware shows how much effort was put into the crafting of the program.

But the ransom payout mechanism was as fragile as a single email address. That was disabled almost immediately, meaning victims couldn’t contact the virus writers to save their files.

That makes no sense. So much work on the software, so little work on the ‘revenue’ side — unless Petya wasn’t really about stealing money. Plenty of security experts have alighted on this theory.

Kaspersky Labs was most assertive in its analysis: it refused to call the malware ransomware, saying it was designed only to destroy data, not to raise money.

“This malware campaign was not designed as a ransomware attack for financial gain. Instead, it appears it was designed as a wiper pretending to be ransomware,” Kaspersky wrote on its SecureList.com site.

Other analysts came to much the same conclusion.

“The attackers behind the NotPetya had to know that they were making it very difficult for anyone to actually get their files back.  Specifically, they provided just a single email address for victims to contact, to provide proof of payment,” said security firm SecureWorks in an email to me.

“Rather than being motivated by financial gain, these attackers created a disruptive attack masquerading as a ransomware campaign, and based on our investigation, it has been determined that (is) more likely,” SecureWorks said on its blog post about the attack. “While we recognize the possibility that this was a traditional ransomware campaign with some elements of poor execution, based on what we currently know… it is more likely that those apparent mistakes reflect elements of the campaign that were not important to the actor’s ultimate goal.”

So if the attack wasn’t about money, what was it about? Disruption, certainly.  But why?

It’s dangerous to speculate on attribution because it’s so easy to leave false flags during an attack. But the virus got its start in Ukraine, and infected the most machines there, experts agree. That’s certainly fodder for speculation.

“We saw the first infections in Ukraine, where more than 12,500 machines encountered the threat. We then observed infections in another 64 countries, including Belgium, Brazil, Germany, Russia, and the United States,” wrote Microsoft in its analysis.

There’s been rampant speculation that the attack actually began with infection of tax software used in Ukraine called MEDoc.  Criminals infected an automated update with the malware, which then was pushed out to unsuspecting victims, several outlets reported.

In its report, Microsoft said it had evidence that such a “supply chain attack” was indeed to blame.

“Microsoft now has evidence that a few active infections of the ransomware initially started from the legitimate MEDoc updater process,” it said.

Other circumstantial evidence suggests the attack targeted Ukraine. SecureWorks points out that the outbreak happened on the day before Ukrainian Constitution Day, which was Wednesday. It’s easy to raise the possibility that a nation-state or even rogue actors within it who are resentful of Ukrainian independence might seek to disrupt the nation on that day.

But, in the world of digital evidence, it’s hard to be conclusive about such attribution. The New York Times quoted an expert who said the I.P. address used in the attack was in Iran, then pointed out that a hacker could have merely made it look like the attack came from Iran.  This reminds me of a line from a 1980s TV comedy about a faux murder: “The killer is either a member of the family, or not a member of the family.” By now, Internet users should be used to the idea that things often aren’t what they seem.

More important, the Petya attack is clear evidence that ransomware-style attacks are getting more sophisticated and more dangerous. Virus writers are learning from each other, and developing nastier payloads and better spreading mechanisms.  Pay attention now. If you have escaped WannaCry and Petya, consider yourself lucky. There is a high likelihood that a future ransomware attack will target you. There’s only one way to be ready: back up.  Make a copy of all the business files and photographs you care about and store them, physically, somewhere else.
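For readers who want to automate that advice, here is a minimal local-backup sketch using only Python’s standard library. The paths in the usage comment are hypothetical, and a real strategy should also keep offline or offsite copies, since ransomware can encrypt any backup drive that stays attached.

```python
# Minimal sketch: zip a folder into a dated archive under a backup root.
import shutil
from datetime import date
from pathlib import Path

def backup_folder(source, dest_root):
    """Copy `source` into a dated zip archive under `dest_root`."""
    dest_root = Path(dest_root)
    dest_root.mkdir(parents=True, exist_ok=True)
    archive_name = dest_root / f"{Path(source).name}-{date.today():%Y%m%d}"
    # make_archive appends ".zip" and returns the final archive path
    return shutil.make_archive(str(archive_name), "zip", root_dir=source)

# e.g. backup_folder("C:/Users/me/Documents", "E:/backups")  # hypothetical paths
```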

For technologists, perhaps the biggest fear of all is the notion of the supply chain attack, raised by Microsoft recently.  All computer users are now groomed to accept regular updates — ironically for security reasons — from software firms.  If hackers learn to reliably infiltrate this update process, they will have found a powerful new attack vector.

Here’s a to-do list for network administrators from BeyondTrust:

  • Remove administrator rights from end users
  • Implement application control for only trusted applications
  • Perform vulnerability assessment and install security patches promptly
  • Train team members on how to identify phishing emails
  • Disable application (specifically MS Office) macros