Declining confidence in IT, but still reasons for optimism

Larry Ponemon

Public sector organizations are feeling the pains of digital transformation. Faced with modernization mandates, data center upgrades and continuous cloud-first initiatives, these organizations are finding it a challenge to deliver services, comply with service level agreements (SLAs), meet citizens’ expectations and achieve organizational missions.

The evidence is clear from a research study of 736 decision makers in federal and Department of Defense (DoD) IT operations, conducted by the Ponemon Institute and sponsored by Splunk.

Challenges & Trends in Federal & Department of Defense: The United States reveals that digital transformation is well underway, with budgets shifting from traditional on-premises investments toward cloud and agile development paradigms.

This shift in the IT environment, while being embraced, has led to an overall loss of confidence in federal and DoD operations and is evidenced in respondents’ lack of confidence in their organizations’ ability to accomplish the following:

  • Have the people with the right skills to “get the job done”
  • Ensure performance and availability of systems to meet SLAs consistently
  • Manage data center upgrades
  • Perform IT operations efficiently
  • Migrate workloads and applications to the cloud

Research findings explain the reasons for the loss of confidence. These include a skills gap among existing resources (71 percent of respondents), silos of IT systems and technologies and an inability to integrate them (71 percent of respondents), and the complexity and diversity of IT systems and technology (67 percent of respondents).

Even with monitoring and data analytics in place, these tools are disconnected from each other and most respondents believe they are ineffective at helping quickly pinpoint issues and determine root cause (78 percent of respondents) because they do not offer end-to-end visibility.

Respondents also say that a lack of collaboration across teams and not enough data fidelity and
context are challenges to timely issue resolution. Such challenges also affect organizations’ ability to respond quickly and efficiently to system outages and interruptions.

On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage or interruption.

Despite the loss of confidence, respondents do see a silver lining in the transformation of their IT operations. According to respondents, the move to DevOps (development and operations) is making it easier to deliver quality services on time and within budget. To support the transformation, organizations are shifting spending from on-premises infrastructure to cloud computing, DevOps and new technologies.

Respondents also recognize that machine learning capabilities (27 percent), better network visibility across the entire organization (26 percent) and better enforcement of current policies and regulations (26 percent) can improve their organizations’ IT operations. Respondents are also increasingly aware of the types of data available and how such data can be used across
operational silos to reduce risks to their organizations.

Following are key findings from this research:

Confidence in current IT operations is lower than it was 12 months ago. The primary reasons for this change are not having staff with the right skills “to get the job done,” the inability to ensure the performance and availability of systems to meet SLAs, and the inability to manage data center upgrades.

The confidence gap seems to stem from a skills gap, silos and complexity. Respondents believe that the greatest difficulties in carrying out their duties arise from a skills gap among existing resources (71 percent of respondents), silos of IT systems and technologies and a lack of ability to integrate them (71 percent of respondents) and complexity and diversity of IT systems and technology (67 percent of respondents).

Machine learning capabilities, visibility and enforcement of policies are seen as critical to improving IT operations. From a list of five options for strengthening IT operations, 27 percent of respondents believe machine learning capabilities would be most effective, followed closely by better network visibility across the entire organization and better enforcement of current policies and regulations (26 percent of respondents each).

Spending on cloud operations and DevOps will grow significantly while on-premises spending dwindles. Almost half of respondents (49 percent) say spending on cloud operations will grow over the next year, 48 percent say the same of DevOps, and only 31 percent expect on-premises spending to grow.

Alerts remain too numerous and erroneous, and current event monitoring tools are not solving the problem. More than half of respondents say they still receive too many alerts (52 percent) and that those alerts generate too many false positives (55 percent). Seventy-eight percent of respondents are unsure or do not think that their current crop of analytics and monitoring tools are helping them pinpoint problems and determine root causes because they lack end-to-end visibility.

The challenges and risks described in this research result in inefficient response to system outages and interruptions. According to 65 percent of respondents, their organizations lack a consistent and formal IT outage response process. On average, it takes 42 hours and 12 staff members to restore the IT system to operational status following a system outage or interruption.
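The restoration figures above imply a substantial labor burden per incident. As a rough back-of-the-envelope sketch (the person-hours framing is mine, not the study’s):

```python
# Rough person-hours arithmetic for the average outage cited above.
# The study reports only the two inputs; the product is illustrative.
HOURS_TO_RESTORE = 42   # average hours to restore service
STAFF_INVOLVED = 12     # average staff members involved

person_hours = HOURS_TO_RESTORE * STAFF_INVOLVED
print(person_hours)  # 504 person-hours per outage, on average
```

Even before lost-business effects, each outage consumes roughly 500 person-hours of staff time under these averages.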

Will IT and security converge? Nearly three-quarters of respondents (73 percent) do not believe, or are unsure whether, their security and IT operations will converge in the future.

Is it possible to use the same data sets across the organization to solve problems? Sixty-four percent of respondents are unsure or don’t think the data sets they are using can solve multiple challenges such as IT troubleshooting, service monitoring, security and business/mission analytics. Similarly, 66 percent of respondents are doubtful the same data can be used throughout the organization.

For the full study, click here.

‘Nothing … in the way except motivation’ — Report claims hackers have penetrated deep into energy sector networks

Bob Sullivan

It started off as a fake invitation to a New Year’s Eve party, emailed to energy sector employees. It ended with hackers taking screenshots of power grid control computer screens. Well, we can only hope it ended there.

Symantec Corporation released an alarming report this week claiming that a group of power grid hackers it calls “Dragonfly 2.0” has made its most successful raid into critical infrastructure computers in the U.S. and around the world.

“The energy sector in Europe and North America [is] being targeted by a new wave of cyber attacks that could provide attackers with the means to severely disrupt affected operations,” Symantec wrote in its report.

In a chilling statement to Wired, Symantec’s Eric Chien said the incident means the intruders are, at the moment, capable of causing disruptions and power outages as they wish. They are just waiting for the right moment.

“There’s a difference between being a step away from conducting sabotage and actually being in a position to conduct sabotage … being able to flip the switch on power generation,” Chien said. “We’re now talking about on-the-ground technical evidence this could happen in the US, and there’s nothing left standing in the way except the motivation of some actor out in the world.”

Security researchers have been watching Dragonfly for years, claiming the group has been probing energy sector machines since at least 2011. Symantec says the group went dark until a reemergence in late December 2015, when the New Year’s Eve party invite went out. There was “a distinct increase in activity in 2017,” Symantec said.

“The Dragonfly 2.0 campaigns show how the attackers may be entering into a new phase, with recent campaigns potentially providing them with access to operational systems, access that could be used for more disruptive purposes in future,” according to the report.

Symantec doesn’t say where Dragonfly is from — and its report shows the hackers might be intentionally trying to confuse investigators. But late last year, the Department of Homeland Security claimed Dragonfly’s origins were Russian, and that it was one of several groups working to “compromise and exploit networks and endpoints associated with the U.S. election, as well as a range of U.S. Government, political, and private sector entities.”

Symantec says the most concerning evidence found during its analysis were the screen captures.

“In one particular instance the attackers used a clear format for naming the screen capture files, [machine description and location].[organization name]. The string ‘cntrl’ (control) is used in many of the machine descriptions, possibly indicating that these machines have access to operational systems,” it said.

Symantec links the initial hacker campaign to this more recent spate of attacks because there are similarities in the malware used. The Dragonfly campaigns that began in 2011 “now appear to have been a more exploratory phase,” Symantec said.

“What (the group) plans to do with all this intelligence has yet to become clear, but its capabilities do extend to materially disrupting targeted organizations should it choose to do so,” the firm claims.

Omer Schneider, CEO and co-founder of security firm CyberX, said this type of attack is inevitable.

“Why is everyone so surprised?” Schneider said. “As early as 2014, the ICS-CERT warned that adversaries had penetrated our control networks to perform cyber-espionage. Over time the adversaries have gotten even more sophisticated and now they’ve stolen credentials that give them direct access to control systems in our energy sector. If I were a foreign power, this would be a great way to threaten the US while I invade other countries or engage in other aggressive actions against US allies.”

Cost of a data breach, 2017 — $225 per record lost, an all-time high

Larry Ponemon

IBM Security and Ponemon Institute are pleased to present the 2017 Cost of Data Breach Study: United States, our 12th annual benchmark study on the cost of data breach incidents for companies located in the United States. The average cost for each lost or stolen record containing sensitive and confidential information increased from $221 to $225. The average total cost experienced by organizations over the past year increased from $7.01 million to $7.35 million. To date, 572 U.S. organizations have participated in the benchmarking process since the inception of this research.

Ponemon Institute conducted its first Cost of Data Breach Study in the United States 12 years ago. Since then, we have expanded the study to include the following countries and regions:

  • The United Kingdom
  • Germany
  • Australia
  • France
  • Brazil
  • Japan
  • Italy
  • India
  • Canada
  • South Africa
  • The Middle East (including the United Arab Emirates and Saudi Arabia)
  • ASEAN region (including Singapore, Indonesia, the Philippines and Malaysia)

The 2017 study examines the costs incurred by 63 U.S. companies in 16 industry sectors after those companies experienced the loss or theft of protected personal data and the notification of breach victims as required by various laws. It is important to note that costs presented in this research are not hypothetical but are from actual data-loss incidents. They are based upon cost estimates provided by individuals we interviewed over a 10-month period in the companies that are represented in this research.

The number of breached records per incident this year ranged from 5,563 to 99,500 records. The average number of breached records was 28,512. We did not recruit organizations that have data breaches involving more than 100,000 compromised records. These incidents are not indicative of data breaches most organizations incur. Thus, including them in the study would have artificially skewed the results.

Why the cost of data breach fluctuates across countries

What explains the significant increases in the cost of data breach this year for organizations in the Middle East, the United States and Japan? In contrast, how did organizations in Germany, France, Australia, and the United Kingdom succeed in reducing the costs to respond to and remediate the data breach? Understanding how the cost of data breach is calculated will explain the differences among the countries in this research.

For the 2017 Cost of Data Breach Study: Global Overview, we recruited 419 organizations in 11 countries and two regions to participate in this year’s study. More than 1,900 individuals who are knowledgeable about the data breach incident in these 419 organizations were interviewed. The first data points we collected from these organizations were: (1) how many customer records were lost in the breach (i.e. the size of the breach) and (2) what percentage of their customer base did they lose following the data breach (i.e. customer churn). This information explains why the costs increase or decrease from the past year.

In the course of our interviews, we also asked questions to determine what the organization spent on activities for the discovery of and the immediate response to the data breach, such as forensics and investigations, and those conducted in the aftermath of discovery, such as the notification of victims and legal fees. A list of these activities is shown in Part 3 of this report. Other issues covered that may have an influence on the cost are the root causes of the data breach (i.e. malicious or criminal attack, insider negligence or system glitch) and the time to detect and contain the incident.

It is important to note that only events directly relevant to the data breach experience of the 419 organizations represented in this research and discussed above are used to calculate the cost. For example, new regulations, such as the General Data Protection Regulation (GDPR), ransomware and cyber attacks, such as Shamoon, may encourage organizations to increase investments in their governance practices and security-enabling technologies but do not directly affect the cost of a data breach as presented in this research.

The following are the most salient findings and implications for organizations:

The cost of data breach sets a record high. According to this year’s benchmark findings, data breaches cost companies an average of $225 per compromised record – of which $146 pertains to indirect costs, including abnormal turnover or churn of customers, and $79 represents the direct costs incurred to resolve the data breach, such as investments in technologies or legal fees.
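The per-record figure decomposes cleanly into its two components, which can be sketched as:

```python
# Per-record breach cost composition, using the figures reported above
# (all values in US dollars per compromised record).
indirect_per_record = 146  # abnormal turnover/churn of customers
direct_per_record = 79     # e.g., technology investments, legal fees

total_per_record = indirect_per_record + direct_per_record
print(total_per_record)  # 225
```

Note that indirect costs (lost customers) account for roughly two-thirds of the per-record total, which is why churn figures so heavily in the findings below.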

The total average organizational cost of data breach reaches a new high. This year, we record the highest average total cost of data breach at $7.35 million. Prior to this year’s research, the most costly breach occurred in 2011 when companies spent an average of $7.24 million. In 2013, companies experienced the lowest total data breach cost at $5.40 million.

Measures reveal why the cost of data breach increases. The average total cost of data breach increased 4.7 percent, the average per capita cost increased by 1.8 percent and abnormal churn of existing customers increased 5 percent. In the context of this paper, abnormal churn is defined as a greater-than-expected loss of customers in the normal course of business. In contrast, the average size of a data breach (number of records lost or stolen) decreased 1.9 percent.

Certain industries have higher data breach costs. Heavily regulated industries such as health care ($380 per capita) and financial services ($336 per capita), had per capita data breach costs well above the overall mean of $225. In contrast, public sector organizations ($110 per capita) had a per capita cost of data breach below the overall mean.

Malicious or criminal attacks continue to be the primary cause of data breach. Fifty-two percent of incidents involved a malicious or criminal attack, 24 percent of incidents were caused by negligent employees, and another 24 percent were caused by system glitches, including both IT and business process failures.

Malicious attacks are the costliest. Organizations that had a data breach due to malicious or criminal attacks had a per capita data breach cost of $244, which is significantly above the mean. In contrast, system glitches or human error as the root cause had per capita costs below the mean ($209 and $200 per capita, respectively).

Four new factors are in this year’s cost analysis. The following factors that influence data breach costs have been added to this year’s study: (1) compliance failures, (2) the extensive use of mobile platforms, (3) CPO appointment and (4) the use of security analytics. The use of security analytics reduced the per capita cost of data breach by $7.70 and the appointment of a CPO reduced the cost by $4.30. However, extensive use of mobile platforms at the time of the breach increased the cost by $6.50 and compliance failures increased the per capita cost by $19.30.
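A minimal sketch of how these per-capita deltas shift the $225 baseline — the report gives the individual deltas; applying them additively to the baseline is an illustrative assumption, not the study’s methodology:

```python
# Hypothetical additive application of the four new cost factors to the
# $225 per-record baseline. Deltas are from the study; combining them
# this way is an assumption for illustration only.
BASELINE = 225.0  # average cost per compromised record, USD

FACTOR_DELTAS = {
    "security_analytics": -7.7,     # use of security analytics
    "cpo_appointed": -4.3,          # appointment of a CPO
    "extensive_mobile_use": +6.5,   # extensive mobile platform use
    "compliance_failures": +19.3,   # compliance failures
}

def adjusted_cost(factors):
    """Return the per-record cost with the named factors applied."""
    return BASELINE + sum(FACTOR_DELTAS[f] for f in factors)

print(round(adjusted_cost(["security_analytics"]), 1))   # 217.3
print(round(adjusted_cost(["compliance_failures"]), 1))  # 244.3
```

Under this reading, compliance failures are the single most expensive factor, nearly three times the size of the largest cost-reducing factor.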

The more records lost, the higher the cost of data breach. This year, for companies with data breaches involving fewer than 10,000 records, the average total cost of data breach was $4.5 million, and companies with the loss or theft of more than 50,000 records had a cost of data breach of $10.3 million.

The more churn, the higher the cost of data breach. Companies that experienced churn (the loss of existing customers) of less than 1 percent had an average total cost of data breach of $5.3 million, and those that experienced churn greater than 4 percent had an average total cost of data breach of $10.1 million.

Certain industries are more vulnerable to churn. Financial, life science, health, technology and service organizations experienced a relatively high abnormal churn rate, and public sector and entertainment organizations experienced a relatively low abnormal churn rate.

Detection and escalation costs are at a record high. These costs include forensic and investigative activities, assessment and audit services, crisis team management, and communications to executive management and board of directors. Average detection and escalation costs increased dramatically from $0.73 million to $1.07 million, suggesting that companies are investing more heavily in these activities.

Notification costs increase slightly. Such costs typically include IT activities associated with the creation of contact databases, determination of all regulatory requirements, engagement of outside experts, postal expenditures, secondary mail contacts or email bounce-backs and inbound communication set-up. This year’s average notification costs increased slightly from $0.59 million in 2016 to $0.69 million in this year’s study.

Post data breach costs decrease. Such costs typically include help desk activities, inbound communications, special investigative activities, remediation activities, legal expenditures, product discounts, identity protection services and regulatory interventions. These costs decreased from $1.72 million in 2016 to $1.56 million in this year’s study.

Lost business costs increase. Such costs include the abnormal turnover of customers, customer acquisition activities, reputation losses and diminished goodwill. The current year’s cost increased from $3.32 million in 2016 to $4.03 million. The highest lost business cost over the past 12 years was $4.59 million in 2009.

Companies continue to spend more on indirect per capita costs than direct per capita costs. Indirect costs include the time employees spend on data breach notification efforts or investigations of the incident. Direct costs refer to what companies spend to minimize the consequences of a data breach and assist victims. These costs include engaging forensic experts to help investigate the data breach, hiring a law firm and offering victims identity protection services. This year, the indirect costs were $146 and direct costs were $79.

The time to identify and contain data breaches impacts costs. In this year’s study, it took companies an average of 206 days to detect that an incident occurred and an average of 55 days to contain the incident. If the mean time to identify (MTTI) was less than 100 days, the average cost to identify was $5.99 million. However, if the mean time to identify was greater than 100 days, the cost rose significantly to $8.70 million. If the mean time to contain (MTTC) the breach was less than 30 days, the average cost to contain was $5.87 million. If it took 30 days or longer, the cost rose significantly to $8.83 million.
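The threshold effect above can be sketched as a simple two-bucket lookup. The step-function form is an assumption; the study reports only the bucketed averages (in millions of dollars), not a continuous cost curve:

```python
# Two-bucket averages from the study: cost conditioned on mean time to
# identify (MTTI) and mean time to contain (MTTC), in $ millions.
# Modeling the relationship as a step function is illustrative only.

def avg_cost_by_mtti(days):
    """Average cost given days to identify the breach."""
    return 5.99 if days < 100 else 8.70

def avg_cost_by_mttc(days):
    """Average cost given days to contain the breach."""
    return 5.87 if days < 30 else 8.83

# This year's averages (206 days MTTI, 55 days MTTC) land in the
# expensive bucket on both measures.
print(avg_cost_by_mtti(206))  # 8.7
print(avg_cost_by_mttc(55))   # 8.83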

To read the full report, click here. 

Disney, Viacom child privacy lawsuits try novel legal theory

Bob Sullivan

A California mom is suing Disney and some of its software partners for allegedly collecting personal information about her kids through mobile phone game apps. I was on the TODAY show this week talking about it.

Within days, the same mom also sued Viacom.

There’s a novel legal argument in these cases that I’m going to watch with great interest: an “intrusion upon seclusion” claim that I hadn’t seen before. If the mom — and potentially others, if class-action status is granted — succeeds at winning such a claim and collecting damages, it could open doors to a new kind of privacy lawsuit.

The Disney allegations, which the firm denies, are what you’d expect. The suit claims Disney software places unique identifiers on mobile phones which can track app users — both in and out of game play — so Disney’s partners can serve targeted advertising. You can expect the usual debate about what constitutes personal information. Corporations that want to target ads usually claim they anonymize such data. Privacy advocates say that’s bunk. With just a few data points, people can be pretty precisely identified.

Federal law — the Children’s Online Privacy Protection Act, or COPPA — has strict rules about what can be collected from kids under 13. The Federal Trade Commission has weighed in on the issue, making clear that unique identifiers fall under COPPA, meaning they generally shouldn’t be used or collected when kids are involved.

The lawsuit claims Disney and its partners violated COPPA, but that doesn’t get the plaintiff far on its own. COPPA does not provide a “private right of action.” Consumers can’t sue “under COPPA” and get anything; they can merely ask a federal agency (the FTC) to fine the violator.

So lawyers in the case have seized upon the “intrusion upon seclusion” tort.  From what I can tell, this legal strategy is generally used when someone’s physical space is violated — as in sneaking into a home or hotel room.  It has been used in previous digital privacy cases, however, said Douglas I. Cuthbertson, a lawyer at the firm pressing the case. He cited invasion of privacy cases involving Vizio (Smart TVs) and Nickelodeon (Tracking videos watched; click for more). Both recently survived dismissal motions. It remains to be seen how much the cases are worth to plaintiffs, however.

According to Harvard’s publication of the American Law Institute’s guide to torts, here’s what “intrusion upon seclusion” requires:

“The invasion may be by physical intrusion into a place in which the plaintiff has secluded himself, as when the defendant forces his way into the plaintiff’s room in a hotel or insists over the plaintiff’s objection in entering his home. It may also be by the use of the defendant’s senses, with or without mechanical aids, to oversee or overhear the plaintiff’s private affairs, as by looking into his upstairs windows with binoculars or tapping his telephone wires. It may be by some other form of investigation or examination into his private concerns, as by opening his private and personal mail, searching his safe or his wallet, examining his private bank account, or compelling him by a forged court order to permit an inspection of his personal documents.”

The four-pronged test to succeed in such a case, according to the Digital Media Law Project, involves:

  • First, that the defendant, without authorization, must have intentionally invaded the private affairs of the plaintiff;
  • Second, the invasion must be offensive to a reasonable person;
  • Third, the matter that the defendant intruded upon must involve a private matter; and
  • Finally, the intrusion must have caused mental anguish or suffering to the plaintiff.

In the Disney lawsuit, plaintiff’s lawyers use the alleged COPPA violation to establish that the data collection is offensive, and to pass several of those tests.

Eduard Goodman, global privacy officer at security firm Cyberscout, says he’s seen the intrusion upon seclusion legal strategy deployed in data breach lawsuits before.  But that fourth prong of the test is the trickiest to meet. (Note: I am sometimes paid to write freelance stories for Cyberscout)

“The problem, as with most all privacy torts in the U.S., [is] what is the harm and damage here,” he said. Damages and financial compensation for torts like causing injury in a car accident are well established. What’s the harm in collecting someone’s personal data? That’s yet to be determined.


Almost four times more budget is being spent on property-related risks vs. cyber risk

Larry Ponemon

This unique cyber study found a serious disconnect in risk management. What’s interesting is that the majority of companies cover plant, property and equipment losses, insuring an average of 59 percent and self-insuring 28 percent. Cyber is almost the opposite, as companies are insuring an average of 15 percent and self-insuring 59 percent.

The purpose of this research is to compare the relative insurance protection of certain tangible versus intangible assets. How do cyber asset values and potential losses compare to tangible asset values and potential losses from an organization’s other perils, such as fires and weather?

The probability of any particular building burning down is significantly lower than one percent (1%). However, most organizations spend much more on fire-insurance premiums than on cyber insurance, despite stating in their publicly disclosed documents that a majority of the organization’s value is attributed to intangible assets. One concrete example is the sale of Yahoo!: Verizon recently reduced the purchase price by $350 million because of the severity of cyber incidents in 2013 and 2014.

The accelerating scope, scale and economic impact of technology, multiplied by the concomitant data revolution that places unprecedented amounts of information in the hands of consumers and businesses alike, and the proliferation of technology-enabled business models, forces organizations to examine the benefits and consequences of emerging technologies.

This financial-statement quantification study demonstrates that organizations recognize the growing value of technology and data assets relative to historical tangible assets, yet a disconnect remains regarding cost-benefit analysis resource allocation. Particularly, a disproportionate amount is spent on tangible asset insurance protection compared to cyber asset protection based on the respective relative financial statement impact and potential expected losses.

Quantitative models are being developed that evaluate the return on investment of various cyber risk management IT security and process solutions, which can incorporate cost-benefit analysis for different levels of insurance. As such, organizations are driven toward a holistic capital expenditure discussion spanning functional teams rather than being segmented in traditional silos. The goal of these models is to identify and protect critical assets by aligning macro-level risk tolerance more consistently.

How do organizations qualify and quantify the corresponding impact of financial statement exposure? Our goal is to compare the financial statement impact of tangible property and network risk exposures. A better understanding of the relative financial statement impact will assist organizations in allocating resources and determining the appropriate amount of risk transfer (insurance) resources to allocate to the mitigation of the financial statement impact of network risk exposures.

Network risk exposures can broadly include breaches of the privacy and security of personally identifiable information, theft of an organization’s intellectual property, confiscation of online bank accounts, creation and distribution of viruses, posting of confidential business information on the Internet, robotic malfunctions and disruption of a country’s critical national infrastructure.

We surveyed 709 individuals in North America involved in their company’s cyber risk management as well as enterprise risk management activities. Most respondents are either in finance, treasury and accounting (34 percent of respondents) or risk management (27 percent of respondents). Other respondents are in corporate compliance/audit (13 percent of respondents) and general management (12 percent of respondents).

All respondents are familiar with the cyber risks facing their company. In the context of this research, cyber risk means any risk of financial loss, disruption or damage to the reputation of an organization from some sort of failure of its information technology systems.

Despite the greater average potential loss to information assets ($1,020 million) compared to Property, Plant & Equipment (PP&E) ($843 million), the latter has much higher insurance coverage (62 percent vs. 16 percent).
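Multiplying each average potential loss by its insured percentage makes the disconnect concrete. The dollar-coverage framing is my own arithmetic on the averages above, not a figure from the study:

```python
# Insured exposure implied by the averages above:
# average potential loss ($ millions) times the insured percentage.
# The derived dollar figures are illustrative, not from the report.
info_assets_loss, info_insured = 1020, 0.16  # information assets
ppe_loss, ppe_insured = 843, 0.62            # property, plant & equipment

info_covered = info_assets_loss * info_insured  # ~163 ($M)
ppe_covered = ppe_loss * ppe_insured            # ~523 ($M)

print(round(info_covered, 1), round(ppe_covered, 1))
```

Despite the larger potential loss, information assets carry less than a third of the insured dollar coverage that PP&E does under these averages.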

Following are some of the key takeaways from this research:

  • Information assets are underinsured against theft or destruction based on the value, probable maximum loss (PML) and likelihood of an incident.
  • Disclosure of a material loss of PP&E and disclosure of information assets differ. Forty-five percent of respondents say their company would disclose the loss of PP&E in its financial statements as a footnote disclosure. However, 34 percent of respondents say a material loss to information assets does not require disclosure.
  • Despite the risk, companies are reluctant to purchase cyber insurance coverage. Sixty-four percent of respondents believe their company’s exposure to cyber risk will increase over the next 24 months. However, only 30 percent of respondents say their company has cyber insurance coverage.
  • Fifty-six percent of companies represented in this study experienced a material or significantly disruptive security exploit or data breach one or more times during the past two years, with an average economic impact of $4.4 million.
  • Eighty-nine percent of respondents believe cyber liability is one of the top 10 business risks for their company.

To read the full report, click here. 



What’s really scary about Petya ‘ransomware’ attack? It might not be ransomware

Bob Sullivan

The recent “ransomware” computer virus outbreak is over, but the speculation is just beginning. And it begins with those quotes around the term ransomware.


In late June, organizations in 64 countries around the globe, according to Microsoft, found themselves beating back a virus that’s been variously named Petya, GoldenEye, or even “NotPetya.” Infected computers suffered devastating attacks that disabled the machines and made files useless — encrypted, with instructions for paying a ransom, in typical ransomware fashion.

But there was something very atypical about this attack. The program itself was very sophisticated — far more sophisticated than WannaCry, last month’s most famous virus attack. Petya stole login credentials. It spread itself in multiple ways, meaning many folks who thought their machines were patched against Petya were still not safe from it. Microsoft’s analysis of the malware shows how much effort went into the crafting of the program.

But the ransom payout mechanism was as fragile as a single email address. That was disabled almost immediately, meaning victims couldn’t contact the virus writers to save their files.

That makes no sense. So much work on the software, so little work on the ‘revenue’ side — unless Petya wasn’t really about stealing money. Plenty of security experts have alighted on this theory.

Kaspersky Labs was most assertive in its analysis: it refused to call the malware ransomware, saying it was designed only to destroy data, not to raise money.

“This malware campaign was not designed as a ransomware attack for financial gain. Instead, it appears it was designed as a wiper pretending to be ransomware,” Kaspersky wrote on its site.

Other analysts came to much the same conclusion.

“The attackers behind the NotPetya had to know that they were making it very difficult for anyone to actually get their files back.  Specifically, they provided just a single email address for victims to contact, to provide proof of payment,” said security firm SecureWorks in an email to me.

“Rather than being motivated by financial gain, these attackers created a disruptive attack masquerading as a ransomware campaign, and based on our investigation, it has been determined that (is) more likely,” SecureWorks said on its blog post about the attack. “While we recognize the possibility that this was a traditional ransomware campaign with some elements of poor execution, based on what we currently know… it is more likely that those apparent mistakes reflect elements of the campaign that were not important to the actor’s ultimate goal.”

So if the attack wasn’t about money, what was it about? Disruption, certainly.  But why?

It’s dangerous to speculate on attribution because it’s so easy to leave false flags during an attack. But the virus got its start in Ukraine, and infected the most machines there, experts agree. That’s certainly fodder for speculation.

“We saw the first infections in Ukraine, where more than 12,500 machines encountered the threat. We then observed infections in another 64 countries, including Belgium, Brazil, Germany, Russia, and the United States,” wrote Microsoft in its analysis.

There’s been rampant speculation that the attack actually began with infection of tax software used in Ukraine called MEDoc.  Criminals infected an automated update with the malware, which then was pushed out to unsuspecting victims, several outlets reported.

In its report, Microsoft said it had evidence that such a “supply chain attack” was indeed to blame.

“Microsoft now has evidence that a few active infections of the ransomware initially started from the legitimate MEDoc updater process,” it said.

Other circumstantial evidence suggests the attack targeted Ukraine. SecureWorks points out that the outbreak happened on the day before Ukrainian Constitution Day, which was Wednesday. It’s easy to raise the possibility that a nation-state or even rogue actors within it who are resentful of Ukrainian independence might seek to disrupt the nation on that day.

But, in the world of digital evidence, it’s hard to be conclusive about such attribution. The New York Times quoted an expert who said the I.P. address used in the attack was in Iran, and who then pointed out that a hacker could merely have made it look like the attack came from Iran. This reminds me of a line from a 1980s TV comedy about a faux murder: “The killer is either a member of the family, or not a member of the family.” By now, Internet users should be used to the idea that things often aren’t what they seem.

More important, the Petya attack is clear evidence that ransomware-style attacks are getting more sophisticated and more dangerous. Virus writers are learning from each other, and developing nastier payloads and better spreading mechanisms. Pay attention now. If you have escaped WannaCry and Petya, consider yourself lucky. There is a high likelihood that a future ransomware attack will hit you. There’s only one way to be ready: Back up. Make a copy of all the business files and photographs you care about and store them, physically, somewhere else.
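The backup advice above can be as simple as a script run on a schedule. Here is a minimal sketch in Python (paths and naming are placeholders you would adapt):

```python
import shutil
import time
from pathlib import Path

def backup(src: str, dest_root: str) -> Path:
    """Copy the directory tree at src into a timestamped folder under dest_root."""
    dest = Path(dest_root) / time.strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(src, dest)
    return dest

# Example: backup("C:/Users/me/Documents", "E:/backups")
```

The destination should be a drive you disconnect and store elsewhere; ransomware that can reach a permanently mounted backup volume will happily encrypt that too.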

For technologists, perhaps the biggest fear of all is the notion of the supply chain attack, raised by Microsoft recently.  All computer users are now groomed to accept regular updates — ironically for security reasons — from software firms.  If hackers learn to reliably infiltrate this update process, they will have found a powerful new attack vector.

Here’s a to-do list for network administrators from BeyondTrust:

  • Remove administrator rights from end users
  • Implement application control for only trusted applications
  • Perform vulnerability assessment and install security patches promptly
  • Train team members on how to identify phishing emails
  • Disable application (specifically MS Office) macros
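The “application control” item in the list above can be pictured as a hash-based allow-list check. This is a sketch only (real application control is enforced by the operating system or endpoint tooling, not a script), and the allow-list contents are hypothetical:

```python
import hashlib

# Hypothetical allow-list of SHA-256 digests of approved executables.
TRUSTED_HASHES: set = set()

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't load into memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_trusted(path: str) -> bool:
    """A binary runs only if its digest appears on the allow-list."""
    return sha256_of(path) in TRUSTED_HASHES
```

The design point is deny-by-default: anything not explicitly on the list is refused, which is the inverse of signature-based antivirus.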


Medical Device Security: An Industry Under Attack and Unprepared to Defend

Larry Ponemon

Ponemon Institute is pleased to present the findings of Medical Device Security: An Industry Under Attack and Unprepared to Defend, sponsored by Synopsys. (Click here for the full report.) The purpose of this research is to understand the risks to clinicians and patients posed by insecure medical devices. We surveyed both device makers and healthcare delivery organizations (HDOs) to determine whether the two groups are in alignment about the need to address risks to medical devices. To ensure knowledgeable respondents, participants in this research have a role in or involvement with the assessment of and contribution to the security of medical devices.

Please join us on Wednesday, June 21 at 9 AM PT/12 PM ET to learn more about the findings of this research.

In the context of this research, medical devices are any instrument, apparatus, appliance, or other article, whether used alone or in combination, including the software intended by its manufacturer to be used for diagnostic and/or therapeutic purposes. Medical devices vary according to their intended use. Examples range from simple devices such as medical thermometers to those that connect to the Internet to assist in the conduct of medical testing, implants, and prostheses.

The following medical devices are manufactured or used by the organizations represented in this research: robots, implantable devices, radiation equipment, diagnostic & monitoring equipment, networking equipment designed specifically for medical devices and mobile medical apps.

How vulnerable are these medical devices to attack, and why do both device makers and HDOs lack confidence in their security? Our survey shows 67 percent of device makers in this study believe an attack on one or more medical devices built by their organization is likely, and 56 percent of HDOs believe such an attack is likely. Despite the likelihood of an attack, only 17 percent of device makers and 15 percent of HDOs are taking significant steps to prevent attacks. Further, only 22 percent of HDOs say their organizations have an incident response plan in place in the event of an attack on vulnerable medical devices, and 41 percent of device makers say such a plan is in place.

In fact, patients have already suffered adverse events and attacks. Thirty-one percent of device makers and 40 percent of HDOs represented in this study say they are aware of these incidents. Of these respondents, 38 percent in HDOs say they are aware of inappropriate therapy/treatment delivered to the patient because of an insecure medical device, and 39 percent of device makers confirm that attackers have taken control of medical devices.

Despite the risks, few organizations are taking steps to prevent attacks on medical devices. Only 17 percent of device makers are taking significant steps to prevent attacks and 15 percent of HDOs are taking significant steps.

The research reveals the following risks to medical devices and why clinicians and patients are at risk.

Both device makers and users have little confidence that patients and clinicians are protected. Both device makers and HDOs have little confidence that the security protocols or architecture built inside medical devices provide clinicians and patients with protection. HDOs are more confident than device makers that they can detect security vulnerabilities in medical devices (59 percent vs. 37 percent).

The use of mobile devices is affecting the security risk posture in healthcare organizations. Clinicians depend upon their mobile devices to more efficiently serve patients. However, 60 percent of device makers and 49 percent of HDOs say the use of mobile devices in hospitals and other healthcare organizations is significantly increasing security risks.

Medical devices are very difficult to secure. Eighty percent of medical device manufacturers and users in this study say medical devices are very difficult to secure. Further, only 25 percent of respondents say security protocols or architecture built inside devices adequately protects clinicians and patients.

In many cases, budget increases to improve the security of medical devices would occur only after a serious hacking incident occurred. Respondents believe their organizations would increase the budget only if a potentially life threatening attack took place. Only 19 percent of HDOs say concern over potential loss of customers/patients due to a security incident would result in more funds for medical device security.

Medical device security practices in place are not the most effective. Both manufacturers and users rely upon security requirements instead of more thorough practices such as security testing throughout the SDLC, code review and debugging systems and dynamic application security testing. As a result, both manufacturers and users concur that medical devices contain vulnerable code due to lack of quality assurance and testing procedures and rush to release pressures on the product development team.

Most organizations do not encrypt traffic among IoT devices. Only a third of device makers say their organizations encrypt traffic among IoT devices and 29 percent of HDOs deploy encryption to protect data transmitted from medical devices. Of these respondents, only 39 percent of device makers and 35 percent of HDOs use key management systems on encrypted traffic.

Medical devices contain vulnerable code because of a lack of quality assurance and testing procedures as well as the rush to release. Fifty-three percent of device makers and 58 percent of HDOs say there is a lack of quality assurance and testing procedures that lead to vulnerabilities in medical devices. Device makers say another problem is the rush to release pressures on the product development team (50 percent). HDOs say accidental coding errors (52 percent) is a problem.

Testing of medical devices rarely occurs. Only 9 percent of manufacturers and 5 percent of users say they test medical devices at least annually. Instead, 53 percent of HDOs do not test (45 percent) or are unsure if testing occurs (8 percent). Forty-three percent of manufacturers do not test (36 percent) or are unsure if testing takes place (7 percent).

Accountability for the security of medical devices manufactured or used is lacking. While 41 percent of HDOs believe they are primarily responsible for the security of medical devices, almost one-third of both device makers and HDOs say no one person or function is primarily responsible.

Manufacturers and users of medical devices are not in alignment about current risks to medical devices. The findings reveal a serious disconnect between the perceptions of device manufacturers and users about the state of medical device security and could prevent collaboration in achieving greater security. Some disconnects, as detailed in this report, include the following: HDOs are more likely to be concerned about medical device security and to raise concerns about risks. They are also far more concerned about the medical industry’s lack of action to protect patients/users of medical devices.

How effective is the FDA in securing medical devices? Only 44 percent of HDOs follow guidance from the FDA to mitigate or reduce inherent security risks in medical devices. Slightly more than half of device makers (51 percent) follow guidance. Only 24 percent of device makers have recalled a product because of security vulnerabilities with or without FDA guidance. Only 19 percent of HDOs have recalled a product.

Most device makers and users do not disclose privacy and security risks of their medical devices. Sixty percent of device makers and 59 percent of HDOs do not share information about security risks with clinicians and patients. If they do, it is primarily in contractual agreements or policy disclosure. Such disclosures would typically include information about how patient data is collected, stored and shared and how the security of the device could be affected.

Click here to read the study’s detailed findings. 

Remarkable look inside the underground 'fake news' economy shows how lucrative truth hacking can be

Bob Sullivan

Fake news is the new computer virus.

That’s the conclusion I came to when reading a remarkable new report from computer security firm Trend Micro (PDF). If you doubt the massive efforts of underground “hackers” to influence you — and the massive cash they can make doing so — flip through the pages of this report. A few years ago, it could have been written about the spam, computer virus or click fraud economies. Today, “news” has been weaponized, both for political gain and profit.

While Americans bicker over who might have gained the most from hacking in our last presidential campaign, they are missing the larger point: a massive infrastructure has been put in place from China to Russia to India to make money off polarization.  The truth is for sale in a way that most people couldn’t have imagined just a few years ago. As the report crucially notes: there’s no such thing as “moderate” fake news.  Whichever side you are on, if you play in extremism, you are probably helping make these truth hackers rich.

Here are some highlights from the report, but you should really read it yourself.

“(Russian)  forums offer services for each stage of the campaign—from writing press releases, promoting them in news outlets, to sustaining their momentum with positive or negative comments, some of which can be even supplied by the customer in a template. Advertisements for such services are frequently found in both public and private sections of forums, as well as on banner ads on the forums themselves.”

Many services have a crowdsourced model, meaning users can either buy credits for clicks, or “earn” them through participating in others’ campaigns.

“(One service) allows contributors to promote internet sites and pages, flaunting a 500,000-strong registered user base that can provide traffic (and statistics) from real visitors to supported platforms. It uses a coin system, which is also available in the underground.”

A price list claims the service can make a video appear on YouTube’s home page for about $600, or get 10,000 site visitors for less than $20.

Such services aren’t limited to Russia, of course.  According to the report, a Middle Eastern firm offers, “auto-likes on Facebook (for) a monthly subscription of $25; 2,200 auto-likes from Arabic/Middle East based users fetch $150 per month…(another service) has a customizable auto-comment function, with templates of comments customers can choose from. Prices vary, from $45 per month for eight comments per day, to $250 for 1,000 comments in a month.”

In China, the report says, “For … less than $2,600 spent on services in the Chinese underground, a social media profile can easily fetch more than 300,000 followers in a month. ”

It goes on to claim that fake news campaigns have incited riots and caused journalists to be attacked.  Here’s an example of the latter:

“If an attacker aims to silence a journalist from speaking out or publishing a story that can be detrimental to an attacker’s agenda or reputation, he can also be singled out and discredited by mounting campaigns against him.

“An attacker can mount a four-week fake news campaign to defame the journalist using services available in gray or underground marketplaces. Fake news unfavorable to the journalist can be bought once a week, which can be promoted by purchasing 50,000 retweets or likes and 100,000 visits. These cost around $2,700 per week. Another option for the attacker is to buy four related videos and turn them into trending videos on YouTube, each of which can sell for around $2,500 per video.

“The attacker can also buy comments; to create an illusion of believability, the purchase can start with 500 comments, 400 of which can be positive, 80 neutral, and 20 negative. Spending $1,000 for this kind of service will translate to 4,000 comments.

“After establishing an imagined credibility, an attacker can launch his smear campaign against his target.

“Poisoning a Twitter account with 200,000 bot followers will cost $240. Ordering a total of 12,000 comments with most bearing negative sentiment and references/links to fake stories against the journalist will cost around $3,000. Dislikes and negative comments on a journalist’s article, and promoting them with 10,000 retweets or likes and 25,000 visits, can cost $20,400 in the underground.

“The result? For around $55,000, a user who reads, watches, and further searches the campaign’s fake content can be swayed into having a fragmented and negative impression of the journalist. A more daunting consequence would be how the story, exposé or points the journalist wanted to divulge or raise will be drowned out by a sea of noise fabricated by the campaign.”

The key for all these attacks, the report notes, is appealing to the more extreme nature of our political discourse today.

“In the realm of political opinion manipulation, this tends to be in the form of highly partisan content. Political fake news tends to align with the extremes of the political spectrum; ‘moderate’ fake news does not really exist.”

The report offers tips for news consumers to avoid being unwitting partners in a fake news campaign. The target of fake news is the general public, the report notes, so “Ultimately, the burden of differentiating the truth from untruth falls on the audience.”

Here are some signs users can look for to tell whether the news they’re reading is fake:
• Hyperbolic and clickbait headlines
• Suspicious website domains that spoof legitimate news media
• Misspellings in content and awkwardly laid-out websites
• Doctored photos and images
• Absence of publishing timestamps
• Lack of author, sources, and data
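As a rough illustration, the signs above can be encoded as a simple heuristic checker. The keyword list and domain patterns below are invented for illustration, not taken from the report, and a real classifier would be far more sophisticated:

```python
import re
from typing import List, Optional

# Illustrative clickbait phrases; a real system would use a much larger list.
CLICKBAIT = re.compile(r"you won't believe|shocking|slams|destroys", re.I)

def warning_signs(headline: str, domain: str,
                  author: Optional[str], timestamp: Optional[str]) -> List[str]:
    """Return the fake-news warning signs triggered by an article's metadata."""
    signs = []
    if CLICKBAIT.search(headline) or headline.isupper():
        signs.append("hyperbolic or clickbait headline")
    if re.search(r"\.(info|xyz|co)$", domain) or "-news" in domain:
        signs.append("suspicious domain")
    if not author:
        signs.append("missing author")
    if not timestamp:
        signs.append("missing publishing timestamp")
    return signs
```

The point is not that any one sign proves fakery, but that several together should raise the reader's suspicion.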


Handle with Care: Protecting Sensitive Data in Microsoft SharePoint, Collaboration Tools and File Share Applications

Larry Ponemon

With the plethora of collaboration and file sharing tools in the workplace, the risk of data leakage due to insecure sharing of information among employees and third parties is growing. As discussed in this report, Handle with Care: Protecting Sensitive Data in Microsoft SharePoint, Collaboration Tools and File Share Applications in US, UK and German Organizations, sponsored by Metalogix, although security concerns about the use of collaboration and file sharing tools are high, companies are not taking sufficient steps to protect their sensitive data.

Without appropriate technologies, data breaches in the SharePoint environment can go undetected. Almost half of respondents (49 percent) say their organizations have had at least one data breach in the SharePoint environment in the past two years. However, 22 percent of respondents believe it was likely their organization had a data breach but are not able to know this with certainty.

This research reveals that employees frequently and accidentally share files or documents with other employees or third parties not authorized to receive them. Employees also receive content they should not have access to, or fail to delete confidential materials as required by policies.

Although respondents express concern about the risk of a data breach stemming from use of collaboration and file sharing technologies, they are struggling to meet the challenge using their existing security processes and tools. Seventy percent of organizations believe that if their organization had a data breach involving the loss or theft of confidential information in the SharePoint environment they would only be able to detect it some of the time or not at all.

Most companies are not taking steps to reduce the risk through training programs, routine security audits or deployment of specific technologies that discover where sensitive or confidential information resides and how it is used. The survey found that important data governance practices are not in place for collaboration applications in general, and that when it comes to SharePoint specifically, security tools and practices are even more lacking.

We surveyed 1,403 individuals in the US, UK and Germany who are involved in ensuring the protection of confidential information. Respondents work in IT and IT security as well as lines of business in a variety of industries. On average, respondents say they spend approximately 28 percent of their time protecting documents and other content assets in SharePoint.

All companies represented in this research use SharePoint solutions for sharing confidential documents and files. Other solutions include Office 365 and cloud-based services such as Dropbox and/or Box. Other means of collaboration include shared network drives and other file sync and share solutions.

Key findings

In this section, we provide a deeper analysis of the findings. The complete audited findings are presented in the Appendix of this report. The report is organized according to the following seven topics:

  1. Sensitive content within the organization
  2. Risky user behavior
  3. Lack of collaboration in security and governance practices and tools
  4. Challenges in controlling risks in the SharePoint environment
  5. Country differences: United States, United Kingdom and Germany
  6. Industry differences
  7. Conclusions and recommendations


  1. Sensitive content within the organization

Not knowing who is sharing sensitive data or where such data is stored increases the likelihood of a breach — 63 percent say the inability to know where sensitive data resides represents a serious security risk. Further, only 34 percent of respondents say their organizations have clear visibility into what file sharing applications are being used by employees at work.

These findings demonstrate the need for automated technologies that enable organizations to discover and classify sensitive or confidential information and monitor how it is used.

  2. Risky user behavior

Employee and third party use of SharePoint are greater security concerns than external threat agents.

The pressure to be productive sometimes causes individuals to put sensitive data at risk. Negligent employees are inviting data loss or theft by accidentally exposing information (73 percent of respondents). Eighty-four percent of respondents are worried about third parties having access to data they should not see. Based on the findings, third parties and negligent insiders are more worrisome than external hackers (28 percent of respondents) or malicious employees (19 percent of respondents).

  3. Lack of collaboration in security and governance practices and tools

Despite the volume of sensitive content stored in collaboration and file sharing tools and the acknowledgement of risky employee behavior, respondents do not have sufficient policies or security tools in place to prevent either accidental exposure or intentional misuse of information.

Only 28 percent of respondents rate their organizations as being highly effective in keeping confidential documents secure in the SharePoint environment. Consequently, as reported previously, almost half of respondents (49 percent) report their companies had at least one data breach resulting from the loss or theft of confidential information in the SharePoint environment in the past two years and 22 percent of respondents say they are not aware of a data breach, but one is likely to have occurred.

  4. Challenges in controlling risks in the SharePoint environment

If companies are aware of the risk of data breaches due to insecure collaboration and don’t believe their current approaches are sufficient to keep content safe, what is preventing them from deploying more effective security solutions?

A lack of integration is the biggest challenge to reducing SharePoint security risks.

Seventy-nine percent of respondents say they do not have the right tools in place to support the safe use of sensitive or confidential information assets in SharePoint. Either they believe their tools are only somewhat effective (41 percent of respondents), not effective (49 percent of respondents) or they do not have enough information to know (10 percent of respondents).

  5. Country differences: United States, United Kingdom and Germany

The study identifies clear differences in attitudes and behaviors related to file sharing and collaboration tools among respondents in the United States (US), United Kingdom (UK) and Germany. As shown in Figure 17, German respondents are less concerned than US or UK respondents about the potential for security breaches in their SharePoint environment, regardless of whether the source of the breach is internal or external to their organization.

  6. Industry differences

In addition to differences among respondents in the different countries represented in this research, we provide an analysis of respondents in nine different industries in the study. Two industries of particular interest are financial services and health and pharma.

Consistent with previous studies conducted by Ponemon Institute, financial services seems to be most effective in dealing with security vulnerabilities. Awareness of information security concerns is consistently high in the financial services industry. A possible reason is that the myriad of compliance requirements also requires financial services companies to invest in security tools and develop governance processes at a higher rate than other industries. Typically, financial services companies employ a larger security team with a more diverse set of skills.


  7. Conclusions and recommendations

Despite evidence of data breaches and the increasing pressure from regulators, customers and shareholders to protect confidential data from accidental exposure, companies in this study do not seem to be taking security in file sharing and collaboration environments as seriously as they should.

Following are recommendations for creating a more secure environment for sensitive content.

  • Use automated tools to improve the organization’s ability to discover where sensitive or confidential information resides within SharePoint, file sharing and collaboration tools.


  • Instead of relying upon document owners to classify sensitive or confidential information, use automated tools to improve the ability to secure data in the SharePoint environment. Assign centralized accountability and responsibility for securing documents and files containing confidential information to the department with the necessary expertise, such as IT security.


  • Be aware that personnel and organizational changes can trigger security vulnerabilities. According to respondents, negligent or malicious behaviors can occur when employees leave the organization or there is downsizing. Consider the use of automated user access history with real time monitoring.


  • Conduct meaningful training programs that specifically address the consequences of negligent or careless file sharing practices. These types of behaviors include keeping documents or files no longer needed, receiving and not deleting files and documents not intended for the recipient, forwarding confidential files or documents to individuals not authorized to receive them, using personal or unauthorized file sharing apps to exchange confidential documents and files in the workplace and sending confidential files or documents to unauthorized individuals outside the organization.


  • Address the risks created by third parties, contractors and temporary workers by monitoring and restricting their access to sensitive or confidential information.


  • Have policies that restrict or limit the sharing of confidential documents and enforce those policies, especially to reduce the risks associated with allowing workers to have confidential information on their home computers and devices.


  • Conduct audits to determine the security vulnerabilities and non-compliance in the sharing and accessing practices of employees and third parties. The research shows that such audits can reveal security vulnerabilities in the protection of confidential documents and files.
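The automated discovery and classification the recommendations above call for can be pictured as pattern scanning over document text. A minimal sketch follows; the regexes are illustrative only, and production classifiers are far more robust (context-aware, with validation such as checksums):

```python
import re

# Illustrative patterns for common categories of sensitive data.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text: str):
    """Return the categories of sensitive data detected in a document's text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}
```

A discovery tool would run such a classifier across every SharePoint library and file share, then report where each category of sensitive content resides and who can access it.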

Download the full report, with accompanying infographics, at this link.

WannaCry a symptom of much deeper problems

Bob Sullivan

For a long time, many health care providers have been worried about the wrong thing — compliance rather than patient safety. Last week, we saw the most frightening example yet of the devastating consequences.

So far, one of the worst cyberattacks in recent memory has hit computers in 150 countries, Europol said, with WannaCry encrypting files and demanding ransom from victims. The software can run in 27 different languages, according to U.S. cybersecurity officials.

“Our emergency surgeries are running doors open, we can access our software but ransomware window pops up every 20-30 seconds so we are slow,” wrote @fendifille in a post about the attack from a U.K. medical center. 

A feared second spike of attacks from the WannaCry ransomware virus didn’t materialize on Monday, but there’s still plenty to worry about. New variants of the malware have been released, others are most certainly under development, and a Twitter account logging ransom payments shows victims are indeed coughing up roughly $300 in bitcoins to recover their files. As of Monday morning, payments totaled just over $50,000 — tiny compared to the damage caused, but a tidy sum for the criminals. Meanwhile, the required ransom jumped to $600 this week, according to security firm F-Secure.

A confluence of events led to the discovery and then spread of the devastating malware. The technology behind WannaCry was actually developed by the National Security Agency in the U.S., then stolen by hackers using the moniker Shadow Brokers. It attacks unpatched Microsoft Windows computers. Most modern Windows PCs were automatically updated to prevent the exploit, but older computers — those running Windows XP, for example — are no longer routinely supported by Microsoft. Many of those were unpatched, and an easy mark for WannaCry.

U.K. hospitals had thousands of these older machines; that’s why the virus hit hard there. I’ve reported earlier on why health care providers often have older computers. Many run single tasks, and are rarely updated, or even noticed, by IT staff.

Spread of the malware slowed for a variety of reasons during the weekend (including this heroic effort by a security researcher). But as workers returned Monday morning, a fresh round of infections was possible, authorities warned.

“It is important to understand that the way these attacks work means that compromises of machines and networks that have already occurred may not yet have been detected, and that existing infections from the malware can spread within networks,” wrote the U.K.’s National Cyber Security Centre. “This means that as a new working week begins it is likely, in the UK and elsewhere, that further cases of ransomware may come to light, possibly at a significant scale.”

Microsoft has now offered security patches for older Windows machines, and technicians spent the weekend racing to update those computers.

The real legacy of WannaCry will be the malware’s government-based origins. During the weekend, Microsoft called out the NSA for researching and hiding vulnerabilities, comparing this incident to the theft of a U.S. missile.

“This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017,” chief counsel Brad Smith wrote in a blog post. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.”

Does NSA bug hunting (and hoarding) make the world safer, or more dangerous?  WannaCry certainly hints at the answer.