Category Archives: Uncategorized

Cost of a data breach, 2017 — $225 per record lost, an all-time high

Larry Ponemon

IBM Security and Ponemon Institute are pleased to present the 2017 Cost of Data Breach Study: United States, our 12th annual benchmark study on the cost of data breach incidents for companies located in the United States. The average cost for each lost or stolen record containing sensitive and confidential information increased from $221 to $225. The average total cost experienced by organizations over the past year increased from $7.01 million to $7.35 million. To date, 572 U.S. organizations have participated in the benchmarking process since the inception of this research.

Ponemon Institute conducted its first Cost of Data Breach Study in the United States 12 years ago. Since then, we have expanded the study to include the following countries and regions:

  • The United Kingdom
  • Germany
  • Australia
  • France
  • Brazil
  • Japan
  • Italy
  • India
  • Canada
  • South Africa
  • The Middle East (including the United Arab Emirates and Saudi Arabia)
  • ASEAN region (including Singapore, Indonesia, the Philippines and Malaysia)

The 2017 study examines the costs incurred by 63 U.S. companies in 16 industry sectors after those companies experienced the loss or theft of protected personal data and the notification of breach victims as required by various laws. It is important to note that costs presented in this research are not hypothetical but are from actual data-loss incidents. They are based upon cost estimates provided by individuals we interviewed over a 10-month period in the companies that are represented in this research.

The number of breached records per incident this year ranged from 5,563 to 99,500 records. The average number of breached records was 28,512. We did not recruit organizations that have data breaches involving more than 100,000 compromised records. These incidents are not indicative of data breaches most organizations incur. Thus, including them in the study would have artificially skewed the results.

Why the cost of data breach fluctuates across countries

What explains the significant increases in the cost of data breach this year for organizations in the Middle East, the United States and Japan? In contrast, how did organizations in Germany, France, Australia, and the United Kingdom succeed in reducing the costs to respond to and remediate the data breach? Understanding how the cost of data breach is calculated will explain the differences among the countries in this research.

For the 2017 Cost of Data Breach Study: Global Overview, we recruited 419 organizations in 11 countries and two regions to participate in this year’s study. More than 1,900 individuals who are knowledgeable about the data breach incident in these 419 organizations were interviewed. The first data points we collected from these organizations were: (1) how many customer records were lost in the breach (i.e. the size of the breach) and (2) what percentage of their customer base did they lose following the data breach (i.e. customer churn). This information explains why the costs increase or decrease from the past year.

In the course of our interviews, we also asked questions to determine what the organization spent on activities for the discovery of and the immediate response to the data breach, such as forensics and investigations, and those conducted in the aftermath of discovery, such as the notification of victims and legal fees. A list of these activities is shown in Part 3 of this report. Other issues covered that may have an influence on the cost are the root causes of the data breach (i.e. malicious or criminal attack, insider negligence or system glitch) and the time to detect and contain the incident.

It is important to note that only events directly relevant to the data breach experience of the 419 organizations represented in this research and discussed above are used to calculate the cost. For example, new regulations, such as the General Data Protection Regulation (GDPR), ransomware and cyber attacks, such as Shamoon, may encourage organizations to increase investments in their governance practices and security-enabling technologies but do not directly affect the cost of a data breach as presented in this research.

The following are the most salient findings and implications for organizations:

The cost of data breach sets a record high. According to this year’s benchmark findings, data breaches cost companies an average of $225 per compromised record, of which $146 pertains to indirect costs, such as abnormal turnover or churn of customers, and $79 represents the direct costs incurred to resolve the data breach, such as investments in technologies or legal fees.
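As a quick arithmetic check on the per-record figures above (the split is the study’s; the code is only an illustration):

```python
# Illustrative breakdown of the study's per-record cost figures.
indirect_per_record = 146  # USD: abnormal turnover/churn of customers
direct_per_record = 79     # USD: resolution costs such as technology and legal fees

total_per_record = indirect_per_record + direct_per_record
print(total_per_record)  # 225, the reported average cost per compromised record

# Roughly two-thirds of the per-record cost is indirect.
indirect_share = indirect_per_record / total_per_record
print(round(indirect_share, 2))  # about 0.65
```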

The total average organizational cost of data breach reaches a new high. This year, we record the highest average total cost of data breach at $7.35 million. Prior to this year’s research, the most costly breach occurred in 2011 when companies spent an average of $7.24 million. In 2013, companies experienced the lowest total data breach cost at $5.40 million.

Measures reveal why the cost of data breach increases. The average total cost of data breach increased 4.7 percent, the average per capita cost increased by 1.8 percent and abnormal churn of existing customers increased 5 percent. In the context of this paper, abnormal churn is defined as a greater-than-expected loss of customers in the normal course of business. In contrast, the average size of a data breach (number of records lost or stolen) decreased 1.9 percent.

Certain industries have higher data breach costs. Heavily regulated industries such as health care ($380 per capita) and financial services ($336 per capita) had per capita data breach costs well above the overall mean of $225. In contrast, public sector organizations ($110 per capita) had a per capita cost of data breach below the overall mean.

Malicious or criminal attacks continue to be the primary cause of data breach. Fifty-two percent of incidents involved a malicious or criminal attack, 24 percent of incidents were caused by negligent employees, and another 24 percent were caused by system glitches, including both IT and business process failures.

Malicious attacks are the costliest. Organizations that had a data breach due to malicious or criminal attacks had a per capita data breach cost of $244, which is significantly above the mean. In contrast, system glitches or human error as the root cause had per capita costs below the mean ($209 and $200 per capita, respectively).

Four new factors are in this year’s cost analysis. The following factors that influence data breach costs have been added to this year’s study: (1) compliance failures, (2) the extensive use of mobile platforms, (3) CPO appointment and (4) the use of security analytics. The use of security analytics reduced the per capita cost of data breach by $7.70 and the appointment of a CPO reduced the cost by $4.30. However, extensive use of mobile platforms at the time of the breach increased the cost by $6.50 and compliance failures increased the per capita cost by $19.30.
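The four per-capita deltas above can be read as adjustments to the $225 baseline. A small sketch makes the direction of each factor explicit; note that treating the deltas as simply additive is my own assumption for illustration, not a claim about the study’s methodology:

```python
# Hypothetical sketch: applying the study's four new cost-factor deltas
# to the $225 per-capita baseline. Additive combination is an assumption.
BASELINE = 225.0

FACTORS = {
    "security_analytics": -7.70,   # reduces per-capita cost
    "cpo_appointed": -4.30,        # reduces per-capita cost
    "extensive_mobile_use": +6.50, # increases per-capita cost
    "compliance_failures": +19.30, # increases per-capita cost
}

def adjusted_per_capita(present):
    """Adjust the baseline per-capita cost by the deltas of the factors present."""
    return BASELINE + sum(FACTORS[f] for f in present)

# A company with analytics and a CPO lands around $213 per record.
print(adjusted_per_capita(["security_analytics", "cpo_appointed"]))
```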

The more records lost, the higher the cost of data breach. This year, for companies with data breaches involving fewer than 10,000 records, the average total cost of data breach was $4.5 million, while companies with the loss or theft of more than 50,000 records had a cost of data breach of $10.3 million.

The more churn, the higher the cost of data breach. Companies that experienced churn (the loss of existing customers) of less than 1 percent had an average total cost of data breach of $5.3 million, while those that experienced churn greater than 4 percent had an average total cost of data breach of $10.1 million.

Certain industries are more vulnerable to churn. Financial, life science, health, technology and service organizations experienced a relatively high abnormal churn rate, while public sector and entertainment organizations experienced a relatively low abnormal churn rate.

Detection and escalation costs are at a record high. These costs include forensic and investigative activities, assessment and audit services, crisis team management, and communications to executive management and board of directors. Average detection and escalation costs increased dramatically from $0.73 million to $1.07 million, suggesting that companies are investing more heavily in these activities.

Notification costs increase slightly. Such costs typically include IT activities associated with the creation of contact databases, determination of all regulatory requirements, engagement of outside experts, postal expenditures, secondary mail contacts or email bounce-backs and inbound communication set-up. This year’s average notification costs increased slightly from $0.59 million in 2016 to $0.69 million in this year’s study.

Post data breach costs decrease. Such costs typically include help desk activities, inbound communications, special investigative activities, remediation activities, legal expenditures, product discounts, identity protection services and regulatory interventions. These costs decreased from $1.72 million in 2016 to $1.56 million in this year’s study.

Lost business costs increase. Such costs include the abnormal turnover of customers, customer acquisition activities, reputation losses and diminished goodwill. The current year’s cost increased from $3.32 million in 2016 to $4.03 million. The highest lost business cost over the past 12 years was $4.59 million in 2009.

Companies continue to spend more on indirect per capita costs than direct per capita costs. Indirect costs include the time employees spend on data breach notification efforts or investigations of the incident. Direct costs refer to what companies spend to minimize the consequences of a data breach and assist victims. These costs include engaging forensic experts to help investigate the data breach, hiring a law firm and offering victims identity protection services. This year, the indirect costs were $146 and direct costs were $79.

The time to identify and contain data breaches impacts costs. In this year’s study, it took companies an average of 206 days to detect that an incident occurred and an average of 55 days to contain the incident. If the mean time to identify (MTTI) was less than 100 days, the average cost to identify was $5.99 million. However, if the mean time to identify was greater than 100 days, the cost rose significantly to $8.70 million. If the mean time to contain (MTTC) the breach was less than 30 days, the average cost to contain was $5.87 million. If it took 30 days or longer, the cost rose significantly to $8.83 million.
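The MTTI/MTTC thresholds above amount to a simple two-band lookup. This sketch encodes the study’s reported averages (the 100-day and 30-day cutoffs and the dollar figures come from the text; the function itself is just a convenience):

```python
# Two-band lookup of the study's average costs (in $ millions) by how
# quickly a breach was identified and contained.
def avg_cost_millions(mtti_days, mttc_days):
    """Return (identify_cost, contain_cost) using the study's reported bands."""
    identify_cost = 5.99 if mtti_days < 100 else 8.70
    contain_cost = 5.87 if mttc_days < 30 else 8.83
    return identify_cost, contain_cost

# This year's averages (206 days to identify, 55 to contain) both fall
# in the costlier bands.
print(avg_cost_millions(206, 55))  # -> (8.7, 8.83)
```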

To read the full report, click here. 

Disney, Viacom child privacy lawsuits try novel legal theory

Bob Sullivan

A California mom is suing Disney and some of its software partners for allegedly collecting personal information about her kids through mobile phone game apps. I was on the TODAY show this week talking about it.

Within days, the same mom also sued Viacom.

There’s a novel legal argument in these cases that I’m going to watch with great interest: an “intrusion upon seclusion” claim that I hadn’t seen before.  If the mom — and potentially others, if class-action status is granted — succeeds in winning such a claim and collecting damages, it could open doors to a new kind of privacy lawsuit.

The Disney allegations, which the firm denies, are what you’d expect.  The suit claims Disney software places unique identifiers on mobile phones which can track app users — both in and out of game play — so Disney’s partners can serve targeted advertising.  You can expect the usual debate about what constitutes personal information.  Corporations that want to target ads usually claim they anonymize such data. Privacy advocates say that’s bunk. With just a few data points, people can be pretty precisely identified.

Federal law — the Children’s Online Privacy Protection Act, or COPPA — has strict rules about what can be collected from kids under 13.  The Federal Trade Commission has weighed in on the issue, making clear that unique identifiers fall under COPPA, meaning they generally shouldn’t be used or collected when kids are involved.

The lawsuit claims Disney and its partners violated COPPA, but that doesn’t really get her far. COPPA does not provide a “private right of action.”  Consumers can’t sue “under COPPA” and get anything; they can merely ask a federal agency (the FTC) to fine the violator.

So lawyers in the case have seized upon the “intrusion upon seclusion” tort.  From what I can tell, this legal strategy is generally used when someone’s physical space is violated — as in sneaking into a home or hotel room.  It has been used in previous digital privacy cases, however, said Douglas I. Cuthbertson, a lawyer at the firm pressing the case. He cited invasion of privacy cases involving Vizio (smart TVs) and Nickelodeon (tracking the videos users watched). Both recently survived dismissal motions. It remains to be seen how much the cases are worth to plaintiffs, however.

According to Harvard’s publication of the American Law Institute’s guide to torts, here’s what “Intrusion Upon Seclusion” requires:

“The invasion may be by physical intrusion into a place in which the plaintiff has secluded himself, as when the defendant forces his way into the plaintiff’s room in a hotel or insists over the plaintiff’s objection in entering his home. It may also be by the use of the defendant’s senses, with or without mechanical aids, to oversee or overhear the plaintiff’s private affairs, as by looking into his upstairs windows with binoculars or tapping his telephone wires. It may be by some other form of investigation or examination into his private concerns, as by opening his private and personal mail, searching his safe or his wallet, examining his private bank account, or compelling him by a forged court order to permit an inspection of his personal documents.”

The four-pronged test to succeed in such a case, according to the Digital Media Law Project, involves:

  • First, that the defendant, without authorization, must have intentionally invaded the private affairs of the plaintiff;
  • Second, the invasion must be offensive to a reasonable person;
  • Third, the matter that the defendant intruded upon must involve a private matter; and
  • Finally, the intrusion must have caused mental anguish or suffering to the plaintiff.

In the Disney lawsuit, plaintiff’s lawyers use the alleged COPPA violation to establish that the data collection is offensive, and to pass several of those tests.

Eduard Goodman, global privacy officer at security firm Cyberscout, says he’s seen the intrusion upon seclusion legal strategy deployed in data breach lawsuits before.  But that fourth prong of the test is the trickiest to meet. (Note: I am sometimes paid to write freelance stories for Cyberscout)

“The problem, as with most all privacy torts in the U.S., what is the harm and damage here,” he said. Damages and financial compensation for torts like causing injury in a car accident are well established. What’s the harm in collecting someone’s personal data?  That’s yet to be determined.

 

Almost four times more budget is being spent on property-related risks vs. cyber risk

Larry Ponemon

This unique cyber study found a serious disconnect in risk management. What’s interesting is that the majority of companies cover plant, property and equipment losses, insuring an average of 59 percent and self-insuring 28 percent. Cyber is almost the opposite, as companies are insuring an average of 15 percent and self-insuring 59 percent.

The purpose of this research is to compare the relative insurance protection of certain tangible versus intangible assets. How do cyber asset values and potential losses compare to tangible asset values and potential losses from an organization’s other perils, such as fires and weather?

The probability of any particular building burning down is significantly lower than one percent (1%). However, most organizations spend much more on fire-insurance premiums than on cyber insurance despite stating in their publicly disclosed documents that a majority of the organization’s value is attributed to intangible assets. One recent concrete example is the sale of Yahoo!: Verizon recently reduced the purchase price by $350 million because of the severity of cyber incidents in 2013 and 2014.

The accelerating scope, scale and economic impact of technology, the concomitant data revolution that places unprecedented amounts of information in the hands of consumers and businesses alike, and the proliferation of technology-enabled business models all force organizations to examine the benefits and consequences of emerging technologies.

This financial-statement quantification study demonstrates that organizations recognize the growing value of technology and data assets relative to historical tangible assets, yet a disconnect remains regarding cost-benefit analysis resource allocation. Particularly, a disproportionate amount is spent on tangible asset insurance protection compared to cyber asset protection based on the respective relative financial statement impact and potential expected losses.

Quantitative models are being developed that evaluate the return on investment of various cyber risk management IT security and process solutions, which can incorporate cost-benefit analysis for different levels of insurance. As such, organizations are driven toward a holistic capital expenditure discussion spanning functional teams rather than being segmented in traditional silos. The goal of these models is to identify and protect critical assets by aligning macro-level risk tolerance more consistently.

How do organizations qualify and quantify the corresponding impact of financial statement exposure? Our goal is to compare the financial statement impact of tangible property and network risk exposures. A better understanding of the relative financial statement impact will assist organizations in allocating resources and determining the appropriate amount of risk transfer (insurance) resources to allocate to the mitigation of the financial statement impact of network risk exposures.

Network risk exposures can broadly include breaches of the privacy and security of personally identifiable information, theft of an organization’s intellectual property, confiscation of online bank accounts, creation and distribution of computer viruses, posting of confidential business information on the Internet, robotic malfunctions and disruption of a country’s critical national infrastructure.

We surveyed 709 individuals in North America involved in their company’s cyber risk management as well as enterprise risk management activities. Most respondents are either in finance, treasury and accounting (34 percent of respondents) or risk management (27 percent of respondents). Other respondents are in corporate compliance/audit (13 percent of respondents) and general management (12 percent of respondents).

All respondents are familiar with the cyber risks facing their company. In the context of this research, cyber risk means any risk of financial loss, disruption or damage to the reputation of an organization from some sort of failure of its information technology systems.

Despite the greater average potential loss to information assets ($1,020 million) compared to Property, Plant & Equipment (PP&E) ($843 million), the latter has much higher insurance coverage (62 percent vs. 16 percent).
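Multiplying the survey’s average potential losses by the reported coverage percentages makes the gap concrete (the figures are the study’s; the multiplication is my own illustration):

```python
# Rough comparison of insured exposure implied by the survey averages.
# Averages and coverage rates are from the text; combining them this way
# is an illustrative simplification.
info_assets_loss = 1020  # $ millions: average potential loss, information assets
ppe_loss = 843           # $ millions: average potential loss, PP&E

info_insured = info_assets_loss * 0.16  # 16 percent coverage
ppe_insured = ppe_loss * 0.62           # 62 percent coverage

print(round(info_insured))  # ~163 of a $1,020M information-asset exposure is insured
print(round(ppe_insured))   # ~523 of an $843M PP&E exposure is insured
```

In other words, the smaller tangible exposure carries more than three times the insured protection of the larger cyber exposure.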

Following are some of the key takeaways from this research:

  • Information assets are underinsured against theft or destruction based on the value, probable maximum loss (PML) and likelihood of an incident.
  • Disclosure of a material loss of PP&E and disclosure of information assets differ. Forty-five percent of respondents say their company would disclose the loss of PP&E in its financial statements as a footnote disclosure. However, 34 percent of respondents say a material loss to information assets does not require disclosure.
  • Despite the risk, companies are reluctant to purchase cyber insurance coverage. Sixty-four percent of respondents believe their company’s exposure to cyber risk will increase over the next 24 months. However, only 30 percent of respondents say their company has cyber insurance coverage.
  • Fifty-six percent of companies represented in this study experienced a material or significantly disruptive security exploit or data breach one or more times during the past two years, with an average economic impact of $4.4 million.
  • Eighty-nine percent of respondents believe cyber liability is one of the top 10 business risks for their company.

To read the full report, click here. 

 

 

What’s really scary about Petya ‘ransomware’ attack? It might not be ransomware

Bob Sullivan

The recent “ransomware” computer virus outbreak is over, but the speculation is just beginning. And it begins with those quotes around the term ransomware.

In late June, organizations in 64 countries around the globe, according to Microsoft, found themselves beating back a virus that’s been variously named Petya, GoldenEye, or even “NotPetya.”  Infected computers suffered devastating attacks that disabled the machines and made files useless — encrypted, with instructions for paying a ransom, in typical ransomware fashion.

But there was something very atypical about this attack.  The program itself was very sophisticated — far more sophisticated than WannaCry, last month’s most famous virus attack. Petya stole login credentials. It spread itself in multiple ways, meaning many folks who thought they were patched against Petya were not safe from it.  Microsoft’s analysis of the malware shows how much effort was put into the crafting of the program.

But the ransom payout mechanism was as fragile as a single email address. That was disabled almost immediately, meaning victims couldn’t contact the virus writers to save their files.

That makes no sense. So much work on the software, so little work on the ‘revenue’ side — unless Petya wasn’t really about stealing money. Plenty of security experts have alighted on this theory.

Kaspersky Labs was most assertive in its analysis: it refused to call the malware ransomware, saying it was designed only to destroy data, not to raise money.

“This malware campaign was not designed as a ransomware attack for financial gain. Instead, it appears it was designed as a wiper pretending to be ransomware,” Kaspersky wrote on its SecureList.com site.

Other analysts came to much the same conclusion.

“The attackers behind the NotPetya had to know that they were making it very difficult for anyone to actually get their files back.  Specifically, they provided just a single email address for victims to contact, to provide proof of payment,” said security firm SecureWorks in an email to me.

“Rather than being motivated by financial gain, these attackers created a disruptive attack masquerading as a ransomware campaign, and based on our investigation, it has been determined that (is) more likely,” SecureWorks said on its blog post about the attack. “While we recognize the possibility that this was a traditional ransomware campaign with some elements of poor execution, based on what we currently know… it is more likely that those apparent mistakes reflect elements of the campaign that were not important to the actor’s ultimate goal.”

So if the attack wasn’t about money, what was it about? Disruption, certainly.  But why?

It’s dangerous to speculate on attribution because it’s so easy to leave false flags during an attack. But the virus got its start in Ukraine, and infected the most machines there, experts agree. That’s certainly fodder for speculation.

“We saw the first infections in Ukraine, where more than 12,500 machines encountered the threat. We then observed infections in another 64 countries, including Belgium, Brazil, Germany, Russia, and the United States,” wrote Microsoft in its analysis.

There’s been rampant speculation that the attack actually began with infection of tax software used in Ukraine called MEDoc.  Criminals infected an automated update with the malware, which then was pushed out to unsuspecting victims, several outlets reported.

In its report, Microsoft said it had evidence that such a “supply chain attack” was indeed to blame.

“Microsoft now has evidence that a few active infections of the ransomware initially started from the legitimate MEDoc updater process,” it said.

Other circumstantial evidence suggests the attack targeted Ukraine. SecureWorks points out that the outbreak happened on the day before Ukrainian Constitution Day, which was Wednesday. It’s easy to raise the possibility that a nation-state or even rogue actors within it who are resentful of Ukrainian independence might seek to disrupt the nation on that day.

But, in the world of digital evidence, it’s hard to be conclusive about such attribution. The New York Times quoted an expert saying the I.P. address used in the attack was in Iran, who then pointed out that a hacker could have merely made it look like the attack came from Iran.  This reminds me of a line from a 1980s TV comedy about a faux murder: “The killer is either a member of the family, or not a member of the family.” By now, Internet users should be used to the idea that things often aren’t what they seem.

More important, the Petya attack is clear evidence that ransomware-style attacks are getting more sophisticated and more dangerous. Virus writers are learning from each other, and developing nastier payloads and better spreading mechanisms.  Pay attention now. If you have escaped WannaCry and Petya, consider yourself lucky. There is a high likelihood that a future ransomware attack will target you. There’s only one way to be ready:  Back up.  Make a copy of all the business files and photographs you care about and store them, physically, somewhere else.
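The backup advice above can start as small as a scripted copy to a second drive. Here is a minimal sketch (the paths in the example are placeholders, and this is no substitute for a verified, versioned, physically separated backup routine):

```python
# Minimal backup sketch: copy a folder tree into a dated folder on other storage.
# Paths are hypothetical placeholders; a real strategy should also verify copies,
# keep multiple versions, and store media offline/offsite.
import shutil
from datetime import date
from pathlib import Path

def backup(source: str, dest_root: str) -> Path:
    """Copy `source` into a dated subfolder under `dest_root`; return that path."""
    dest = Path(dest_root) / f"backup-{date.today():%Y%m%d}"
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

# Example (hypothetical paths):
# backup("C:/Users/me/Documents", "E:/backups")
```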

For technologists, perhaps the biggest fear of all is the notion of the supply chain attack, raised by Microsoft recently.  All computer users are now groomed to accept regular updates — ironically for security reasons — from software firms.  If hackers learn to reliably infiltrate this update process, they will have found a powerful new attack vector.

Here’s a to-do list for network administrators from BeyondTrust:

  • Remove administrator rights from end users
  • Implement application control for only trusted applications
  • Perform vulnerability assessment and install security patches promptly
  • Train team members on how to identify phishing emails
  • Disable application (specifically MS Office) macros

 

Medical Device Security: An Industry Under Attack and Unprepared to Defend

Larry Ponemon

Ponemon Institute is pleased to present the findings of Medical Device Security: An Industry Under Attack and Unprepared to Defend, sponsored by Synopsys. (Click here for full report.) The purpose of this research is to understand the risks to clinicians and patients created by insecure medical devices. We surveyed both device makers and healthcare delivery organizations (HDOs) to determine if both groups are in alignment about the need to address risks to medical devices. To ensure knowledgeable responses, all participants in this research have a role in or involvement with the assessment of and contribution to the security of medical devices.

Please join us on Wednesday, June 21 at 9 AM PT/12 PM ET to learn more about the findings of this research: https://www.brighttalk.com/webcast/11447/263163

In the context of this research, medical devices are any instrument, apparatus, appliance, or other article, whether used alone or in combination, including the software intended by its manufacturer to be used for diagnostic and/or therapeutic purposes. Medical devices vary according to their intended use. Examples range from simple devices such as medical thermometers to those that connect to the Internet to assist in the conduct of medical testing, implants, and prostheses.

The following medical devices are manufactured or used by the organizations represented in this research: robots, implantable devices, radiation equipment, diagnostic & monitoring equipment, networking equipment designed specifically for medical devices and mobile medical apps.

How vulnerable are these medical devices to attack, and why do both device makers and HDOs lack confidence in their security? Our survey shows 67 percent of device makers in this study believe an attack on one or more medical devices built by their organization is likely, and 56 percent of HDOs believe such an attack is likely. Despite the likelihood of an attack, only 17 percent of device makers and 15 percent of HDOs are taking significant steps to prevent attacks. Further, only 22 percent of HDOs say their organizations have an incident response plan in place in the event of an attack on vulnerable medical devices, while 41 percent of device makers say such a plan is in place.

In fact, patients have already suffered adverse events and attacks. Thirty-one percent of device makers and 40 percent of HDOs represented in this study say they are aware of these incidents. Of these respondents, 38 percent of respondents in HDOs say they are aware of inappropriate therapy/treatment delivered to the patient because of an insecure medical device and 39 percent of device makers confirm that attackers have taken control of medical devices.

Despite the risks, few organizations are taking steps to prevent attacks on medical devices. Only 17 percent of device makers are taking significant steps to prevent attacks and 15 percent of HDOs are taking significant steps.

The research reveals the following risks to medical devices and why clinicians and patients are at risk.

Both device makers and users have little confidence that patients and clinicians are protected. Neither device makers nor HDOs have much confidence that the security protocols or architecture built inside medical devices protect clinicians and patients. HDOs are more confident than device makers that they can detect security vulnerabilities in medical devices (59 percent vs. 37 percent).

The use of mobile devices is affecting the security risk posture in healthcare organizations. Clinicians depend upon their mobile devices to more efficiently serve patients. However, 60 percent of device makers and 49 percent of HDOs say the use of mobile devices in hospitals and other healthcare organizations is significantly increasing security risks.

Medical devices are very difficult to secure. Eighty percent of medical device manufacturers and users in this study say medical devices are very difficult to secure. Further, only 25 percent of respondents say security protocols or architecture built inside devices adequately protects clinicians and patients.

In many cases, budget increases to improve the security of medical devices would come only after a serious hacking incident. Respondents believe their organizations would increase the budget only if a potentially life-threatening attack took place. Only 19 percent of HDOs say concern over potential loss of customers/patients due to a security incident would result in more funds for medical device security.

Medical device security practices in place are not the most effective. Both manufacturers and users rely upon security requirements instead of more thorough practices such as security testing throughout the SDLC, code review, debugging systems and dynamic application security testing. As a result, both manufacturers and users concur that medical devices contain vulnerable code due to a lack of quality assurance and testing procedures and rush-to-release pressures on the product development team.

Most organizations do not encrypt traffic among IoT devices. Only a third of device makers say their organizations encrypt traffic among IoT devices and 29 percent of HDOs deploy encryption to protect data transmitted from medical devices. Of these respondents, only 39 percent of device makers and 35 percent of HDOs use key management systems on encrypted traffic.

Medical devices contain vulnerable code because of a lack of quality assurance and testing procedures as well as the rush to release. Fifty-three percent of device makers and 58 percent of HDOs say a lack of quality assurance and testing procedures leads to vulnerabilities in medical devices. Device makers say another problem is rush-to-release pressure on the product development team (50 percent). HDOs say accidental coding errors are a problem (52 percent).

Testing of medical devices rarely occurs. Only 9 percent of manufacturers and 5 percent of users say they test medical devices at least annually. Instead, 53 percent of HDOs do not test (45 percent) or are unsure if testing occurs (8 percent). Forty-three percent of manufacturers do not test (36 percent) or are unsure if testing takes place (7 percent).

Accountability for the security of medical devices manufactured or used is lacking. While 41 percent of HDOs believe they are primarily responsible for the security of medical devices, almost one-third of both device makers and HDOs say no one person or function is primarily responsible.

Manufacturers and users of medical devices are not in alignment about current risks to medical devices. The findings reveal a serious disconnect between the perceptions of device manufacturers and users about the state of medical device security, one that could prevent collaboration in achieving greater security. Some disconnects, as detailed in this report, include the following: HDOs are more likely to be concerned about medical device security and to raise concerns about risks. They are also far more concerned about the medical industry’s lack of action to protect patients/users of medical devices.

How effective is FDA guidance on medical device security? Only 44 percent of HDOs follow guidance from the FDA to mitigate or reduce inherent security risks in medical devices. Slightly more than half of device makers (51 percent) follow FDA guidance. Only 24 percent of device makers have recalled a product because of security vulnerabilities, with or without FDA guidance; only 19 percent of HDOs have done so.

Most device makers and users do not disclose privacy and security risks of their medical devices. Sixty percent of device makers and 59 percent of HDOs do not share information about security risks with clinicians and patients. If they do, it is primarily in contractual agreements or policy disclosure. Such disclosures would typically include information about how patient data is collected, stored and shared and how the security of the device could be affected.

Click here to read the study’s detailed findings. 

Remarkable look inside the underground 'fake news' economy shows how lucrative truth hacking can be

Bob Sullivan

Fake news is the new computer virus.

That’s the conclusion I came to when reading a remarkable new report from computer security firm Trend Micro (PDF). If you doubt the massive efforts of underground “hackers” to influence you — and the massive cash they can make doing so — flip through the pages of this report. A few years ago, it could have been written about the spam, computer virus or click fraud economies. Today, “news” has been weaponized, both for political gain and profit.

While Americans bicker over who might have gained the most from hacking in our last presidential campaign, they are missing the larger point: a massive infrastructure has been put in place from China to Russia to India to make money off polarization.  The truth is for sale in a way that most people couldn’t have imagined just a few years ago. As the report crucially notes: there’s no such thing as “moderate” fake news.  Whichever side you are on, if you play in extremism, you are probably helping make these truth hackers rich.

Here are some highlights from the report, but you should really read it yourself.

“(Russian)  forums offer services for each stage of the campaign—from writing press releases, promoting them in news outlets, to sustaining their momentum with positive or negative comments, some of which can be even supplied by the customer in a template. Advertisements for such services are frequently found in both public and private sections of forums, as well as on banner ads on the forums themselves.”

Many services have a crowdsource model, meaning users can either buy credits for clicks or “earn” them through participating in others’ campaigns.

“(One service) allows contributors to promote internet sites and pages, flaunting a 500,000-strong registered user base that can provide traffic (and statistics) from real visitors to supported platforms. It uses a coin system, which is also available in the underground.”

A price list claims the service can make a video appear on YouTube’s home page for about $600, or get 10,000 site visitors for less than $20.

Such services aren’t limited to Russia, of course.  According to the report, a Middle Eastern firm offers, “auto-likes on Facebook (for) a monthly subscription of $25; 2,200 auto-likes from Arabic/Middle East based users fetch $150 per month…(another service) has a customizable auto-comment function, with templates of comments customers can choose from. Prices vary, from $45 per month for eight comments per day, to $250 for 1,000 comments in a month.”

In China, the report says, “For … less than $2,600 spent on services in the Chinese underground, a social media profile can easily fetch more than 300,000 followers in a month. ”

It goes on to claim that fake news campaigns have incited riots and caused journalists to be attacked.  Here’s an example of the latter:

“If an attacker aims to silence a journalist from speaking out or publishing a story that can be detrimental to an attacker’s agenda or reputation, he can also be singled out and discredited by mounting campaigns against him.

“An attacker can mount a four-week fake news campaign to defame the journalist using services available in gray or underground marketplaces. Fake news unfavorable to the journalist can be bought once a week, which can be promoted by purchasing 50,000 retweets or likes and 100,000 visits. These cost around $2,700 per week. Another option for the attacker is to buy four related videos and turn them into trending videos on YouTube, each of which can sell for around $2,500 per video.

“The attacker can also buy comments; to create an illusion of believability, the purchase can start with 500 comments, 400 of which can be positive, 80 neutral, and 20 negative. Spending $1,000 for this kind of service will translate to 4,000 comments.

“After establishing an imagined credibility, an attacker can launch his smear campaign against his target.

“Poisoning a Twitter account with 200,000 bot followers will cost $240. Ordering a total of 12,000 comments with most bearing negative sentiment and references/links to fake stories against the journalist will cost around $3,000. Dislikes and negative comments on a journalist’s article, and promoting them with 10,000 retweets or likes and 25,000 visits, can cost $20,400 in the underground.

“The result? For around $55,000, a user who reads, watches, and further searches the campaign’s fake
content can be swayed into having a fragmented and negative impression of the journalist. A more
daunting consequence would be how the story, exposé or points the journalist wanted to divulge or raise will be drowned out by a sea of noise fabricated by the campaign.”

The key for all these attacks, the report notes, is appealing to the more extreme nature of our political discourse today.

“In the realm of political opinion manipulation, this tends to be in the form of highly partisan content. Political fake news tends to align with the extremes of the political spectrum; ‘moderate’ fake news does not really exist.”

The report offers tips for news consumers to avoid becoming unwitting partners in a fake news campaign. The target of fake news is the general public, the report notes, so “Ultimately, the burden of differentiating the truth from untruth falls on the audience.”

Here are some signs that the news you’re reading may be fake:
• Hyperbolic and clickbait headlines
• Suspicious website domains that spoof legitimate news media
• Misspellings in content and awkwardly laid-out websites
• Doctored photos and images
• Absence of publishing timestamps
• Lack of author, sources and data

 

Handle with Care: Protecting Sensitive Data in Microsoft SharePoint, Collaboration Tools and File Share Applications

Larry Ponemon

With the plethora of collaboration and file sharing tools in the workplace, the risk of data leakage due to insecure sharing of information among employees and third parties is growing. As discussed in this report, Handle with Care: Protecting Sensitive Data in Microsoft SharePoint, Collaboration Tools and File Share Applications in US, UK and German Organizations, sponsored by Metalogix, although security concerns about the use of collaboration and file sharing tools are high, companies are not taking sufficient steps to protect their sensitive data.

Without appropriate technologies, data breaches in the SharePoint environment can go undetected. Almost half of respondents (49 percent) say their organizations have had at least one data breach in the SharePoint environment in the past two years. However, 22 percent of respondents believe it was likely their organization had a data breach but are not able to know this with certainty.

This research reveals that employees are frequently and accidentally sharing files or documents with other employees or third parties not authorized to receive them. Employees are also receiving content they should not have access to, or failing to delete confidential materials as required by policy.

Although respondents express concern about the risk of a data breach stemming from use of collaboration and file sharing technologies, they are struggling to meet the challenge using their existing security processes and tools. Seventy percent of organizations believe that if their organization had a data breach involving the loss or theft of confidential information in the SharePoint environment they would only be able to detect it some of the time or not at all.

Most companies are not taking steps to reduce the risk through training programs, routine security audits or deployment of specific technologies that discover where sensitive or confidential information resides and how it is used. The survey found that important data governance practices are not in place for collaboration applications in general, and that when it comes to SharePoint specifically, security tools and practices are even more lacking.

We surveyed 1,403 individuals in the US, UK and Germany who are involved in ensuring the protection of confidential information. Respondents work in IT and IT security as well as lines of business in a variety of industries. On average, respondents say they spend approximately 28 percent of their time protecting documents and other content assets in SharePoint.

All companies represented in this research use SharePoint solutions for sharing confidential documents and files. Other solutions include Office 365 and cloud-based services such as Dropbox and/or Box. Other means of collaboration include shared network drives and other file sync and share solutions.

Key findings

In this section, we provide a deeper analysis of the findings. The complete audited findings are presented in the Appendix of this report. The report is organized according to the following seven topics:

  1. Sensitive content within the organization
  2. Risky user behavior
  3. Lack of collaboration in security and governance practices and tools
  4. Challenges in controlling risks in the SharePoint environment
  5. Country differences: United States, United Kingdom and Germany
  6. Industry differences
  7. Conclusions and recommendations

 

  1. Sensitive content within the organization

 Not knowing who is sharing sensitive data or where such data is stored increases the likelihood of a breach — 63 percent say the inability to know where sensitive data resides represents a serious security risk. Further, only 34 percent of respondents say their organizations have clear visibility into what file sharing applications are being used by employees at work.

These findings demonstrate the need for automated technologies that enable organizations to discover and classify sensitive or confidential information and monitor how it is used.

  2. Risky user behavior

Employee and third-party use of SharePoint are greater security concerns than external threat agents.

The pressure to be productive sometimes causes individuals to put sensitive data at risk. Negligent employees are inviting data loss or theft by accidentally exposing information (73 percent of respondents). Eighty-four percent of respondents are worried about third parties having access to data they should not see. Based on the findings, third parties and negligent insiders are more worrisome than external hackers (28 percent of respondents) or malicious employees (19 percent of respondents).

  3. Lack of collaboration in security and governance practices and tools

 Despite the volume of sensitive content stored in collaboration and file sharing tools and the acknowledgement of risky employee behavior, respondents do not have sufficient policies or security tools in place to prevent either accidental exposure or intentional misuse of information.

Only 28 percent of respondents rate their organizations as being highly effective in keeping confidential documents secure in the SharePoint environment. Consequently, as reported previously, almost half of respondents (49 percent) report their companies had at least one data breach resulting from the loss or theft of confidential information in the SharePoint environment in the past two years and 22 percent of respondents say they are not aware of a data breach, but one is likely to have occurred.

  4. Challenges in controlling risks in the SharePoint environment

If companies are aware of the risk of data breaches due to insecure collaboration and they don’t believe their current approaches are sufficient to keep content safe, what is preventing them from deploying more effective security solutions?

 A lack of integration is the biggest challenge to reducing SharePoint security risks.

 Seventy-nine percent of respondents say they do not have the right tools in place to support the safe use of sensitive or confidential information assets in SharePoint. Either they believe their tools are only somewhat effective (41 percent of respondents), not effective (49 percent of respondents) or they do not have enough information to know (10 percent of respondents).

  5. Country differences: United States, United Kingdom and Germany

The study identifies clear differences in attitudes and behaviors related to file sharing and collaboration tools among respondents in the United States (US), United Kingdom (UK) and Germany. As shown in Figure 17, German respondents are less concerned than US or UK respondents about the potential for security breaches in their SharePoint environment, regardless of whether the source of the breach is internal or external to their organization.

  6. Industry differences

 In addition to differences among respondents in the different countries represented in this research, we provide an analysis of respondents in nine different industries in the study. Two industries of particular interest are financial services and health and pharma.

Consistent with previous studies conducted by Ponemon Institute, financial services seems to be the most effective industry in dealing with security vulnerabilities. Awareness of information security concerns is consistently high in the financial services industry. A possible reason is that the myriad of compliance requirements forces financial services companies to invest in security tools and develop governance processes at a higher rate than other industries. Typically, financial services companies employ larger security teams with a more diverse set of skills.

 

7. Conclusions and recommendations

 Despite evidence of data breaches and the increasing pressure from regulators, customers and shareholders to protect confidential data from accidental exposure, companies in this study do not seem to be taking security in file sharing and collaboration environments as seriously as they should.

Following are recommendations for creating a more secure environment for sensitive content.

  • Use automated tools to improve the organization’s ability to discover where sensitive or confidential information resides within SharePoint, file sharing and collaboration tools.

 

  • Instead of relying upon document owners to classify sensitive or confidential information, use automated tools to improve the ability to secure data in the SharePoint environment. Assign centralized accountability and responsibility for securing documents and files containing confidential information to the department with the necessary expertise, such as IT security.

 

  • Be aware that personnel and organizational changes can trigger security vulnerabilities. According to respondents, negligent or malicious behaviors can occur when employees leave the organization or there is downsizing. Consider the use of automated user access history with real time monitoring.

 

  • Conduct meaningful training programs that specifically address the consequences of negligent or careless file sharing practices. These types of behaviors include keeping documents or files no longer needed, receiving and not deleting files and documents not intended for the recipient, forwarding confidential files or documents to individuals not authorized to receive them, using personal or unauthorized file sharing apps to exchange confidential documents and files in the workplace and sending confidential files or documents to unauthorized individuals outside the organization.

 

  • Address the risks created by third parties, contractors and temporary workers by monitoring and restricting their access to sensitive or confidential information.

 

  • Have policies that restrict or limit the sharing of confidential documents and enforce those policies, especially to reduce the risks associated with allowing workers to have confidential information on their home computers and devices.

 

  • Conduct audits to determine security vulnerabilities and non-compliance in the sharing and accessing practices of employees and third parties. The research shows that such audits can reveal security vulnerabilities in the protection of confidential documents and files.

Download the full report, with accompanying infographics, at this link.

WannaCry a symptom of much deeper problems

Bob Sullivan

For a long time, many health care providers have been worried about the wrong thing — compliance rather than patient safety.  Last week, we saw the most frightening example yet of the devastating consequences.

So far, one of the worst cyberattacks in recent memory has hit computers in 150 countries, Europol said, with WannaCry encrypting files and demanding ransom from victims. The software can run in 27 different languages, according to U.S. cybersecurity officials.

“Our emergency surgeries are running doors open, we can access our software but ransomware window pops up every 20-30 seconds so we are slow,” wrote @fendifille in a post about the attack from a U.K. medical center. 

A feared second spike of attacks from the WannaCry ransomware virus didn’t materialize on Monday, but there’s still plenty to worry about. New variants of the malware have been released, others are most certainly under development, and a Twitter account logging ransom payments shows victims are indeed coughing up roughly $300 in bitcoins to recover their files. As of Monday morning, payments totaled just over $50,000 — tiny compared to the damage caused, but a tidy sum for the criminals. Meanwhile, the required ransom jumped to $600 this week, according to security firm F-Secure.

A confluence of events led to the discovery and then spread of the devastating malware. The technology behind WannaCry was actually developed by the National Security Agency in the U.S., then stolen by hackers using the moniker Shadow Brokers. It attacks unpatched Microsoft Windows computers. Most modern Windows PCs were automatically updated to prevent the exploit, but older computers — those running Windows XP, for example — are no longer routinely supported by Microsoft. Many of those were unpatched, and an easy mark for WannaCry.

U.K. hospitals had thousands of these older machines; that’s why the virus hit hard there. I’ve reported earlier on why health care providers often have older computers. Many run single tasks, and are rarely updated, or even noticed, by IT staff.
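WannaCry spread by probing reachable machines on TCP port 445, the Windows SMB file-sharing port, and exploiting those left unpatched. As a rough, hypothetical illustration — not something from the reporting above — an administrator triaging a network after an outbreak like this might first check which hosts even expose that port. A minimal Python sketch, with placeholder addresses:

```python
import socket

def smb_port_open(host, port=445, timeout=1.0):
    """Return True if `host` accepts TCP connections on `port`.

    WannaCry probed port 445 (SMB) on reachable machines, so any
    host answering here is worth a patch-level check.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical usage: triage a handful of hosts on a local network.
if __name__ == "__main__":
    for host in ("127.0.0.1", "192.0.2.10"):  # placeholder addresses
        print(host, "exposes SMB:", smb_port_open(host, timeout=0.5))
```

An open port alone does not mean a machine is vulnerable — only that it warrants checking whether the relevant Microsoft patch (MS17-010) is installed.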

Spread of the malware slowed for a variety of reasons during the weekend (including this heroic effort by a security researcher). But as workers returned Monday morning, a fresh round of infections was possible, authorities warned.

“It is important to understand that the way these attacks work means that compromises of machines and networks that have already occurred may not yet have been detected, and that existing infections from the malware can spread within networks,” wrote the U.K.’s National Cyber Security Centre. “This means that as a new working week begins it is likely, in the UK and elsewhere, that further cases of ransomware may come to light, possibly at a significant scale.”

Microsoft has now offered security patches for older Windows machines, and technicians spent the weekend racing to update those computers.

The real legacy of WannaCry will be the malware’s government-based origins. During the weekend, Microsoft called out the NSA for researching and hiding vulnerabilities, comparing this incident to the theft of a U.S. missile.

“This attack provides yet another example of why the stockpiling of vulnerabilities by governments is such a problem. This is an emerging pattern in 2017,” chief counsel Brad Smith wrote in a blog post. “We have seen vulnerabilities stored by the CIA show up on WikiLeaks, and now this vulnerability stolen from the NSA has affected customers around the world. Repeatedly, exploits in the hands of governments have leaked into the public domain and caused widespread damage. An equivalent scenario with conventional weapons would be the U.S. military having some of its Tomahawk missiles stolen.”

Does NSA bug hunting (and hoarding) make the world safer, or more dangerous?  WannaCry certainly hints at the answer.

 

Corporate cyber-resilience — bad and getting worse

Larry Ponemon

Resilient, an IBM Company, and Ponemon Institute are pleased to release the findings of the second annual study on the importance of cyber resilience for a strong security posture. In a survey of more than 2,000 IT and IT security professionals from around the world, only 32 percent of respondents say their organization has a high level of cyber resilience—down slightly from 35 percent in 2015. The 2016 study also found that 66 percent of respondents believe their organization is not prepared to recover from cyber attacks.

In the context of this research we define cyber resilience as the alignment of prevention, detection and response capabilities to manage, mitigate and move on from cyberattacks. It refers to an enterprise’s capacity to maintain its core purpose and integrity in the face of cyberattacks. A cyber resilient enterprise is one that can prevent, detect, contain and recover from a myriad of serious threats against data, applications and IT infrastructure.

Cyber resilience supports a stronger security posture. In this report, we look at those organizations that believe they have achieved a very high level of cyber resilience and compare them to organizations that believe they have achieved only an average level of cyber resilience. This comparison reveals that a high level of cyber resilience reduces the occurrence of data breaches, enables organizations to resolve cyber incidents faster and results in fewer disruptions to business processes or IT services. The research also shows that a cybersecurity incident response plan (CSIRP) applied consistently across the entire enterprise with senior management’s support makes a significant difference in the ability to achieve a high level of cyber resilience.

Despite its importance for cyber resilience, the research demonstrates the continued challenges to implementing a CSIRP. Seventy-five percent of respondents admit they do not have a formal CSIRP applied consistently across the organization. Of those with a CSIRP in place, 52 percent have either not reviewed or updated the plan since it was put in place or have no set plan for doing so. Additionally, 41 percent of respondents say the time to resolve a cyber incident has increased in the past 12 months, compared to only 31 percent of respondents who say it has decreased.

Key components of cyber resilience are not improving. The key components of cyber resilience are the ability to prevent, detect, contain and recover from a cyber attack. As shown in Figure 1, respondents’ confidence in achieving these components has changed very little since last year’s study.

Last year, 38 percent of respondents rated their ability to prevent a cyber attack as high; this year 40 percent of respondents rated their ability to prevent a cyber attack as high.

Confidence in the ability to quickly detect and contain a cyber attack increased slightly from 47 percent of respondents to 50 percent of respondents and from 52 percent of respondents to 53 percent of respondents, respectively.

Confidence in the ability to recover from a cyber attack declined slightly. Last year, 38 percent of respondents rated their ability as high and this year, only 34 percent of respondents rate their ability as high.

Other key research findings

Investments in training, staffing and managed security services providers improve cyber resilience. In the past 12 months, only 27 percent of respondents say their cyber resilience has significantly improved (9 percent of respondents) or improved (18 percent of respondents). These respondents say if cyber resilience improved it was due to an investment in training of staff (54 percent of respondents) or engaging a managed security services provider (42 percent of respondents).

Business complexity is having a greater impact on cyber resilience. However, insufficient planning and preparedness remain the biggest barriers to cyber resilience. In 2015, 65 percent of respondents said insufficient planning and preparedness was the biggest barrier. This increased to 66 percent in 2016.

Complexity is having a greater impact on cyber resilience. In 2015, 36 percent of respondents said the complexity of IT processes was a barrier to a high level of cyber resilience and this increased significantly to 46 percent of respondents in 2016. More respondents also believe that the complexity of business processes has increased (47 percent of respondents in 2015 and 52 percent of respondents in 2016).

Incident response plans often do not exist or are ad hoc. Seventy-nine percent of respondents rate the importance of a CSIRP with skilled cybersecurity professionals as very important, and more organizations represented in this research have a CSIRP. However, only 25 percent of respondents say they have a CSIRP that is applied consistently across the enterprise (yet this does represent an increase from 18 percent in 2015). Similarly, the percentage of respondents who say their organizations do not have a CSIRP declined from 31 percent to 23 percent of respondents.

Cyber resilience is affected by the length of time it takes to respond to a security incident. Forty-one percent of respondents say the time to resolve a cyber incident has increased significantly (16 percent of respondents) or increased (25 percent of respondents). Only 31 percent of respondents say the time to resolve has decreased (22 percent of respondents) or decreased significantly (9 percent of respondents).

Human error is the top cyber threat affecting cyber resilience. When asked to rate seven IT security threats that may affect cyber resilience, respondents say the biggest threat is human error, followed by advanced persistent threats (APTs). Seventy-four percent of respondents say the incidents experienced involved human error. IT system failures and data exfiltration were also significant, according to 46 percent and 45 percent of respondents, respectively.

Malware and phishing are the most frequent compromises to an organization’s IT networks or endpoints. Forty-four percent of respondents say disruptions to business processes or IT services as a consequence of cybersecurity breaches occur very frequently (16 percent of respondents) or frequently (28 percent of respondents).

A lack of resources and no perceived benefits are reasons not to share. Why are some companies reluctant to share intelligence? The 47 percent of respondents who do not share threat intelligence say it is because there is no perceived benefit (42 percent of respondents), there is a lack of resources (42 percent of respondents) or it costs too much (33 percent of respondents).

Senior management’s perception of the importance of cyber resilience has not changed. A trend that has not improved is the recognition of how cyber resilience affects revenues and brand reputation. In 2015, 52 percent of respondents said their leaders recognize that cyber resilience affects revenues and this declined slightly to 47 percent in 2016. Similarly, in 2015, 43 percent of respondents said cyber resilience affects brand reputation, and this stayed virtually the same in 2016 (45 percent of respondents). Almost half (48 percent of respondents) recognize that enterprise risks affect cyber resilience, a slight increase from 47 percent of respondents in 2015.

Funding increases slightly for cybersecurity budgets. In 2015, the average cybersecurity budget was $10 million. In 2016, this increased to an average of $11.4 million. More funding has been allocated to cyber resilience-related activities. In 2015, 26 percent of the IT security budget was allocated to cyber-resilience activities. This increased to 30 percent in 2016.

Global privacy regulations drive IT security funding. When asked about regulations that drive IT security funding, most respondents point to the new EU General Data Protection Regulation (51 percent of respondents) or international laws by country (50 percent of respondents). Only 22 percent of respondents rate their organization’s ability to comply with the EU General Data Protection Regulation as high.

To read the rest of this report, visit ResilientSystems.com

Hacked Dallas sirens, maintained by office furniture movers, show U.S. not serious about critical infrastructure

We’d better not ignore these sirens.

Bob Sullivan

It’s tempting to ignore the warning sirens that blared Dallas out of bed Saturday night — but that would be a very serious mistake.

We hear so much about the importance of securing America’s critical infrastructure systems. Then you find out that the company responsible for maintaining the Dallas outdoor warning siren network — the one that was hacked Saturday night — also operates as an office furniture moving company.

In case you missed it, Dallas’s outdoor sirens screeched continuously overnight Saturday, harassing many of the city’s residents with the ultimate false alarm.  Initially believed to be a malfunction, the incident was conceded by city officials to be a hack by Sunday.

The sirens are supposed to warn residents about immediate danger, like tornadoes.

They did their job.

America just received perhaps the clearest warning ever that our essential services are comically easy to attack, putting our citizens in serious peril.  Will we listen, or just go back to sleep?

One can’t say it any plainer: When bricks start falling off a bridge into the water, you fix the bridge.  (Maybe.) That’s what we have here.

No one died Sunday morning. There was no blood, so there weren’t any dramatic pictures.  But there will be. It doesn’t take much imagination to see how easily this hacker prank (or, was it a test?) could have gone very wrong. For starters, it served as a denial of service attack on the city’s 911 system, which was overwhelmed with calls.

More than 4,400 911 calls were received from 11:30 p.m. to 3 a.m., the city said.  About 800 came right after midnight, causing wait times of six minutes. As far as we know, no one died because of this.  But that could have happened.

But that’s only the tip of the iceberg. Security experts I’ve chatted with have warned for years of a hybrid attack that could easily cause panic in a big city. Imagine if this hack had been combined with a couple of convincing fake news stories suggesting there was an ongoing chemical attack on Dallas.  Without firing a shot, you could easily see real catastrophes.  Take it a step further, and combine it with some kind of physical attack, and you have a serious, long-lasting incident on your hands. Death, followed by massive confusion, then panic, then a bunch of sitting ducks stuck in traffic.

Playing the “what-if” game sometimes leads to exaggeration. But it is called for when someone is about to ignore a warning sign.  So I asked security consultant Jeff Bardin of Treadstone71 to tell me why the Dallas incident should be taken seriously.

For one, it could have been a diversionary tactic.

“Testing the emergency systems, getting to a ‘cry wolf’ state of affairs, getting authorities into a full state of chaos and confusion while hacking and penetrating something else.  Kansas City shuffle,” he said.  “This looks to me to be a test of the systems. Could also be more than a test meaning what was hacked during this fake emergency?”

Dallas has been hit by “prank” hacks before.  Last year, traffic signs were hijacked to display funny messages like “Work is Canceled — Go Back Home.”  Very funny. But this means we know the city’s systems are being actively probed.  Any intelligent person has to consider what other systems this person or gang has toyed with. And, more important, what other cities have they toyed with.

“If I as a hacker can control the emergency systems, alarms, building security, HVAC, traffic lights, first responder system, medical facility interfaces, law enforcement, etc., all the normal physical systems that now have internet interfaces, I can control the whole of the city,” Bardin said. “What else was penetrated during this ‘test?’  How many other major cities in the US operate the same way? What was injected into these systems during the test for later access?”

Hopefully, the Dallas siren hacker is just a kid who found flaws in a very old, insecure system and had some fun for a night, Bardin said. Perhaps it was someone trying to “prove a point,” though in a careless way that put lives in danger.

Point not made.  Life is full of disasters averted, then ignored. The planes that almost collided. The car accident narrowly averted. The key that was lost (without a duplicate!) but was found.

It’s 48 hours after a major U.S. city had its sirens blaring all night long. Are you hearing about federal investigations? Are you hearing about executive orders around critical infrastructure? (You did. But then, you didn’t.)

“Amazing this is not getting headlines,” Bardin said. “Not amazing that they have the uninitiated managing the systems who have a side job in furniture. Perfect. Just f**ing perfect.”

As for the furniture-moving company behind the sirens, it’s probably unfair to blame them.  The Dallas Morning News reported that Michigan-based West Shore Services was in charge of maintaining the system.

Indeed, here is the resolution from the city council back in 2015 authorizing payment of $567,000 to West Shore during a six-year period.  Yup, that’s around $100,000 annually, for repair and maintenance. And that’s a MAXIMUM.  I suspect it includes the price of replacing broken equipment. I’d think it doesn’t include penetration testing. I’m sure it doesn’t include overhauling the system from its old, practically indefensible architecture.

No wonder the firm needs a side business.

An operations manager for West Shore told the Dallas Morning News he didn’t know anything about the incident.  The firm didn’t respond to my questions sent via email.

But the biggest question of all:  Will anyone hear this warning siren? Or will we all go back to sleep, like Dallas did?

UPDATE 6:30 p.m. 4/10/17 – Federal Signal Corporation, which made the Dallas sirens but does not currently manage them, said it was working with authorities to determine what happened.

“The City of Dallas, Texas, has multiple outdoor warning sirens installed across the Dallas area. The outdoor warning sirens were manufactured by and purchased from Federal Signal Corporation …  Although, Federal Signal does not currently have the contract to maintain the City of Dallas outdoor warning siren system, the company is actively working with the Dallas Office of Emergency Management to determine the cause of the unintended activation,” the firm said in a statement emailed to me.

Dallas Mayor Mike Rawlings seemed to get it, and called for serious investment in the wake of the attack.

“This is yet another serious example of the need for us to upgrade and better safeguard our city’s technology infrastructure,” he wrote on his Facebook page. “It’s a costly proposition, which is why every dollar of taxpayer money must be spent with critical needs such as this in mind. Making the necessary improvements is imperative for the safety of our citizens.”

Let’s hope someone listens, and those sirens are heard far outside Texas.